Author: Curator
The AI Doc is an overwrought hype piece for doomers and accelerationists alike
We are in the thick of a massive push to incorporate generative AI into almost every aspect of our lives, but it is still easy to be confused about what it is and how it works. It doesn’t help that many of gen AI’s proponents and detractors both speak about it with a feverish hyperbole […]
Nintendo is suing the US government for a refund of Trump’s illegal tariffs
Nintendo of America is suing the US government over President Trump’s tariffs and is demanding a “prompt refund, with interest” of any duties that it has paid, according to a complaint filed in the US Court of International Trade, as reported previously by Aftermath. The Supreme Court ruled last month that Trump’s use of the […]
87% of failed biometric verifications in Southern Africa due to AI spoofing: Smile ID

A new report spotlights deepfake fraud posing an acute problem for Africa.
Digital identity, banking and e-government services are being used to streamline financial inclusion and the disbursement of funding, and to help underserved communities access healthcare and other essential public services.
Smile ID’s 2026 Digital Identity Fraud Report has some jaw-dropping findings. In Southern Africa, almost nine in ten (87 percent) rejected biometric verification attempts were connected to AI-assisted impersonation and spoofing. The report says “fraud is overwhelmingly biometric in Southern Africa,” a region that encompasses countries including Botswana, South Africa and Zimbabwe.
Meanwhile, Africa’s percentage of adults owning a financial account has risen from 34 percent to nearly 60 percent over the past decade. However, identity verification systems have largely stood still — tied to a one-time checkpoint model, Smile ID warns. Fraud has accelerated with the arrival of AI.
The figures were compiled from 200 million identity checks by Smile ID’s customer base across dozens of industries and 35 countries in 2025. The analysis covers the full identity lifecycle — onboarding, authentication, and high-risk account events — examining how fraud manifests at different stages of trust.
Smile ID found more than 160,000 fraudulent verification attempts in a single month in 2025, all of which were traced back to just 100 facial identities. “Some of these faces appeared over 12,000 times across multiple platforms,” the report says. Another case saw attackers use the same identity for more than a thousand account registration attempts within a space of 30 minutes.
“The most consequential fraud attacks today are targeted account takeovers (ATOs) — not fake IDs or isolated spoofs, but coordinated operations that compromise the capture pipeline, reuse real identities at scale, and exploit moments after approval when controls are lighter through highly scalable AI-powered tooling,” the report claims.
This is a professionalized process: fraudsters come in later in the customer journey, often collude with insiders, and draw on large facial biometric and identity data sets. AI-powered tools are employed to analyze the data and to scale attacks. Generative AI has lowered the barriers to entry, producing high-quality synthetic documents and imagery and automating biometric manipulation that was previously uncommon or costly.
Now that the cost of each attempt is marginal, approaching zero, attackers can reuse the same identity assets across hundreds of thousands of attempts. Defenses built for a previous era are straining under the barrage. “Fraud defences must now assume abundance and use networked intelligence to spot patterns and turn the volume generated by fraudsters’ attacks against them,” the Smile ID report argues.
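The report’s “turn the volume against them” idea can be illustrated with a toy reuse counter: when the same biometric identity recurs far more often than any legitimate user could, the repetition itself becomes the signal. This is a hypothetical sketch, not Smile ID’s actual pipeline; the threshold, data shapes and names are assumptions.

```python
from collections import Counter

# Assumed threshold: attempts per identity before flagging. The report
# describes 100 facial identities behind 160,000+ attempts, with some
# faces appearing over 12,000 times across platforms.
REUSE_THRESHOLD = 50

def flag_reused_identities(attempts):
    """attempts: iterable of (template_hash, platform) tuples.

    Returns {template_hash: count} for identities seen at abnormal volume.
    """
    counts = Counter(template_hash for template_hash, _ in attempts)
    return {h: n for h, n in counts.items() if n >= REUSE_THRESHOLD}

attempts = [("face_a", "bank1")] * 60 + [("face_b", "bank2")] * 3
print(flag_reused_identities(attempts))  # {'face_a': 60}
```

A production system would compare privacy-preserving template hashes across an ecosystem of platforms rather than raw counts on one service, but the principle is the same: scale that benefits the attacker also exposes them.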
Smile ID found that nearly 90 percent of verifications rejected for suspected fraud in 2025 came through mobile SDK integrations, up from 15 percent in 2023 and 65 percent in 2024. Mobile SDKs can capture additional on-device signals, such as image integrity and user behavior, that API-only verification flows cannot see. Biometric injection attacks have surged to over 100,000 per month, with Smile ID detecting signs of emulators, tampered capture pipelines and virtual cameras.
Continuous defense and network intelligence
Mark Straub, CEO of Smile ID, comments that defense has to move beyond just the end of the pipeline. “Fraud is no longer a ‘KYC’ problem — it is a continuous cybersecurity challenge,” he says.
“Effective defence now requires network intelligence: By leveraging these privacy-preserving indicators throughout the customer lifecycle, we enable real-time adaptation. Identity has entered the security era, where eco-system wide protection is essential to safeguarding the individual,” he believes.
Modern fraud defense should operate across four interconnected zones, Smile ID argues, which together form a continuous security infrastructure: trusted capture; verification and signal extraction; enforcement and feedback; and intelligence and pattern detection, all of which flow into one another. Three strategic priorities build on this further.
Of these, priority two, hardening authentication at high-value moments, is perhaps the most granular. In practical terms, it means applying multi-factor authentication at high-risk moments: requiring biometric verification in addition to a one-time password (OTP) for password resets, device changes or high-value transactions.
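As a rough illustration of what this step-up policy might look like in application code (the event names and risk classification here are assumptions, not taken from the report):

```python
# Hypothetical step-up authentication policy: high-risk account events
# require biometric verification on top of an OTP; routine events do not.
HIGH_RISK_EVENTS = {"password_reset", "device_change", "high_value_transfer"}

def required_factors(event: str) -> set:
    """Return the set of authentication factors required for an event."""
    factors = {"otp"}
    if event in HIGH_RISK_EVENTS:
        factors.add("biometric")  # e.g. a live selfie match
    return factors

print(sorted(required_factors("password_reset")))  # ['biometric', 'otp']
print(sorted(required_factors("balance_check")))   # ['otp']
```

The design point is that the check keys off the *moment* rather than the user: controls tighten exactly where the report says attackers concentrate, in the lightly guarded window after initial approval.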
The other two priorities are lifecycle intelligence, revealing where fraud will concentrate, and trusted capture, with capture integrity enabling richer signals. “Fraud now operates as repeatable, networked infrastructure,” the report concludes. “Defence must do the same.”
“This approach — a Network Defence — connects signals across the identity lifecycle, detects coordination that isolated systems miss, and strengthens with every verification.”
Smile ID’s 2026 Digital Identity Fraud in Africa Report can be downloaded here.
California’s OS-based age verification law challenges open-source community

California’s new online safety bill, AB 1043 (the Digital Age Assurance Act), adopts a declared age model for operating systems. Under the law, which is set to take effect on January 1, 2027, when a user sets up a new device, the operating system is required to ask for their age or date of birth. This declared age will be used to curate what’s available on the app store, and can be shared with developers on request to ensure age-appropriate experiences.
An article in PC Gamer points out that this “sounds incompatible with many of today’s open source software, including Linux.” The open source community is wrestling with the problem of how to comply with such laws without violating core privacy principles.
The piece muses on technical solutions, quoting Jef Spaleta, project leader of the popular Linux distribution Fedora, who says “this might be as simple as extending how we currently map uid to usernames and group membership and having a new file in /etc/ that keeps up with age.”
Or, “it might be as simple as that and we extend the administrative cli and gui tools to populate that file as part of account creation. That might be simplest and it solves the problem for the full ecosystem of Linux OSes. Then applications just have to start choosing to look at the file.” To Spaleta, this suggests a D-Bus Service, which allows communication between programs.
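Spaleta’s suggestion can be sketched as a flat file in /etc/ mapping local usernames to dates of birth, analogous to how /etc/passwd maps usernames to uids, which applications (or a D-Bus service wrapping the lookup) could consult. The file path and the “username:YYYY-MM-DD” format below are hypothetical; no such standard exists yet.

```python
# Minimal sketch of a hypothetical "/etc/age" parser. Format assumed:
# one "username:YYYY-MM-DD" entry per line; blanks and '#' comments ignored.
from datetime import date

def parse_age_file(lines, today=None):
    """Parse age-file entries into {username: age_in_years}."""
    today = today or date.today()
    ages = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        user, dob = line.split(":", 1)
        y, m, d = (int(x) for x in dob.split("-"))
        # Year difference, minus one if the birthday hasn't happened yet
        ages[user] = today.year - y - ((today.month, today.day) < (m, d))
    return ages

print(parse_age_file(["alice:2008-06-15"], today=date(2026, 1, 2)))  # {'alice': 17}
```

In the ecosystem Spaleta describes, a desktop app would read this once at account creation or login rather than prompting the user itself, which is what makes a shared D-Bus service attractive.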
Ubuntu, another Linux distribution, is also unsure of how to respond, and says it is consulting with its lawyers before making a plan.
California age law does not compute with DB48X
The point is, in putting the onus on operating systems to collect age data, AB 1043 is causing headaches for open source nerds. Both California’s bill and a like-minded bill in Colorado, SB26-051, have drawn the ire of the creators of an open source calculator, DB48X, described as “a project to rebuild and improve upon the ‘legendary’ HP48 family of calculators and RPL programming language, and for modding newer calculators to utilise it.”
Rather than comply, DB48X has opted to restrict access for Californians and Coloradans when (and, in Colorado’s case, if) their laws come into effect. A legal-notice file for the project says “DB48X is probably an operating system under these laws. However, it does not, cannot and will not implement age verification.”
Per PC Gamer, “you know you’ve messed up when you’ve angered the math lot.”
The calculator guys are not alone. Ground News has a roundup of articles expressing variations of grievance. WebProNews says California’s law “forces a surveillance mandate on every developer – including those who can’t comply.” The Daily Economy says “California is embedding age verification directly into digital devices. For those of us concerned with personal liberties, this is an emergency.”
No verification required, actually
PC Gamer also notes the challenges of enforcing a law that means “the job of checking whether people have installed its OS falls onto Californian authorities to deal with.”
“Both Californian and Coloradan bills set out civil fines of $2,500 for unintentional breaches and $7,500 for intentional breaches, but how would the majority of breaches be discovered in the first place?”
Another criticism asks why California does not specify what level or extent of age verification it requires. If it’s just a date of birth, Spaleta says, “a simple dropdown interface may suffice,” meaning “the effectiveness of such a system appears to be based on an honour system.” Self-declaration at the root negates the entire process; this would-be age verification law, in fact, does not mandate age verification at all. Technically, it’s not even age assurance.
California’s law is less than a year away from taking effect, and Colorado’s bill (which more properly labels its goal “age attestation”), if passed, would take effect January 1, 2028. Ironically, the piece ends up lamenting the speed at which new technology is becoming normalized: the laws, it says, are “coming at a time when age verification is being rolled out more widely across the globe and facing stern criticism, such as an open letter from scientists and researchers that notes the many pitfalls of ill-thought-out verification methods.”
The letter in question has provided a common reference for those opposed to age assurance laws and technologies for various reasons. The open source community now joins social media tycoons, privacy advocates and pornographers in opposing such laws, which they say are invasive and dangerous – but which lawmakers insist parents are asking for, as they work to find the right legal model.
The Trump administration says it can’t process tariff refunds because of computer problems
The US Customs and Border Protection says it currently can’t comply with an order to process billions of dollars in refunds stemming from tariffs imposed by President Donald Trump. In a filing on Friday, CBP executive director Brandon Lord says the agency’s digital import processing system is “not well suited to a task of this […]