Category: Biometrics
One check to rule them all: quest for reusable age assurance to ramp up in 2026

At first, the idea of reusable identity might sound redundant. After all, what is identity if not a stable set of characteristics that can be referred to as needed to prove one is who they say they are? In the quest to harness digital identity, however, reusability has a more specific, technical meaning – even more so when it comes to age assurance.
While there are varying models, the core idea of a reusable age check is that you only have to verify your age once, and can then apply proof of that single verification across different platforms and services. As such, it is closely related to interoperability, but there are subtle differences. Re-use lets a user rely, on another website, on a check already completed with the same provider. Interoperability lets a check completed with one provider be recognized by other providers operating independently for other websites. Some systems use digital tokens that live on a user’s device. Others build on the passkey model of public-key cryptography. The technology is still fresh and the market open, even as standards and testing begin to shape its boundaries.
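To make the reusable model concrete, here is a minimal sketch, assuming a hypothetical token format rather than any named vendor’s design: an age-check provider signs a claim once, the token lives client-side, and any relying site that trusts the provider’s public key can accept it later without re-running the check or learning who the user is.

```typescript
// Minimal sketch of a reusable proof-of-age token. The format and field names
// are illustrative assumptions, not any provider's actual scheme.
import { generateKeyPairSync, sign, verify } from "node:crypto";

interface AgeToken {
  claim: { overAge: number; issuedAt: number; expiresAt: number };
  signature: string; // base64 Ed25519 signature over the serialized claim
}

// Provider side: run the age check once, then issue a signed, anonymous token.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

function issueToken(overAge: number, ttlMs = 30 * 24 * 3600 * 1000): AgeToken {
  const claim = { overAge, issuedAt: Date.now(), expiresAt: Date.now() + ttlMs };
  const signature = sign(null, Buffer.from(JSON.stringify(claim)), privateKey).toString("base64");
  return { claim, signature }; // stored on the user's device or in the browser
}

// Relying-site side: accept the token instead of repeating the age check.
function acceptToken(token: AgeToken, requiredAge: number): boolean {
  const validSig = verify(
    null,
    Buffer.from(JSON.stringify(token.claim)),
    publicKey, // in practice, the provider's published verification key
    Buffer.from(token.signature, "base64"),
  );
  return validSig && token.claim.expiresAt > Date.now() && token.claim.overAge >= requiredAge;
}

// One verification, re-used by any site that trusts the same provider key.
console.log(acceptToken(issueToken(18), 18)); // true
```

An interoperable network goes a step further, letting checks issued by one provider be recognized by others, which is the gap initiatives like AgeAware aim to fill.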
Several options have already emerged, but as legislation matures in some places and takes root in others, the field is sure to grow. The various factors shaping the field go beyond legislation to encompass payment models for providers, sociocultural attitudes toward pornography and centralized government, privacy and surveillance concerns, youth mental health, and more. In the short to medium term, being able to prove your age once and use that proof repeatedly is going to come in very handy.
Two major reusable age check projects circle one another
Among those currently proffering reusable age verification technology are a smattering of biometrics firms and a couple of initiatives tied to networks of providers. Lithuania-based Ondato has offered its product, OnAge, since 2024. UK firm Yoti offers its Yoti Keys, which are proof-of-age tokens that live on a browser. Portuguese firm AgeVerif claims to offer reusable age checks certified against IEEE 2089.1-2024 by the Age Check Certification Scheme (ACCS). French company Needemand has combined gestural biometrics with a PIN code for its reusable model, BorderAge.
In an email to Biometric Update, Jean Michel Polit, chief business officer for Needemand, says BorderAge “features a unique, proprietary ZKP PIN code technology that allows the user to re-use the result of the initial age verification with hand movements and that ‘survives’ private browsing without using any personal data.”
“This means that to be able to re-use the initial age verification a user will not have to create an account with us, or download an app, even in private browsing. This is a major benefit, since forcing users to create an account or download an App means adding friction that can turn them away.”
The last quarter of 2025 saw the launch of two new initiatives tied to existing entities and networked models. Another French firm, Opale.io, invented a system it calls AgeKey, which is based on passkey technology and leverages on-device biometrics for authentication. The company was acquired by k-ID, the Singapore-headquartered firm that offers automated compliance and age controls. Subsequently, the combined entity launched OpenAge, a reusable proof of age system based on the AgeKey model, which allows users to prove their age simply by unlocking their screen.
A day or so after OpenAge launched, a project gestated by the euCONSENT ASBL consortium also went live to offer reusable age checks. AgeAware is a “standards-based, anonymised, interoperable age verification network” that “allows global providers of age assurance technologies to recognise one another’s age checks, under the governance of euCONSENT.” The euCONSENT project is technically a non-profit organization, but its members include providers Yoti, AgeChecked and VerifyMy, as well as the Age Verification Providers Association.
On launch, there was a degree of tension between the two entities, over fears that toes might get stepped on. The recent announcement that Meta has signed on to OpenAge suggests it has the necessary momentum to become a go-to solution.
However, following the launch announcement, clarifications ensued, and the result appears to be moving in the direction of a solution that factors in everybody – independent age verification and age estimation providers that don’t want to get boxed out of a dominant system, k-ID and its AgeKeys, the advocacy bodies that want to make sure everyone is getting paid fairly for services rendered, and the regulatory bodies whose policies prompted all this to begin with.
The Age Verification Providers Association (AVPA), the industry body representing private age verification, age estimation and age inference providers, has framed interoperability as a fundamental issue for age checks.
“We recognised early on that to compete against both government-issued digital ID and BigTech, we would need to be at least as convenient as them,” says Iain Corby, executive director of AVPA, in comments to Biometric Update. “We welcome the innovation that is now deploying new options for interoperability, and will continue to champion it.”
Corby warns, however, that in the legislative rush to impose age assurance rules on various sites, the industry must remember to prioritize user experience. “As more platforms come into scope, or simply begin to comply with existing laws, we must not let age checks become cookie-popups on speed, as the public will eventually push back,” he says. “So the onus is on the age verification industry to pre-empt that, and we are making excellent progress.”
National digital identity schemes loom over private sector
Enter the government, to say, “We’re here to help.” As nations move ahead on plans to implement national digital ID systems, and others (like India’s Aadhaar) become established as core infrastructure, there have arisen questions about whether government wallets could be vehicles for age assurance – and attendant fears among private firms that they could be rendered irrelevant.
The situation is particularly prickly in the UK, where companies certified under the Digital Identity and Attributes Trust Framework (DIATF) have been pleading their case to a revolving cast of policymakers. The UK digital ID debate took center stage in a recent episode of the Biometric Update Podcast, wherein the Association of Digital Verification Professionals (ADVP), which counts a number of age assurance providers among its members, argued in favor of DIATF-certified firms, while the Tony Blair Institute made the case for government-led digital identity.
Some question whether the government should shoulder the cost of keeping kids off porn sites and social media platforms. Others fear Big Brother tracking their secret online habits. On the other hand, when it comes to data privacy, trust in digital platforms is not much better, and suspicion lingers, no matter how clearly the industry explains the French double-blind model or the development of global standards to govern age assurance systems.
Sharing members makes for complex relationship
The battlefield would thus seem to host three armies: the government, the private digital age assurance sector and the masses of average internet users who just want to doomscroll or look at some skin, but also agree that it probably shouldn’t be so easy for six-year-olds to learn about the versatility of ball gags. However, the ample overlap – between methods and members and frameworks and goals – defies clean boundaries.
One thing that seems clear is that people do not enjoy having to take additional steps to get to the content they want – colloquially, “friction.” The number of people that enjoy having to click on a cookie popup is zero. Constant age checks to log onto social media or Pornhub are effectively guaranteed to kill anyone’s mood, choke traffic, and drive users to alternative platforms.
How the world gets to trusted, reusable proof of age is still being decided. But the ground is beginning to stabilize: this month saw the publication of ISO/IEC 27566-1:2025 – Age assurance systems, the first global standard covering age assurance technology. By the end of 2026, the landscape will look different, in that we will begin to see which parts of the age assurance ecosystem are likely to endure, and which are past their use-by date.
One thing is certain: no one intends to stop talking about how to save the children.
“Protecting children is a societal issue that will continue to gain momentum in 2026,” says Needemand’s Polit. “Age assurance will become a reality in an increasing number of countries and use cases. On the other hand, the media coverage of the frequent personal data breaches involving major platforms will increasingly steer web users toward age assurance solutions that do not rely on personal data.” His company is preparing to launch another product in 2026 that offers an alternative to BorderAge’s hand gesture system.
Other solutions promise further innovation. Which is to say, reusable, interoperable age checks are among the age assurance industry’s first major milestones. There are many more to come.
AI, fraud and market timing drive biometrics consolidation in 2025 … and maybe 2026

Biometric Update reported on nearly 50 acquisitions in total during calendar 2025, about 10 more than in 2024, which had a handful more than 2023.
There was “a significant amount of consolidation” among digital identity technology suppliers in the assessment of Goode Intelligence CEO and Chief Analyst Alan Goode. He told Biometric Update in an interview that technology providers in current identity verification stacks are under pressure to remake their offerings for the inflection point that verifiable credentials are about to create, in one of several market conditions encouraging deal-making.
The crop of consolidation deals ranged from Incode’s acquisition of AuthenticID, largely to add document verification capabilities to its digital identity platform, to DNP adding Laxton’s international market reach and experience to its identity management core, and AI parking lot operator Metropolis bringing Oosto’s facial recognition in-house.
Liminal CIO Filip Verley tells Biometric Update he sees the market as likely to remain active in the first half of 2026, due to the same forces that shaped the past year.
Fraud has overwhelmed organizations of all kinds, and Verley emphasizes the degree to which this has pulled enterprise teams and market players in adjacent areas together.
AI has contributed to this wave of fraud in several important ways. The barrier to entry has been lowered, and forgeries are now scalable in a way cybercriminals could only have dreamed of just a few years ago. The proliferation of generative AI tools has also changed the state of the art in biometric liveness detection, with injection attack detection (IAD) now table stakes for secure remote user onboarding the way presentation attack detection (PAD) has been for the last several years.
And even then, more signals are needed to have a significant impact on mountainous fraud rates.
These intertwined developments are occurring right at the same time as many market players have reached natural decision-making junctures.
Interest rates are up, identity technology providers have tested their products in the new AI threat environment and investors are increasingly looking to cash out.
Goode’s analysis of industry consolidation at the end of last year highlighted the role of digital identity and biometrics in digital transformation, such as in the aviation industry.
IN Groupe’s acquisition of Idemia Smart Identity had already been announced, as had LexisNexis Risk Solutions’ deal for IDVerse. But 2025 had plenty of consolidation and acquisition activity in store for biometrics and digital ID.
Cycles align
Corporate strategy tends to work in two to three year cycles, Goode points out.
During that process, biometrics and digital identity providers that enter the M&A market do so based on “what the product is going to be like in three years time,” he says. “It’s quite a significant amount of investment time, considering the whole process of understanding what that product is, then who the match is, and the kind of companies that can boost or add to that particular product, with an eye on what they believe the future will entail.”
The funding boom that accompanied the historically low interest rates and unique market pressures of the COVID pandemic allowed businesses to pursue the capabilities they would need to fight for market share, whether by investing in research and development to build them or by buying them. Each did so based on its answer to: “What does digital identity look like in three years’ time?”
The timing lines up with the expectations of investors who raced into the space during the pandemic, Verley notes.
“Guess what? Most investors are looking for a return in three, four, five years.”
A lot of money was invested during the low-interest period, when rapid digitization created growth curves shaped like hockey sticks. But the curve has plateaued somewhat, Verley says, putting pressure on the massive valuations of the pandemic era.
Those anxious for a return on their investments may be especially motivated to act if the tech market sours in other areas, as well, Goode warns.
At the same time, he says, “the move from documents and card to digital assets and digital credentials is shaking this market up.”
At the beginning of the identity lifecycle, the traditional security printers are looking at ways to extend their role in the ecosystem, Goode says. They have been looking at acquisition as a way to build capabilities that, even if they are “not necessarily full stack,” provide an element to “get them over the line for an individual tender.”
That shift is also seen in areas like travel where digital and physical credentials are coming together as the aviation industry looks for ways to automate processes.
Overwhelming fraud
Fraud was “the number one topic in 2025” for businesses and public sector organizations, Verley says, driving a convergence within them. “Cybersecurity is now very interested in identity, identity is very interested in fraud, fraud is interested in identity and cybersecurity,” he says.
In some cases, each of these teams has already purchased solutions, so the ability to coordinate signals between them is increasingly important.
Verley noticed at a recent ACAMS (Association of Certified Anti-Money Laundering Specialists) event that every compliance company also billed itself as a fraud prevention provider.
“That is the number one pain point for buyers” across industries, according to Verley.
AI has made fakes too good and too easy to make, whether for biometrics or ID document spoofs. This is buoying injection attack detection (IAD), Goode notes, but relying parties are still having trouble on both the fraud prevention and customer friction sides of the ledger, Verley says.
Goode noted in last year’s wrap-up that the process of migrating to digital identity documents is well underway, and this progress is reflected in the timing of Signicat’s acquisition of Inverid, in one example.
Reducing fraud is part of the motivation behind the EU Digital Identity Wallet, which launches in the year ahead and ties digital IDs to government-issued biometric documents with electronic chips.
“That’s going to mean a huge uptick in onboarding people to issue them these new credentials that are going to be big in identity verification, and that’s going to be the best way to do that,” Goode says.
At the same time, businesses that had no choice but to pay for identity services during the pandemic now have more choice, Verley says. So providers are emphasizing fraud protection to justify the value of their products.
Several of the year’s other digital identity acquisitions could also be characterized as providers with traction adding fraud-prevention capabilities.
“We’re seeing that consolidation where vendors, the ones that are strong, are now picking up other vendors, or consolidating these vendors,” Verley says. “Instead of building, they’re buying. And they’re buying technology that can solve that next use case that is right next to” the one they’ve already solved.
Ping Identity’s acquisition of Keyless is an example of this kind of transaction, adding attractive privacy protection capabilities to a successful platform.
AI for everything
This is a much more compelling pitch, he points out, in the age of AI.
The threat landscape has changed just as AI agents join the ecosystem, and practically every business attempts to find ways to put them to work.
Mentions of AI in press releases and fundraising materials accelerated from 152 in Q4 2023 to 2,571 in Q3 2025, according to Liminal stats.
With the ease of AI-powered fakes and the volume of real data available to criminals, IAD is becoming table stakes, and Goode points to it as another potential area for consolidation, as identity verification, deepfake detection and IAD “become more tightly integrated.”
Businesses want to combine these capabilities with systems that can pull together and interpret data from additional sources, Verley suggests, putting further pressure on market players to consolidate their strengths.
“You can’t just rely on your document scanning or your selfie,” he says. “You’ve got to figure out what are some of these probabilistic signals around it, behavioral intelligence, device intelligence, all of these signals that build a complete picture.”
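As a rough illustration of that “complete picture,” the sketch below fuses invented document, selfie, device and behavioral scores into a single risk number; the weights and field names are assumptions for the example, not any vendor’s model.

```typescript
// Illustrative signal fusion: combine several probabilistic trust signals into
// one risk score. Weights and signal names are invented for this example.
interface RiskSignals {
  documentScore: number; // 0..1 confidence from document scanning
  selfieMatch: number;   // 0..1 face match / liveness confidence
  deviceTrust: number;   // 0..1 device-intelligence score
  behaviorTrust: number; // 0..1 behavioral-intelligence score
}

function fuseRisk(s: RiskSignals): number {
  // Weighted average; a production system would calibrate weights on labeled fraud data.
  const trust =
    0.35 * s.documentScore +
    0.35 * s.selfieMatch +
    0.15 * s.deviceTrust +
    0.15 * s.behaviorTrust;
  return 1 - trust; // higher means riskier
}

// A strong document and selfie but a shaky device still leaves residual risk.
console.log(fuseRisk({ documentScore: 0.9, selfieMatch: 0.95, deviceTrust: 0.4, behaviorTrust: 0.6 }).toFixed(2));
```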
Uncertainty is a central feature of the AI market landscape, and Goode notes the possibility that if predictions of the AI market popping like a bubble in 2026 come true, restricted credit availability “could put a damper on acquisitions.”
But one use case for AI, a deluge of market reports show, is here to stay.
“More than 90 percent of buyers are aware of gen AI, deepfakes and synthetic fraud, only about 20 percent feel ready to tackle it,” Verley says. “It is the biggest awareness to preparedness gap we’ve ever measured over the five years we’ve been running the surveys. You’re not alone knowing it’s a problem, and you’re not alone not knowing how to fully solve it.”
In some market areas, like border control, “the majority of the market is still document based, and will remain so for a significant amount of time,” Goode says, which means the transition to digital credentials will continue to heavily influence market consolidation for the next couple of years.
“We’re seeing a lot of merger and acquisition deals across cyber and fraud that are happening in this space,” Verley says. “And I’m assuming we’re going to see a ton more, maybe in Q1, Q2; I think we’re going to see a lot of activity.”
New Virginia law tests a time limit approach to teen social media use

Virginia will begin enforcing a new social media law on January 1 that, by default, will limit children under 16 to one hour per day on major social media platforms unless a parent gives permission for more time.
The new law will make Virginia one of the first states to directly regulate how long young users can spend on social media rather than just how their data is handled.
The measure, added to the state’s consumer privacy statute, requires platforms to determine a user’s age using reasonable methods and to apply the time cap automatically for minors.
At the center of the law is a simple threshold and a simple rule. It defines a “minor” as anyone younger than 16, then requires any “controller or processor that operates a social media platform” to do two things.
First, it must use “commercially reasonable methods” such as a “neutral age screen mechanism” to determine whether a user is a minor; and second, it must limit a minor’s use of the platform to one hour per day per service or application unless a parent affirmatively changes that limit through verifiable parental consent.
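A minimal sketch of how a platform might apply that rule, with hypothetical field names, a parent-approved allowance overriding the one-hour default, and adults left uncapped:

```typescript
// Sketch of the default one-hour daily cap for minors, with a limit adjusted
// through verifiable parental consent overriding it. Names are illustrative.
interface SessionPolicy {
  isMinor: boolean;               // outcome of a neutral age screen or device signal
  parentApprovedMinutes?: number; // set only via verifiable parental consent
}

const DEFAULT_MINOR_MINUTES = 60;

function remainingMinutesToday(policy: SessionPolicy, usedMinutesToday: number): number {
  if (!policy.isMinor) return Infinity; // adults are not capped
  const cap = policy.parentApprovedMinutes ?? DEFAULT_MINOR_MINUTES;
  return Math.max(0, cap - usedMinutesToday);
}

// A minor with no parental adjustment who has already used 45 minutes today.
console.log(remainingMinutesToday({ isMinor: true }, 45)); // 15
```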
The definition of “social media platform” matters because it determines what services fall inside the regime.
Under the amended Virginia Consumer Data Protection Act (VCDPA) definitions effective January 1, a “social media platform” is a public or semipublic Internet-based service or application with users in Virginia that connects users to interact socially and lets users build a public/semi-public profile, maintain a list of social connections, and post content viewable by other users, including via boards, chat rooms, or a main feed.
The statute also narrows its reach by excluding several types of services that lawmakers appear to see as adjacent to but not the primary targets of the law.
Platforms that exclusively provide email or direct messaging are not covered, and services that are mainly focused on news, sports, entertainment, or ecommerce are also excluded when user interaction such as comments or chat is only incidental to the core service.
Interactive gaming platforms are similarly carved out, so long as social features are not their primary function.
Practically, that definition is designed to capture the mainstream, user-generated social platforms most parents would recognize, while reducing the odds that the rule automatically applies to every website with a comment section or a chat feature.
It’s at the edges where implementation gets complicated. A service that is “primarily” something else but has a robust user community and a significant user-generated feed can end up in a gray area.
The statute’s criteria – profile creation, social connection lists, and a user-generated feed – function as a checklist, yet real world platforms often blend these features in ways that make “primarily” and “incidental” hard to apply without litigation or regulatory guidance.
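Reading the criteria literally as code, as in the hedged sketch below with invented field names, shows why the hard part is not the checklist itself but deciding what counts as “primarily” or “incidental” for a given service.

```typescript
// Hedged sketch of the statutory checklist; every field maps loosely to a
// criterion or exclusion described above, but the boolean inputs themselves
// hide the judgment calls ("primarily", "incidental") that create gray areas.
interface ServiceTraits {
  hasUserProfiles: boolean;
  hasConnectionLists: boolean;
  hasUserPostedFeed: boolean;
  isEmailOrDmOnly: boolean;
  socialFeaturesOnlyIncidental: boolean; // news, sports, entertainment or ecommerce carve-out
  isPrimarilyGaming: boolean;
}

function looksLikeCoveredSocialPlatform(s: ServiceTraits): boolean {
  const meetsCriteria = s.hasUserProfiles && s.hasConnectionLists && s.hasUserPostedFeed;
  const excluded = s.isEmailOrDmOnly || s.socialFeaturesOnlyIncidental || s.isPrimarilyGaming;
  return meetsCriteria && !excluded;
}
```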
The law’s one-hour limit is not framed as a ban. Instead, it is a default throttle under which minors can use the service for up to an hour per day, and a parent can raise or lower the cap if the platform provides a mechanism for verifiable parental consent.
Importantly, the statute also says that granting parental consent for time-limit adjustments does not require the platform to provide parents “any additional or special access to or control over” the minor’s account or data.
In other words, Virginia is not mandating a broader parental monitoring dashboard; it is mandating a time gate that parents can modify.
Virginia also included a data use limitation aimed at the predictable privacy backlash that age-gating laws trigger. Any information collected to determine age “shall not be used for any purpose other than age determination and provision of age-appropriate experiences.”
There is also a notable design twist. If a user’s device communicates or signals that the user is, or should be treated as, a minor – say through a browser plugin or a privacy or device setting – the platform must treat that user as a minor.
That provision attempts to let device-level signals do some of the work, potentially reducing how often platforms need more invasive checks. But it also creates incentives for platforms to honor new kinds of “age flag” signals if operating systems and browsers standardize them.
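A hedged sketch of how a platform might honor such a signal, assuming a hypothetical header name, since no standardized device-level age flag exists today:

```typescript
// Hypothetical device "age flag": if the browser or OS signals minor status,
// the statute requires treating the user as a minor; otherwise fall back to
// the platform's own age screen. The header name below is invented.
function shouldTreatAsMinor(headers: Record<string, string>, ageScreenSaysMinor: boolean): boolean {
  const deviceFlag = headers["sec-treat-as-minor"]; // not a real, standardized header
  if (deviceFlag === "1" || deviceFlag === "true") {
    return true; // a device-level signal alone is decisive under the statute
  }
  return ageScreenSaysMinor;
}
```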
Another provision anticipates platforms trying to pressure users into “consenting” to more tracking or paid upgrades to escape the cap. The statute says a platform may not withhold, degrade, lower the quality of, or increase the price of an online service, product, or feature because it is not permitted to provide use of the social media platform beyond the one-hour daily limit.
At the same time, Virginia wrote a pair of caveats that give platforms room to maneuver, as the law does not require a platform to provide a feature that requires the personal information of a known minor, and it does not prevent different pricing or service levels to a known minor if reasonably related to exercising rights or complying with VCDPA obligations.
This serves as an anti-retaliation rule with built-in flexibility that companies will likely point to when defending product changes.
Because this measure lives inside the VCDPA, enforcement follows the VCDPA’s structure. The law applies to covered entities that do business in Virginia (or target Virginia residents) and that meet specified processing thresholds, such as controlling or processing the data of 100,000 consumers a year, or 25,000 consumers with more than 50 percent of revenue from selling personal data.
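For illustration only, those thresholds can be read as a simple applicability test; the function and field names below are invented, not statutory terms.

```typescript
// Rough applicability test mirroring the VCDPA thresholds described above.
interface BusinessProfile {
  consumersProcessedPerYear: number;
  revenueShareFromSellingData: number; // 0..1
}

function vcdpaApplies(b: BusinessProfile): boolean {
  return (
    b.consumersProcessedPerYear >= 100_000 ||
    (b.consumersProcessedPerYear >= 25_000 && b.revenueShareFromSellingData > 0.5)
  );
}

console.log(vcdpaApplies({ consumersProcessedPerYear: 30_000, revenueShareFromSellingData: 0.6 })); // true
```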
Enforcement is exclusive to the Virginia Attorney General. There is no private right of action. The attorney general must generally provide a 30-day notice and opportunity to cure before bringing an action.
If violations continue after the cure period – or if the company breaches a written assurance that it has cured – the attorney general may seek injunctive relief and civil penalties up to $7,500 per violation plus expenses and attorney fees.
Those mechanics matter because they shape what “January 1” really means on the ground. Even if a platform is noncompliant on day one, VCDPA procedure can delay actual enforcement, depending on when and how the attorney general’s office issues notices and evaluates cures.
That dynamic is already central to the most significant uncertainty around the law: whether it will be allowed to operate at all in early 2026. And that’s because the statute is already under constitutional attack.
In November, tech industry trade group NetChoice sued Virginia’s Attorney General, arguing the law violates the First Amendment by restricting access to lawful speech and imposes burdensome age/consent verification that could create privacy and security risks.
“The First Amendment forbids government from imposing time-limits on access to lawful speech,” said Paul Taske, Co-Director of the NetChoice Litigation Center. “Virginia’s government cannot force you to read a book in one-hour chunks, and it cannot force you to watch a movie or documentary in state-preferred increments. That does not change when the speech in question happens online.”
Virginia public radio framed the dispute as a clash between lawmakers who describe the bill as parent empowerment, and opponents who argue it broadly burdens “all content on social media and burdens everyone’s access to that content.”
In its preliminary injunction filing, NetChoice emphasized the VCDPA notice-and-cure structure and argued that the one-hour restriction regulates a sweeping amount of protected online activity.
Supporters, including the bill’s patrons and allies, have tended to defend the measure as content neutral and focused on youth well-being while emphasizing that parents can opt for more time via verifiable consent.
Opponents say that even a content neutral time cap can still be a speech burden when it deliberately constrains how minors access lawful expression and information, and they argue that forcing platforms to gate speech behind age checks raises its own privacy concerns.
In practice, the most immediate question for Virginia families and platforms is how platforms will decide a user is under 16 with “commercially reasonable” methods while also honoring the statute’s limit that age-determination data cannot be repurposed.
The law does not mandate a single technical method. It instead gestures toward neutral age screens and device-level “signals” that a user should be treated as a minor. That flexibility is intentional. Virginia is setting an outcome rather than prescribing a specific verification stack.
But it also means implementation could vary sharply across services, with some relying on self-attestation plus device signals, others pushing for more robust parental verification workflows, and still others tightening defaults in Virginia-only ways that are hard to reconcile with national product design.
2025 saw the quiet consolidation of America’s biometric border

By the end of 2025, it was no longer credible to describe the U.S. government’s use of biometrics in immigration enforcement as fragmented, experimental, or limited to border checkpoints.
Over the course of the year, a steady accumulation of procurement records, privacy filings, rulemakings, and operational disclosures revealed something far more durable. It unmasked a layered, interoperable surveillance architecture in which identity itself has become a persistent enforcement surface.
What distinguished 2025 was not a single explosive revelation, but the way previously discrete systems, often discussed in isolation, began to resolve into a coherent whole.
Facial recognition databases, mobile biometric collection tools, and backend case-management platforms were not merely expanding in parallel; they were converging.
And together, they showed how the federal government has been methodically pushing biometric enforcement outward in space and forward in time, embedding identity surveillance into routine administrative processes while oversight mechanisms lagged behind.
No technology better captured this shift than Clearview AI. Once treated as a scandal-driven outlier – a private company scraping billions of images from the open Internet – Clearview’s true significance in 2025 lay in how unremarkable its underlying model had become.
The controversy surrounding Clearview no longer centered on whether law enforcement should use facial recognition at all, but on which vendor or system would supply it.
The premise that a person’s face could be captured anywhere, matched against vast image repositories, and used to generate investigative leads without notice or consent had largely been accepted.
Even where Clearview itself was absent, its logic persisted. Federal and state systems increasingly mirrored the same assumptions: large-scale image aggregation, probabilistic matching, opaque accuracy metrics, and limited avenues for challenge once a match had been made.
What Clearview normalized was not simply facial recognition, but a governing idea that identity could be inferred and acted upon without any prior relationship between the individual and the state.
By the end of the year, the Clearview story had ceased to be about a single company and instead marked the maturation of mass facial recognition as infrastructure rather than exception.
If Clearview illustrated normalization at the database level, Mobile Fortify showed how that normalization reaches the street.
Throughout 2025, Immigration and Customs Enforcement (ICE) quietly expanded its use of Customs and Border Protection’s Mobile Fortify application under an oversight framework that barely registered the scope of what was being authorized.
A joint ICE–CBP privacy threshold analysis did not dispute that agents were capturing facial images, fingerprints, and associated metadata in the field. Instead, it argued that existing Department of Homeland Security (DHS) privacy documentation elsewhere in the bureaucracy was sufficient to cover the practice.
That procedural move proved decisive. By framing Mobile Fortify as an extension of existing systems rather than a new capability, DHS avoided the triggers that would normally require a full Privacy Impact Assessment or a public System of Records Notice. In doing so, the department effectively treated real-time biometric collection on personal mobile devices as an incremental change, not a qualitative shift.
Operationally, Mobile Fortify collapsed the distance between encounter and database. Identity capture, biometric matching, and enforcement decision-making could now occur almost instantaneously, often during brief, unplanned interactions where individuals had little understanding of what data was being taken or how it would be used.
The significance of Mobile Fortify was not merely that it enabled field biometrics, but that it demonstrated how mobility itself has become a regulatory blind spot. Oversight regimes built around static systems and centralized processing struggled to respond to tools designed to move faster than the paperwork meant to govern them.
Less visible, but no less consequential, was the growing role of ImmigrationOS. Marketed as a workflow and case management platform, ImmigrationOS initially appeared administrative rather than coercive.
In practice, however, systems that determine how data flows, which alerts are generated, and how cases are prioritized often exert more influence over outcomes than the sensors that collect the data in the first place.
ImmigrationOS repeatedly surfaced as a connective hub linking biometric identifiers, enforcement priorities, location data, and third-party inputs. It does not need to collect fingerprints or facial images directly to shape enforcement decisions.
By structuring how biometric matches are surfaced and operationalized – who sees them, when, and with what recommended action – it effectively governs behavior at scale.
The critical insight is that enforcement logic is migrating upstream into software architecture. Decisions once left to supervisory judgment are increasingly encoded into dashboards, queues, and automated workflows that few outside the system ever see.
Together, these technologies revealed the emergence of an integrated immigration biometrics stack.
Biometric enrollment now begins earlier, persists longer, and travels further than at any point in the past. Data collected during visa applications, asylum processing, airport screening, or street encounters can reappear years later in unrelated enforcement contexts.
International data-sharing arrangements extend this reach beyond U.S. borders, embedding American biometric systems within foreign law enforcement operations while largely escaping domestic transparency requirements.
What stood out in 2025 was how rarely these expansions were debated as expansions. Each step was justified as modernization or efficiency. Taken together, they amounted to a redefinition of immigration enforcement itself, one in which biometric identity becomes a permanent condition rather than a situational tool.
In such a system, error and bias are no longer isolated risks. A single flawed match can propagate across agencies and time, magnifying consequences while diffusing accountability. The through line of the year, though, was not technological inevitability, but governance lag.
Oversight mechanisms remained document-driven and siloed even as systems became integrated and real time.
Privacy reviews focused on whether a system existed, not on how it reshaped power relationships.
Courts encountered biometric evidence downstream, long after collection and matching decisions had already constrained outcomes.
And Congress received briefings framed in the language of modernization rather than structural transformation.
By the close of 2025, the cumulative effect was unmistakable. The biometric surveillance state did not arrive through a single law or a single database.
It emerged through accretion, through tools framed as administrative, through mobile apps authorized by procedural shortcuts, and through backend systems that quietly encoded enforcement priorities into software.
The unresolved question left by the year’s record is not whether this architecture exists, but whether democratic institutions will meaningfully confront it before it hardens beyond recall.
At stake is not simply privacy, but the ability to govern identity itself. Once biometric systems are fully integrated across borders, agencies, and time, they become extraordinarily difficult to unwind.
The work of 2025 made clear that the window for public reckoning is narrowing. Whether it closes quietly or under scrutiny remains an open question, but the architecture is already in place.
Why identity data lakes will power the next decade of security
Author: Malhar Vora, Principal Security Engineer and People & Engineering Leader – Group Cyber Security at ANZ Bank (https://medium.com/@malhar.vora)

We all talk about identity as “the new perimeter,” but that statement misses a critical truth. You can’t secure identity if you don’t understand the data behind it. Today, identity isn’t just a login or a […]
Washington’s online safety push collides with big tech and fracturing Congress

The sprint to reassert control over children’s online safety entered a volatile new phase this week. Lawmakers advanced a sweeping package of bills while tech executives came to town to shape the details and regulators prepared to intervene in the growing fight over age verification.
In a single morning, the House Subcommittee on Commerce, Manufacturing, and Trade voted to advance eighteen online safety and children’s privacy bills – including controversial age verification and app store mandates – while Apple CEO Tim Cook met privately with lawmakers to warn that parts of the legislative package could erode user privacy and force platforms to collect sensitive identity data on millions of Americans.
These developments converged with a string of revelations that included court filings accusing Meta of enabling sex trafficking and suppressing research on teen mental-health harms, the Federal Trade Commission (FTC) warning that softening encryption to satisfy foreign governments may violate U.S. law, and the unraveling of the bipartisan alignment that once anchored the Kids Online Safety Act (KOSA) and the Children’s Online Privacy Protection Act 2.0 (COPPA).
With the Senate insisting on a strong duty-of-care standard and the House pushing to weaken it, Congress is heading toward a decisive showdown that will determine whether the United States builds real protections or accepts a fractured mix of state measures and voluntary industry pledges.
On December 8, the FTC formally announced it will hold an age verification technologies workshop on January 28, 2026, at its Constitution Center headquarters in Washington, D.C., with both in-person and online participation.
The event will bring together researchers, academics, industry representatives, consumer advocates, and government regulators to examine why age verification matters, what age verification and estimation tools exist, how companies can navigate the regulatory landscape, how to deploy these systems at scale, and how they intersect with the COPPA rule.
On its face, the workshop is a technical convening, but it lands in a year when age-assurance technologies ranging from device-based checks and document verification to facial age estimation and behavioral profiling are being hard-wired into online safety laws in Europe, the UK, and a growing number of U.S. states.
Analysts say biometric and age assurance vendors are likely to treat the FTC event as an early waypoint in shaping U.S. norms for how far platforms can go in scanning faces, IDs, and behavioral patterns to guess who is a child.
The workshop also arrives as the FTC under Chair Andrew Ferguson has signaled a willingness to act as a counterweight to European and British regulators on both encryption and speech.
In August, Ferguson sent a detailed letter to major tech companies warning that weakening encryption or other security protections to comply with foreign regimes – including the EU’s Digital Services Act, the UK Online Safety Act, and demands under the UK Investigatory Powers Act – could amount to deceptive or unfair practices under Section 5 of the FTC Act.
Ferguson argued that companies promising secure or encrypted services cannot quietly downgrade protections because foreign governments want easier access to user data, and that restricting or removing content for Americans to appease foreign regulators could also be an unfair or deceptive act if not clearly disclosed.
This is a striking posture. While European and British regulators are leaning on platforms to detect harmful content, including material that targets children, the FTC is warning that bending too far to foreign demands may itself violate American law.
That tension is playing out in real time. When the UK’s Information Commissioner’s Office informed MediaLab’s image sharing site Imgur that it faced a likely enforcement action under the Children’s Code, the company withdrew from the UK market entirely as of September 30, cutting off access for British users.
Regulators responded that exiting the country would not shield Imgur from potential penalties for past violations.
Inside Congress, the stakes are no less fraught. Thursday, the House Subcommittee on Commerce, Manufacturing, and Trade advanced eighteen bills aimed at protecting children and teens online, forwarding them to the full House Energy and Commerce Committee by voice vote.
The package includes measures addressing social media harms, fentanyl sales, AI-driven grooming risks, algorithmic transparency, app store age verification, bot disclosures, and children’s data sales. It also contains updated House versions of KOSA and COPPA 2.0.
The markup represents a rapid acceleration, as just a week earlier subcommittee chair Gus Bilirakis had suggested markup would slip into the new year.
But moments after the markup, the limits of the coalition were on display. Apple CEO Tim Cook met with lawmakers – many of them the same members overseeing the markup – to lobby for changes to the App Store Accountability Act, a cornerstone of the House’s age-assurance approach.
Cook warned that requiring app stores to verify user ages could force Apple to collect highly sensitive personal data, including government IDs or birth records, on millions of Americans.
Apple’s global privacy team argued that such mandates could weaken user privacy and that a parent-driven model, where adults disclose ages rather than platforms ingesting identity documents, would reduce risk.
Some lawmakers found Cook’s warnings persuasive, while others questioned whether the company’s privacy concerns were compatible with the long-running failures of tech giants to protect minors.
The debate over the App Store Accountability Act goes to the heart of the emerging U.S. model for age assurance. Texas and Utah already require app stores to verify users’ ages and link minors to verified parents.
The House version would scale that model nationwide, requiring Apple, Google, and other app store operators to determine a user’s age category and convey an “age-range signal” to all installed apps.
Advocates say this relieves individual apps from collecting biometric or identity data. Critics warn that it centralizes enormous volumes of sensitive information in just two companies and risks creating a de-facto digital identity system.
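As a sketch of the model being debated, the snippet below imagines the kind of age-range signal an app store might pass to installed apps; the category names and fields are assumptions for illustration, not language from the bill.

```typescript
// Hypothetical age-range signal conveyed by an app store to installed apps.
// Apps receive only a bracket, never a birthdate or identity document.
type AgeCategory = "under13" | "13to15" | "16to17" | "adult";

interface AgeRangeSignal {
  category: AgeCategory;  // age bracket determined by the store
  parentLinked: boolean;  // whether a verified parent account is attached
  issuedAt: number;       // when the store last determined the category
}

// An app gates a feature on the signal without collecting identity data itself.
function canShowDirectMessages(signal: AgeRangeSignal): boolean {
  return signal.category === "adult" || (signal.category === "16to17" && signal.parentLinked);
}
```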
The House’s internal divisions extend far beyond the app store debate. During the December 2 hearing on legislative solutions to protect children and teens online, key Democratic lawmakers accused committee leadership of gutting the House versions of KOSA and COPPA 2.0 to satisfy Big Tech.
The reworked House discussion draft of KOSA removed the bill’s central “duty-of-care” requirement, an enforceable standard obligating platforms to mitigate foreseeable harms such as suicide content, sexual exploitation, eating disorder promotion, and predatory recommendation algorithms.
The new draft replaces that standard with an obligation to maintain “policies, practices, and procedures,” language critics describe as a compliance mirage that would allow platforms to simply document safety protocols rather than redesign harmful features.
The House draft also introduces sweeping federal preemption, barring states from enacting or enforcing any law that “relates to” the bill’s provisions. This could erase hard-won state efforts to regulate social media design, data minimization, and age-appropriate systems.
Groups that once championed KOSA – from the National Center on Sexual Exploitation to ParentSOS – now warn that the House bill, with aggressive preemption and no duty-of-care, could be “worse than doing nothing.”
COPPA 2.0 is undergoing a similar ideological rewrite. While the Senate version would significantly expand children’s privacy rights and ban targeted advertising for users under 17, the House draft leans heavily on an “actual knowledge” standard that critics say invites platforms to turn a blind eye to underage users to avoid compliance.
In contrast, the Senate continues to solidify around a stronger posture. In May, Senators Marsha Blackburn and Richard Blumenthal reintroduced KOSA with bipartisan backing from Majority Leader John Thune and Minority Leader Chuck Schumer.
As of early December, the Senate version had 69 co-sponsors, enough to overcome a veto, and Blackburn has insisted that the duty-of-care standard is non-negotiable.
Newly unsealed filings in a massive multidistrict litigation against Meta strengthened her case. The plaintiffs in the suit allege Meta buried internal research linking Instagram and Facebook to worsening depression, anxiety, eating disorders, and suicidal ideation among teens; maintained an effective “17-strike” policy for sex-trafficking accounts; and allowed algorithms to connect minors to predators through follow-suggestion tools.
A separate Reuters investigation revealed that Meta’s AI chatbots engaged in romantic or sexual role-play with minors until policy changes were implemented.
Blackburn has repeatedly invoked these findings to argue that voluntary reforms are no longer credible. In a December 3 floor speech and a TIME op-ed, she warned that the House draft – with no duty-of-care, a narrow knowledge standard, and broad preemption – would protect Big Tech at the expense of children.
Across all these debates, age verification has become the hinge issue. Its promise is also its threat.
Lawmakers see age assurance as a powerful tool for reducing harms, but civil rights groups warn that poorly designed mandates could normalize biometric surveillance, expose sensitive data, chill anonymous speech, and create sprawling databases irresistible to hackers, law enforcement, and commercial brokers.
LGBTQ+ advocates worry that vague “harmful content” provisions, enforced by empowered state attorneys general, could be weaponized to restrict access to mental health and reproductive health resources, especially once paired with invasive age checks.
That is the unresolved problem standing before Congress, the FTC, and the courts. Age assurance is increasingly unavoidable in any comprehensive children’s safety framework. But how to build it, without creating a national digital identity apparatus, remains the defining challenge.
What is clear is that 2026 will not be a quiet year. With the House advancing a narrowed and heavily preemptive KOSA and COPPA 2.0, the Senate holding the line on duty of care, and Big Tech executives lobbying to shape the details, a bruising conference committee clash is all but guaranteed.
And if Congress deadlocks, the burden will shift to the FTC, state attorneys general, and judges – institutions already struggling to navigate the collision of children’s safety, free expression, privacy, surveillance, and platform power.




























