Category: Biometrics
ID.me expands reach of digital identity wallet for healthcare management

U.S. patient access and healthcare records management provider Flexpa has signed on to use ID.me’s digital identity wallet to give Americans an interoperable way to securely and easily access and share their medical records.
Patients perform identity verification with ID.me to receive their trusted credential, which they can then use to sign in wherever ID.me is accepted. The company says its reusable, privacy-preserving credential makes access easier and raises pass rates for legitimate users while reducing process abandonment and fraud.
Flexpa’s platform connects to any patient access network through the TEFCA framework, according to the announcement. The company provides patient consent management and credentials including smart cards and QR codes to streamline healthcare workflows. Data provided by Flexpa is formatted according to the Fast Healthcare Interoperability Resources (FHIR) standard.
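To illustrate what FHIR-formatted data looks like, here is a minimal, hypothetical Patient resource in FHIR R4 JSON structure. Flexpa’s actual payloads are not shown in the announcement; the identifiers and values below are invented for illustration only.

```python
import json

# A minimal, hypothetical FHIR R4 "Patient" resource. The MRN system URL
# and all values are illustrative, not taken from Flexpa's documentation.
patient = {
    "resourceType": "Patient",
    "id": "example-patient-1",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "birthDate": "1984-07-02",
    "identifier": [
        {"system": "https://example.org/mrn", "value": "MRN-0042"}
    ],
}

print(json.dumps(patient, indent=2))
```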
“Trust is the foundation of interoperability,” says Blake Hall, founder and CEO of ID.me. “Patients, providers, practice administrators, and payers all need confidence that health data is being accessed by the right person. By pairing ID.me’s digital identity network with Flexpa’s APIs, we’re creating a secure and seamless way for patients to take control of their records while protecting everyone against the surge of AI-driven fraud.”
For ID.me, the deal continues a steady push into the healthcare sector that also includes participation in an interoperability initiative from the U.S. Centers for Medicare and Medicaid Services (CMS). The partners say they will align their joint solution with the Interoperability Framework CMS is expected to launch later this year.
There are already 154 million people with the ID.me digital identity wallet, and 78 million of them are verified to the NIST AAL2/IAL2 standards used by the federal government for secure user authentication.
Some of those users joined through the Department of Veterans Affairs’ adoption of ID.me (or Login.gov) last year for access to benefits and healthcare services. ID.me also powers identity verification and consent processes for OtisHealth, another app for access to medical records, in a 2024 deal that marked the first connection between a digital credential service provider and the TEFCA framework.
Canada’s age assurance debate heats up around Bill S-209

Age assurance for pornographic content online is likely coming to Canada – and the debate over it has already arrived. Bill S-209, “an Act to restrict young persons’ online access to pornographic material,” is the sequel to S-210, a similar bill that passed the Senate in 2023 but got mired in bureaucracy until the government reset, sending the process back to the drawing board. The reanimated bill is causing significant consternation among Canadian privacy advocates, but winning support from the country’s privacy watchdog.
In comments to the Standing Senate Committee on Legal and Constitutional Affairs (LCJC), Privacy Commissioner of Canada Philippe Dufresne has endorsed Bill S-209, and declared that “it is possible to implement age-assurance mechanisms in a privacy-protective manner.”
The declaration, which echoes the findings of the Age Assurance Technology Trial commissioned by the government of Australia, follows an exploratory public consultation on age assurance by the Office of the Privacy Commissioner (OPC), which solicited input from privacy and industry stakeholder groups, civil society, academia, technology policy think tanks, and more. Dufresne says the responses reflect the “significant public interest in, and importance of, a well-considered approach to age assurance.”
Now, his office is developing guidance on implementation, demonstrating confidence in the ultimate success of Bill S-209. Dufresne had previously expressed concerns about data retention and misuse, but says those have been addressed in the latest version, noting that “the added requirement to limit the collection of personal information to that which is strictly necessary for the age verification or age estimation” has “enhanced the Bill from a privacy perspective.”
The OPC’s general stance on age assurance amounts to cautious optimism. Dufresne concedes that “determining the precise scope of the bill in its actual operation is a delicate task,” but says his office is ready to help – and needs to be involved.
“Should the Bill be adopted, it will be important for my Office to be involved in the review of regulation drafted by the Government and we will be ready to assist in any way that we can to ensure that privacy and the best interest of young persons are protected in the implementation stage of this Bill.”
New standards will help. The UK-based Age Check Certification Scheme (ACCS) recently announced it can now certify age verification and estimation technologies against Canada’s national standard for age assurance, CAN/DGSI 127: 2025, Age Verification – Age Assurance Technologies.
Bar association wants more specificity in legal definitions
The OPC’s stance is typically Canadian: polite, concerned, more or less willing to follow the rest of the democratic world on major policy issues. The same spirit is present in a letter from the Canadian Bar Association, which effectively says that Bill S-209 has noble intentions but needs better privacy protections.
“The Bill’s preamble claims that ‘online age-verification and age-estimation technology is increasingly sophisticated and can now effectively ascertain the age of users without breaching their privacy rights’,” says the letter. “The Bill, however, contains no specifics on how the government will practically balance privacy and protection. Instead, it mentions the development of ‘regulations for carrying out the purposes and provisions’ of the Bill.”
The CBA is seeking clarification on language, noting the significant differences between age verification and age estimation methods – even though they are still regularly conflated.
Because it requires an identity document, age verification offers high accuracy “but at the cost of deeply compromising user anonymity and raising substantial privacy risks through the collection and storage of sensitive personal information.” Being an estimate, age estimation is less precise, but also a “less intrusive method that generally preserves user anonymity and reduces privacy concerns.”
“Where will the distinction lie in the trade-off between definitive age confirmation and the protection of individual privacy and digital freedoms?”
The language, the lawyers say, must be clear and precise. As is, it “lacks key specifics: no defined retention timeline, no clarity on the speed of destruction, no auditing or enforcement mechanisms, no requirements for storage location, and no remedies for users if data is mishandled.”
There is irony in the CBA’s position: it builds an argument against age assurance providers while making the same recommendation many in the industry have made, namely hard regulatory guardrails. It is becoming clear that over-retention of data is a real problem; the Discord breach demonstrated that. The CBA asserts that “an obvious by-product of such age verification or age estimation measures is the creation of a data set that links personal identifying data to data revealing that an individual accessed internet pornography as well as the specific sexual proclivities and interests of that individual.”
While it is perhaps the frankest legal expression to date about the real reasons people don’t like age assurance – the possibility that someone might find out what kind of filth you’re watching and cancel you, Scarlet Letter-style – it is also built on the assumption that providers retain data that can trace users to specific content.
The truth is that most providers work to retain as little data as possible, for as little time as possible. Many would welcome data minimization principles being coded into law. Few would disagree with the following statement by the CBA: “If this data must be collected, the Bill should explicitly say so and be amended to include legislative provisions that ensure the personal data of Canadians won’t be improperly retained, accessed, or otherwise misused.”
Geist fears risk of overreach after senator casts doubt on scope
But as long as there is leeway in the language of the law, the risks remain – not only in terms of data retention, but also overreach. On his blog, frequent commentator on copyright and privacy issues Michael Geist raises the alarm over statements from Bill S-209’s sponsor, Senator Julie Miville-Dechêne, which indicate a broader scope for the law than pornography.
Miville-Dechêne says “Bill S-209 does not just target porn platforms like Pornhub” and that the full scope of application is in the hands of the government – “so the government could decide to include social media like X in its choices.”
Geist calls this “an explicit declaration from the bill’s sponsor that there are few limits on government power to require Internet sites to institute age verification requirements or face the prospect of mandated website blocking in Canada.” He floats the possibility of age verification being applied for search, or “AI services that can be used to generate pornographic images,” which include large language models like ChatGPT.
“This must surely spark reconsideration of what is a dangerous bill that could require virtually all Canadians to submit to age verification requirements in order to access commonly used search, social media, and other sites.”
Juniper forecasts $80B in revenue for digital identity globally by 2030

User verification and authentication with digital identity is increasingly required by global regulations, and credentials like mobile driver’s licenses (mDLs) and digital travel IDs are providing a market tailwind, according to Juniper Research. The latest report from the firm, “Digital Identity Market 2025-2030,” forecasts revenues of $80.5 billion by the end of the decade.
Digital identity is a $51.5 billion market this year, according to the report, and is expected to grow 56 percent over the forecast period.
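A quick arithmetic check of the forecast figures: the 56 percent is total growth over the five-year window, which works out to roughly 9 percent compounded annually.

```python
# Revenue figures from the Juniper forecast: $51.5B this year, $80.5B by 2030.
start, end, years = 51.5, 80.5, 5

total_growth = end / start - 1            # growth over the whole period
cagr = (end / start) ** (1 / years) - 1   # implied compound annual growth rate

print(f"total growth: {total_growth:.1%}")  # ~56.3%
print(f"implied CAGR: {cagr:.1%}")          # ~9.3%
```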
Issuing digital IDs alongside physical credentials in a hybridized approach will help encourage their adoption over the forecast period, Juniper says.
The ability of governments to make digital identities accessible and explain how they work will be important to their acceptance and ultimate adoption by the public, Juniper argues.
These pull factors are a major part of “How Digital Identity is Going Mainstream,” as explained in a whitepaper accompanying the market forecast. The 10-page whitepaper explores eIDAS 2.0, single sign-on (SSO) access control, zero trust and self-sovereign identity (SSI) as major trends impacting the market.
“The EU Digital Identity Wallet (EUDI), which all Member States are expected to have in place by the end of 2026, will have a transformative effect on identity in the region; however, digital identity is already socially acceptable in mainland Europe,” says Juniper Research Analyst Louis Atkin. “Adoption of the proposed UK scheme will require significant user benefits to overcome public scepticism. Focusing on self-sovereign principles, which give citizens control of their own data, will go a long way in improving support.”
The European Commission published a slate of new grant opportunities for EUDI Wallets and mDLs this month, while no one seems quite sure what principles the UK’s digital ID will follow.
Parsons brings real-time biometric vetting to US Navy ships in foreign ports

Parsons Corporation has developed a new generation of biometric technology designed to enhance the security of U.S. Navy operations abroad by providing instant verification of foreign contract personnel.
The platform, called Biometrically Enhanced Access Control (BEACH), enables real-time identity authentication of individuals supporting U.S. vessels in overseas ports, ensuring that only vetted and authorized personnel can access restricted areas near ships and naval installations.
The goal is to replace vulnerable manual security checks with a unified, automated vetting process that ties a person’s live biometric data directly to an authoritative access roster.
The result is an unprecedented level of assurance and accountability at the point of entry, which is crucial for a Navy that increasingly relies on local contractors, vendors, and logistical partners in foreign environments where threats can emerge in unexpected ways.
BEACH was developed in close partnership with the Navy’s Program Executive Office for Unmanned and Small Combatants and the Naval Criminal Investigative Service (NCIS). Together, these organizations sought a solution that could deliver end-to-end personnel vetting within seconds rather than hours or days.
The platform was designed as a customized, scalable framework that supports personnel accountability and dynamic force-protection requirements.
According to Parsons, BEACH’s architecture allows it to be expanded across NCIS and the broader Navy enterprise, providing a common operating picture for who is accessing ships, when, and under what authorization.
Traditionally, the Navy relied on paper rosters and visual ID checks to screen foreign contractors known as “husbanding vendors” who provide essential services such as refueling, waste disposal, and logistics support during port calls.
These manual methods were not only slow but also susceptible to forgery, manipulation, and insider exploitation.
BEACH closes that gap by merging multi-modal biometric verification with a live access-control database that cross-checks everyone against a pre-approved roster.
Parsons asserts that BEACH also extends vetting to law enforcement and intelligence databases maintained by NCIS and the Federal Bureau of Investigation (FBI).
When a contractor arrives at a pier or dockyard to deliver supplies or perform maintenance, their identity is first verified through Parsons’ Javelin or Javelin+ handheld biometric kits designed for austere field conditions.
These mobile devices capture a person’s fingerprints and facial image in real time and transmit them via the company’s Ares software gateway which manages the data flow, matching, and validation process.
Whether every capture is “real time” under all conditions depends on connectivity, match speed, and network infrastructure, which is not fully documented in public sources.
Ares acts as the middleware between Javelin and the Navy’s identity databases, allowing BEACH to authenticate the person against the ship’s current access list and conduct background vetting through NCIS and FBI systems.
Parsons says the system can deliver access decisions in seconds. If the biometric data does not match the credential being presented, or if a security flag appears in any linked system, the individual is denied access.
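The decision logic described above – match a live capture against an enrolled template on the pre-approved roster and deny on any mismatch or security flag – can be sketched roughly as follows. The function names, toy similarity score, and threshold are illustrative assumptions, not Parsons’ actual implementation.

```python
from dataclasses import dataclass

MATCH_THRESHOLD = 0.90  # illustrative; real systems tune this per modality

@dataclass
class RosterEntry:
    person_id: str
    fingerprint_template: list[float]  # stand-in for a stored biometric template
    flagged: bool                      # e.g. a hit in a linked NCIS/FBI check

def similarity(a: list[float], b: list[float]) -> float:
    # Toy cosine similarity as a stand-in for a real biometric matcher.
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def access_decision(live_template: list[float], entry: RosterEntry) -> str:
    if entry.flagged:
        return "DENY"  # a security flag in any linked system denies access
    if similarity(live_template, entry.fingerprint_template) < MATCH_THRESHOLD:
        return "DENY"  # live capture does not match the enrolled credential
    return "GRANT"

entry = RosterEntry("vendor-17", [0.2, 0.9, 0.4], flagged=False)
print(access_decision([0.21, 0.88, 0.41], entry))  # close match -> GRANT
print(access_decision([0.9, 0.1, 0.1], entry))     # mismatch -> DENY
```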
In a 2018 operational demonstration supported by the Navy, NCIS, and the Marine Corps Systems Command, BEACH successfully detected a falsified credential by identifying that the individual’s live fingerprint and facial data did not correspond with the approved identity record.
The test, conducted in coordination with the USS Theodore Roosevelt’s security forces, proved that the system could uncover sophisticated attempts to exploit weaknesses in traditional access-control methods.
BEACH is not a standalone experiment but part of a larger Parsons biometric ecosystem that includes tools already integrated into Department of Defense (DOD) and Department of Homeland Security operations.
Parsons’ Ares Gateway, certified for FBI Electronic Biometric Transmission Specification standards, is used in other DOD biometric programs and has been hosted in Amazon Web Services (AWS) GovCloud IL5 for the Army’s Next Generation Biometric Collection Capability, where it routes biometric submissions to DOD’s Automated Biometric Identification System.
Parsons has not publicly confirmed whether BEACH employs the same Ares Gateway architecture or specified its hosting environment.
AWS GovCloud IL5 is an authorization level that allows DOD organizations to handle Controlled Unclassified Information and unclassified National Security Systems requiring protections beyond the IL4 baseline.
While Parsons has not detailed BEACH’s infrastructure, the system’s described capabilities and alignment with Parsons’ biometric ecosystem suggest it likely adheres to comparable federal security and identity-management standards.
Similarly, while data-retention policies and privacy safeguards for BEACH have not been publicly disclosed, systems operating within this framework are expected to implement rigorous encryption, storage, and access-control measures consistent with DOD and federal information-security requirements.
DOD Directive 8521.01 serves as the Pentagon’s overarching policy for biometric identification and management, and DOD Regulation 5400.11-R establishes privacy and civil-liberties requirements. It is reasonable to assume that a system handling biometric data for the Navy and NCIS would also operate in accordance with these policies as a matter of standard practice. Neither Parsons nor DOD has confirmed this, however.
This recordkeeping improves situational awareness and helps trace incidents, verify compliance, and flag suspicious activity across multiple port calls.
The Navy views BEACH as more than just a technical upgrade. It represents a broader shift toward data-driven security architecture across its expeditionary and littoral missions.
By establishing a scalable and interoperable identity-vetting framework, BEACH sets the stage for wider application across ports, bases, and coalition environments.
BEACH’s deployment also reflects a growing recognition across DOD that traditional identity verification methods are no longer sufficient in modern operational theaters.
Similar biometric initiatives have been fielded by the U.S. Coast Guard for maritime interdictions and by the Army’s Next Generation Biometric Collection Capability for battlefield identification.
Parsons has characterized BEACH as more than a single security upgrade, describing it as a model for how the military can field complex technologies rapidly without sacrificing rigor.
The program’s accelerated development – transitioning from initial concept to operational capability in months – illustrates how agile engineering can keep force-protection systems aligned with the pace of modern threats.
Viral ‘Cheater Buster’ Sites Use Facial Recognition to Reveal Tinder Profiles

Videos demoing one of the sites have repeatedly gone viral on TikTok and other platforms recently. 404 Media verified they can locate specific people’s Tinder profiles using their photo, and found that the viral videos are produced by paid creators.
Germany considers allowing face biometric web searches by police

An expert report commissioned by AlgorithmWatch, a European digital rights organization, has pointed out the technical and legal issues with a move by Germany to expand police powers by allowing the matching of faces against photos publicly available on the internet.
German authorities are considering legislation that would make it possible for police to conduct biometric facial searches against photos publicly posted on social media platforms like Facebook, Instagram or LinkedIn.
The idea, which has been explored and criticized at several levels of government, would mean, for instance, that police could identify an unknown suspect from surveillance camera footage by matching their face against photos on the internet.
While German authorities believe this can be done without creating a database, the recently released technical report argues that it is not feasible.
In the report, its author and information scientist Professor Dirk Lewandowski argues that any practical system for such matching actually requires the creation of a database containing pre-collected and pre-processed facial data.
He points out that beyond the computational infeasibility of matching faces directly with random photos on the internet through live web searches, especially for one-to-many searches, creating a database for that purpose also contravenes some provisions of the EU AI Act.
Lewandowski explained that it is only technically possible to perform such live face-matching against stored biometric templates because no viable method exists to conduct large-scale facial recognition against public internet images without having in place a database.
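Lewandowski’s point is that one-to-many matching only works against a precomputed index: face images must first be converted into stored numerical templates (embeddings) that a probe image can be compared against. A minimal sketch of that dependency, with toy three-dimensional vectors standing in for the templates a real face-recognition model would produce:

```python
import math

# Toy "embeddings" standing in for real face templates. The point: the
# gallery below must exist BEFORE any search can run -- there is no way
# to query "all photos on the internet" live without building it first.
gallery = {
    "person_a": [0.1, 0.8, 0.6],
    "person_b": [0.9, 0.2, 0.3],
    "person_c": [0.4, 0.4, 0.8],
}

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, threshold=0.3):
    # One-to-many search: nearest stored template within a threshold.
    best_id, best_d = min(
        ((pid, distance(probe, emb)) for pid, emb in gallery.items()),
        key=lambda t: t[1],
    )
    return best_id if best_d <= threshold else None

print(identify([0.12, 0.79, 0.58]))  # near person_a's stored template
print(identify([0.0, 0.0, 0.0]))     # nothing within threshold -> None
```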
Further explaining the aspect of the EU AI Act violation, the report notes that the legislation bans real-time remote biometric identification in public spaces by law enforcement unless that is strictly necessary and authorized for specific serious crimes.
It adds that the Act also prohibits the creation of facial recognition databases through indiscriminate scraping of publicly available images. This means any law in Germany allowing police to match the faces of crime suspects against social media or other public web images, or even against a dedicated database built from them, would violate the EU Act. The author cites the example of image search engine PimEyes, which has been criticized for its reliance on databases of scraped images, although its owners have defended the “ethical use” of the website.
The technical report is viewed as another step in efforts to prevent the normalization of mass biometric surveillance and to uphold digital rights standards prescribed by the EU.
Live facial recognition systems used by police in other countries, such as the UK, have also faced sharp criticism, especially over data security and privacy concerns.
Similar concerns are being raised in Asia, where facial recognition systems are increasingly deployed to keep railway and metro passengers safe.