India seeks face biometric liveness, contactless fingerprint capabilities
The Unique Identification Authority of India is introducing a new initiative and seeking deepfake and liveness detection technologies to protect Aadhaar authentication from biometric spoof attacks.
UIDAI’s Scheme for Innovation and Technology Association with Aadhaar (SITAA) calls for startups, academia and industry players to develop software domestically that can protect against deepfakes and biometric presentation attacks in real time or close to it. The program is intended to align with India’s digital public infrastructure priorities, the authority says.
The first step in the SITAA initiative is a pilot consisting of multiple challenges; interested entities can apply to participate by November 14, 2025.
One challenge is to develop SDKs for active and passive face liveness detection that can prevent spoofs with photos, videos, masks, morphs, deepfakes and adversarial inputs. The software should support edge and server deployments and work with various demographics and devices. User friction should be minimized with a passive liveness-first approach, UIDAI stipulates.
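UIDAI has not published an interface specification for these SDKs, but a passive-first flow of the kind the challenge describes might look roughly like the sketch below. Every name, threshold and the stub scoring function here is a hypothetical illustration, not part of the tender.

```python
# Hypothetical sketch of a passive-first liveness flow; all names,
# thresholds, and the stub scoring are assumptions for illustration.
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    LIVE = "live"
    SPOOF = "spoof"
    INCONCLUSIVE = "inconclusive"


@dataclass
class LivenessResult:
    verdict: Verdict
    score: float             # 0.0 = certain spoof, 1.0 = certain live
    attack_hint: str | None  # e.g. "print", "replay", "mask", "deepfake"


def passive_liveness(frame: bytes) -> LivenessResult:
    """Single-frame passive check (texture, moire, depth cues).
    The scoring here is a stub standing in for real model inference."""
    score = (len(frame) % 100) / 100  # stub: replace with a PAD model
    if score >= 0.9:
        return LivenessResult(Verdict.LIVE, score, None)
    if score <= 0.3:
        return LivenessResult(Verdict.SPOOF, score, "replay")
    return LivenessResult(Verdict.INCONCLUSIVE, score, None)


def check_liveness(frames: list[bytes]) -> LivenessResult:
    """Passive-first flow: escalate to an active challenge (blink,
    head turn) only when the passive check is unsure -- this ordering
    is what keeps user friction low."""
    result = passive_liveness(frames[0])
    if result.verdict is not Verdict.INCONCLUSIVE:
        return result
    # Active fallback: a real SDK would issue a challenge and analyze
    # the response burst; here we simply pool passive scores over frames.
    avg = sum(passive_liveness(f).score for f in frames) / len(frames)
    verdict = Verdict.LIVE if avg >= 0.6 else Verdict.SPOOF
    return LivenessResult(verdict, avg, None)
```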
Advanced presentation attack detection (PAD) solutions for Aadhaar-based face authentication are sought from academic and research institutions. The solution should be accurate, privacy-compliant, scalable and interoperable with Aadhaar APIs, and meet criteria similar to those for the SDKs above.
Aadhaar face authentications reached 2 billion in August, just six months after the system surpassed a billion biometric transactions.
UIDAI is also seeking SDKs for authentication with contactless fingerprint biometrics using standard smartphone cameras and low-cost devices. The technology should ensure high-quality images are captured with real-time guidance, build in preprocessing and image quality checks, and apply liveness detection. The fingerprint templates generated must be interoperable with AFIS software, and the software must run efficiently on mobile and edge devices. A demo mobile app for enrollment and authentication and a quality checking and testing tool are among the required deliverables.
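Again, no API has been published; the sketch below only illustrates the capture-guidance, quality-gate, liveness and template-extraction stages the tender lists. All function names, thresholds and the quality metric are assumptions.

```python
# Hypothetical pipeline sketch for contactless fingerprint capture;
# every name and threshold is an assumption for illustration.
from dataclasses import dataclass


@dataclass
class CaptureResult:
    accepted: bool
    guidance: str | None    # real-time prompt shown to the user
    template: bytes | None  # minutiae template intended for AFIS interop


def quality_score(image: bytes) -> float:
    """Stub for an image-quality metric (focus, ridge clarity, coverage);
    a production SDK would use something like an NFIQ 2-style score."""
    return (len(image) % 100) / 100  # placeholder


def is_live_finger(image: bytes) -> bool:
    return True  # stub: replace with a presentation attack detection model


def preprocess(image: bytes) -> bytes:
    return image  # stub: contrast normalization, ridge enhancement, scaling


def extract_template(image: bytes) -> bytes:
    return b"template"  # stub: minutiae extraction in an AFIS-compatible format


def capture_fingerprint(image: bytes) -> CaptureResult:
    # 1. Quality gate with real-time guidance, so recapture happens on-device.
    if quality_score(image) < 0.5:
        return CaptureResult(False, "Move closer and hold the phone steady", None)
    # 2. Liveness gate against photo and replica spoofs.
    if not is_live_finger(image):
        return CaptureResult(False, "Presentation attack suspected", None)
    # 3. Preprocess, then extract an interoperable template.
    return CaptureResult(True, None, extract_template(preprocess(image)))
```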
Under the initiative, MeitY Startup Hub will provide technical mentoring, incubation, and accelerator support. Non-profit industry group NASSCOM (National Association of Software and Service Companies) will provide industry networking, global outreach, and entrepreneurial support, according to the announcement.
The initiative is the latest in a series of steps by UIDAI to increase the use and effectiveness of biometric liveness detection within the Aadhaar ecosystem. Other recent developments along the same path include a five-year research and development deal with the ISI, and PAD capabilities added at the beginning of this year to the certification requirements for biometric devices used with Aadhaar.
Petition demands UK Government roll back introduction of digital IDs
A petition has been launched demanding that the UK Government immediately rule out the introduction of digital ID cards. Campaigners argue that such a system would mark a step towards mass surveillance and state control, and believe no one should be forced to register with a centralised, government-controlled ID. They point out that ID cards […]
Amadeus, Lufthansa test travel scenarios for EU Digital Identity Wallet
This summer, Amadeus and Lufthansa successfully tested travel scenarios for the EU Digital Identity Wallet, which will simplify travellers’ journeys by handling check-in, bag drop, boarding and enrollment in biometric security procedures with one tap. Mirroring the big wallets that transformed the payment experience, digital ID and travel credentials can now be securely housed in […]
Amnesty urges Scotland to ban live facial recognition for law enforcement
Amnesty International has called on the Scottish government to prohibit the use of live facial recognition (LFR), describing the biometric technology as a “mode of mass surveillance” which is incompatible with Scotland’s human rights obligations.
In letters to Police Scotland, the Scottish Police Authority (SPA) and Scottish Justice Secretary Angela Constance, the human rights organization also requested a clear and detailed account of Police Scotland’s plans to introduce LFR and a formal assessment of its compatibility with human rights laws.
“Amnesty wants to see a ban on this technology in Scotland and globally,” writes Liz Thomson, acting Scotland programme director for Amnesty International UK.
In her letters, Thomson argues that facial recognition involves “widespread and bulk monitoring, collection, storage and analysis of biometrics-based identification at scale, without consent, and without reasonable suspicion.”
Police Scotland confirmed their decision to use live facial recognition in August. The decision was met with immediate criticism from both lawmakers and rights groups.
Fourteen rights and racial justice organisations, including Amnesty, Big Brother Watch, Privacy International and Liberty, called on the law enforcement agency to “immediately abandon” its LFR plans.
“There is no specific legislation governing police use of this technology, meaning that police forces across the UK are already deploying this technology absent of meaningful accountability or oversight,” Madeleine Stone, senior advocacy officer at Big Brother Watch, said in an August release.
Police Scotland says it is currently evaluating the technology and related regulation, as well as working on assurances related to bias mitigation. At its most recent meeting, held on September 25, the Scottish Police Authority reiterated that the Biometrics Commissioner supports the use of LFR but noted that the public needs reassurance.
According to a survey published earlier this year, the Scottish public is split over the use of the controversial surveillance system.
In 2020, the Scottish Parliament’s Justice Sub-Committee on Policing held an inquiry into Police Scotland’s LFR plans, finding “no justifiable basis” to invest in the technology. The committee also noted that using live facial recognition would be a “radical departure from Police Scotland’s fundamental principle of policing by consent.”
The technology has been subject to legal challenge elsewhere in the UK.
The Metropolitan Police found itself in court following an incident in which Shaun Thompson, an activist campaigning against knife crime, was incorrectly identified by an LFR system. The Equality and Human Rights Commission plans to provide submissions in Thompson’s case, arguing the Met Police’s current LFR policies go against the rights laid out by the European Convention on Human Rights.
In 2020, the Court of Appeal ruled that the South Wales Police had violated privacy rights, data protection regulations, and equality legislation through the deployment of facial recognition technology.
JPMorgan’s biometric mandate signals new era of workplace surveillance
When employees begin reporting to JPMorgan Chase’s new Manhattan headquarters later this year, they will be required to submit their biometric data to enter the building. The policy, a first among major U.S. banks, makes biometric enrollment mandatory for staff assigned to the $3 billion, 60-story tower at 270 Park Avenue.
JPMorgan says the system is part of a modern security program designed to protect workers and streamline access, but it has sparked growing concern over privacy, consent, and the expanding use of surveillance technology in the workplace.
Internal communications reviewed by the Financial Times and The Guardian confirm that JPMorgan employees assigned to the new building have been told they must enroll their fingerprints or undergo an eye scan to access the premises.
Earlier drafts of the plan described the system as voluntary, but reports say that language has quietly disappeared. A company spokesperson declined to clarify how data will be stored or how long it will be retained, citing security concerns. Some staff reportedly may retain the option of using a badge instead, though the criteria for exemption remain undisclosed.
The biometric access requirement is being rolled out alongside a Work at JPMC smartphone app that doubles as a digital ID badge and internal service platform, allowing staff to order meals, navigate the building, or register visitors.
According to its listing in the Google Play Store, the app currently claims “no data collected,” though that self-reported disclosure does not replace a formal employee privacy notice.
In combination, the app and access system will allow the bank to track who enters the building, when, and potentially how long they stay on each floor, a level of visibility that, while defensible as security modernization, unsettles those wary of the creeping normalization of biometric surveillance in the workplace.
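To make that concern concrete: once entry and exit events are logged per floor, a per-employee dwell-time profile falls out of a few lines of code. The event format below is hypothetical, not JPMorgan’s actual logging schema.

```python
# Illustration of why access logs worry privacy advocates: pairing
# per-floor "in"/"out" events yields a dwell-time profile per employee.
from collections import defaultdict
from datetime import datetime

# Hypothetical event format: (employee_id, floor, "in"/"out", timestamp)
events = [
    ("e123", 12, "in",  datetime(2025, 10, 20, 9, 2)),
    ("e123", 12, "out", datetime(2025, 10, 20, 12, 30)),
    ("e123", 41, "in",  datetime(2025, 10, 20, 12, 35)),
    ("e123", 41, "out", datetime(2025, 10, 20, 17, 1)),
]

dwell = defaultdict(lambda: defaultdict(float))  # employee -> floor -> hours
open_entries = {}
for emp, floor, direction, ts in events:
    if direction == "in":
        open_entries[(emp, floor)] = ts
    elif (emp, floor) in open_entries:
        start = open_entries.pop((emp, floor))
        dwell[emp][floor] += (ts - start).total_seconds() / 3600

print({f: round(h, 2) for f, h in dwell["e123"].items()})
# -> {12: 3.47, 41: 4.43}  hours spent per floor by one employee
```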
Executives have promoted the new headquarters as the “most technologically advanced” corporate campus in New York, designed to embody efficiency and safety. Reports suggest that the decision to make biometrics mandatory followed a series of high-profile crimes in Midtown, including the December 2024 killing of UnitedHealthcare CEO Brian Thompson. Within the bank, the justification has been framed as protecting employees in a volatile urban environment.
Yet, the decision thrusts JPMorgan into largely uncharted territory. No other major U.S. bank has been publicly documented as requiring its employees to submit biometric data merely to enter a headquarters building.
In other contexts, biometrics in finance have been used primarily for authentication. U.S. Bank, for example, has tested voice biometrics to replace passwords for certain customer-service and internal systems. The pilot aimed to reduce friction and fraud risk, not to manage physical access.
Other large banks, from Citigroup to Bank of America, have explored biometric technologies internally, but there is no evidence any have adopted a mandatory, company-wide biometric entry policy.
Industry analysts say that while JPMorgan’s move is unusual, it aligns with a broader pattern. “Banks and financial organizations use biometrics as internal control, securing staff access to sensitive data and restricted areas,” reads an assessment by HID.
That logic – tying biometric verification to high-risk environments such as trading floors or data centers – has long guided the sector’s security philosophy. JPMorgan, though, is applying the same logic to an entire corporate headquarters, covering thousands of workers from senior executives to administrative staff.
The legal environment helps explain why this can happen in New York but would be riskier in states like Illinois. Unlike Illinois’s Biometric Information Privacy Act, which requires written consent, retention schedules, and penalties for misuse, New York has no comparable statute regulating employer use of biometric data.
A 2021 New York City ordinance mandates signage and bans the sale of biometric identifiers in public-facing establishments but explicitly exempts financial institutions. That leaves JPMorgan’s policy largely governed by internal privacy statements and whatever contractual assurances exist with its technology vendors.
In its London offices, JPMorgan already uses a voluntary hand-geometry system to control access to secure zones; the company says the system stores only encrypted templates that it cannot reverse-engineer.
The mandatory program in New York appears to build on that experience, though the bank has not released technical details about encryption, storage, or data segregation between systems. Nor has it disclosed whether a third-party vendor will manage the biometric templates or if they will be housed on JPMorgan servers.
Critics warn that the bank’s decision could normalize coercive data collection across white-collar workplaces. Biometric identifiers are immutable. Once compromised, they cannot be replaced like a password or badge.
Labor-rights attorneys note that, even if employees technically consent, the choice is illusory when access to one’s job depends on enrollment. They also point out that biometric logs could theoretically be correlated with productivity or attendance data, creating a new vector for workplace monitoring.
Still, corporate adoption is accelerating, propelled by a security industry eager to market “frictionless” access control. Vendors pitch biometrics as faster and more reliable than keycards or PINs, eliminating lost credentials and streamlining compliance audits.
In sectors handling large financial transactions, the case for stronger authentication is easy to make. For banks that have weathered cyber-attacks and insider threats, the allure of definitive identity verification is powerful.
The unanswered questions revolve around governance and accountability. Will JPMorgan publish a formal biometric privacy policy for employees, outlining how long data is retained and under what conditions it will be deleted? Who audits the system? What rights do workers have to challenge inaccuracies or demand erasure? None of that is public.
The bank has remained silent even as press coverage intensifies.
Aurora, Colorado police move closer to using facial recognition technology
After years of internal planning and public debate about biometric surveillance, the Aurora Police Department (APD) has asked city leaders to formally allow it to use facial recognition software in criminal investigations, a move that has sparked questions about privacy, accuracy, and the proper limits of law enforcement technology.
The two systems under consideration are ROC’s facial recognition service within LexisNexis’ Lumen/AVCC software platform, and Clearview AI. The department recently posted draft accountability reports for both systems to its website, opening a public comment period as required under Colorado law.
Under the draft proposal, Aurora would deploy facial recognition only in after-the-fact investigations when detectives already have reason to believe a crime has occurred and seek to identify persons of interest, witnesses, or victims.
The department emphasizes that algorithmic matches would not serve as probable cause for arrest. Instead, they would act as investigative leads requiring corroboration through traditional detective work.
To guard against misuse, Aurora’s draft policy outlines several layers of human review. Any candidate list produced by the software must be screened by an operator, subjected to peer review, and approved by a supervisor before a lead reaches detectives. The policy also prohibits live or continuous surveillance, immigration enforcement, harassment, and persistent tracking without a court order.
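As a rough illustration of how such a gated chain works (the role names and data structures here are hypothetical, not Aurora’s actual tooling): a candidate match only becomes a detective-facing lead after every tier has signed off in order.

```python
# Minimal model of an ordered, multi-tier review chain; hypothetical names.
from dataclasses import dataclass, field


@dataclass
class Candidate:
    name: str
    approvals: list[str] = field(default_factory=list)


REQUIRED_CHAIN = ["operator", "peer_reviewer", "supervisor"]


def record_review(candidate: Candidate, role: str, approved: bool) -> None:
    """Reviews must arrive in chain order; a rejection never appends,
    so a rejected candidate can never satisfy the full chain."""
    if len(candidate.approvals) == len(REQUIRED_CHAIN):
        raise ValueError("Review chain already complete")
    expected = REQUIRED_CHAIN[len(candidate.approvals)]
    if role != expected:
        raise ValueError(f"Out-of-order review: expected {expected}, got {role}")
    if approved:
        candidate.approvals.append(role)


def releasable_to_detectives(candidate: Candidate) -> bool:
    """A lead reaches detectives only after every tier signs off, and
    even then it is an investigative lead, not probable cause."""
    return candidate.approvals == REQUIRED_CHAIN
```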
The draft policy’s provisions reflect Senate Bill 22-113, a 2022 Colorado law that created statewide guardrails for government use of facial recognition. The statute requires public notice and publication of accountability reports, formal human review of any outcome that could affect an individual, and a public comment process before deployment.
The law does not automatically authorize all forms of facial recognition. It prohibits real-time or continuous tracking unless justified by a warrant or court order, and mandates that agencies demonstrate necessity and oversight mechanisms.
In Aurora’s case, this means the city council must still approve any deployment, the department must host multiple public meetings, and the public comment period must remain open for at least 90 days.
While APD has discussed potential budget impacts with city staff, detailed cost projections have not been finalized publicly. Earlier internal estimates referenced a multi-year implementation plan and limited initial access for trained detectives, but those figures have not been confirmed in official city documents.
Aurora already uses LexisNexis’ Lumen system for information sharing with other Colorado law-enforcement agencies, which could allow integration without significant new infrastructure costs.
Even as city staff argue that facial recognition could improve investigative efficiency, opposition has surfaced over transparency, algorithmic bias, and the potential erosion of civil liberties.
Councilmember Alison Coombs, the lone dissenting voice when a committee advanced the proposal, warned that its “scope is extremely broad” and raised concerns about consent and access to personal data. Councilmember Curtis Gardner questioned the risk of wrongful identification, citing national cases where misidentifications led to innocent people being detained.
Supporters counter that state law and Aurora’s internal policy ensure multiple layers of human review and corroboration, but privacy advocates remain skeptical. The American Civil Liberties Union of Colorado’s Public Policy Director, Anaya Robinson, said facial recognition systems “have repeatedly misidentified women, people of color, and individuals with disabilities,” adding that “even when human review is required, the bias built into these systems can produce unjust outcomes.”
The two systems under review differ sharply in data scope. Lumen’s version, integrated with the Colorado Information Sharing Consortium, draws primarily from criminal-justice databases already accessible to law enforcement. Clearview AI, by contrast, scrapes billions of images from the open internet, raising deeper questions about consent, privacy, and the accuracy of matches generated from uncontrolled, real-world photographs.
Aurora’s draft accountability reports state that any use of facial recognition would be subject to internal audits and could be suspended if error rates or disparities exceed acceptable thresholds. However, the specific numerical benchmarks for such suspensions have not yet been published.
Colorado’s regulatory environment for biometric and AI technologies continues to evolve. In 2024, lawmakers passed House Bill 24-1130, a sweeping biometric privacy and AI accountability measure that took effect in July.
The law strengthens requirements for explicit consent, limits how public agencies and private entities can share or retain biometric identifiers, including facial templates, and mandates clear disclosures when automated decision systems are used in ways that affect individuals’ rights or access to services.
Together, SB 22-113 and HB 24-1130 place Colorado among the most tightly regulated states for facial recognition and biometric data, even as cities like Aurora weigh whether to authorize such technologies at the municipal level.
The proposal has cleared an initial city-council committee and is scheduled for formal readings before the full council, along with additional opportunities for public comment. If ultimately approved, Aurora would join a small number of Colorado jurisdictions that have explicitly authorized police use of facial recognition under the state’s regulatory framework.