Category: Ethics
Germany considers allowing face biometric web searches by police
An expert report commissioned by AlgorithmWatch, a European digital rights organization, has pointed out the technical and legal issues with a move by Germany to expand police powers by allowing the matching of faces against photos publicly available on the internet.
German authorities are considering legislation that would make it possible for police to conduct live biometric facial searches against photos publicly posted on social media platforms such as Facebook, Instagram or LinkedIn.
The idea, which has been explored, and criticized, at several levels of government, would mean, for instance, that police could identify an unknown suspect in surveillance camera footage by matching their face against photos on the internet.
While German authorities believe this can be done without creating a database, the recently released technical report concludes that it is not feasible.
In the report, its author and information scientist Professor Dirk Lewandowski argues that any practical system for such matching actually requires the creation of a database containing pre-collected and pre-processed facial data.
He points out that, beyond the computational infeasibility of matching faces directly against arbitrary internet photos through live web searches, especially for one-to-many searches, creating a database for that purpose would also contravene provisions of the EU AI Act.
Lewandowski explains that such live face-matching is only technically possible against stored biometric templates, because no viable method exists for conducting large-scale facial recognition against public internet images without a database in place.
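The report's core technical point, that a one-to-many search presupposes pre-computed templates, can be illustrated with a minimal sketch. The embedding dimensions, gallery, and similarity threshold below are hypothetical stand-ins, not any system Germany is considering: the search only works because the gallery embeddings already exist, whereas a live web search would have to download and process every candidate photo at query time.

```python
import numpy as np

def identify(probe_embedding: np.ndarray,
             gallery: np.ndarray,
             threshold: float = 0.6):
    """One-to-many search: compare a probe face embedding against a
    gallery of PRE-COMPUTED embeddings (the 'database' the report says
    any practical system must build in advance).

    Returns the index of the best-matching stored template, or None
    if no template is similar enough.
    """
    # Cosine similarity of the probe against every stored template at once.
    sims = gallery @ probe_embedding / (
        np.linalg.norm(gallery, axis=1) * np.linalg.norm(probe_embedding)
    )
    best = int(np.argmax(sims))
    return best if sims[best] >= threshold else None

# Toy gallery of three pre-computed 8-dimensional "templates".
rng = np.random.default_rng(0)
gallery = rng.normal(size=(3, 8))
probe = gallery[1] + rng.normal(scale=0.01, size=8)  # near-duplicate of entry 1
print(identify(probe, gallery))  # → 1
```

The sketch matches in one vectorized step only because the gallery was embedded offline; that pre-collection and pre-processing is precisely what the report argues constitutes a database.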
Further explaining the aspect of the EU AI Act violation, the report notes that the legislation bans real-time remote biometric identification in public spaces by law enforcement unless that is strictly necessary and authorized for specific serious crimes.
It adds that the Act also prohibits the creation of facial recognition databases through indiscriminate scraping of publicly available images, meaning that any law in Germany that would allow the police to match faces of crime suspects against social media or other public web images, or even against a dedicated database, would violate the EU Act. The author cites the example of the image search engine PimEyes, which has been criticized for its reliance on databases of scraped images, although its owners have defended the “ethical use” of the website.
The technical report is viewed as another step in efforts to prevent the normalization of mass biometric surveillance and to uphold digital rights standards prescribed by the EU.
Live facial recognition systems used by police in other countries, such as the UK, have also faced sharp criticism, especially over data security and privacy concerns.
Similar concerns are being raised in Asia, where facial recognition systems are increasingly deployed to protect the safety of railway and metro passengers.
India seeks face biometric liveness, contactless fingerprint capabilities
The Unique Identification Authority of India is introducing a new initiative and seeking deepfake and liveness detection technologies to protect Aadhaar authentication from biometric spoof attacks.
UIDAI’s Scheme for Innovation and Technology Association with Aadhaar (SITAA) calls for startups, academia and industry players to develop software domestically that can protect against deepfakes and biometric presentation attacks in real-time or close to it. The program is intended to align with India’s digital public infrastructure priorities, the authority says.
The first step in the SITAA initiative is a pilot consisting of multiple challenges, for which interested entities can apply for participation by November 14, 2025.
One challenge is to develop SDKs for active and passive face liveness detection that can prevent spoofs with photos, videos, masks, morphs, deepfakes and adversarial inputs. The software should support edge and server deployments and work with various demographics and devices. User friction should be minimized with a passive liveness-first approach, UIDAI stipulates.
Advanced presentation attack detection (PAD) solutions for Aadhaar-based face authentication are sought from academic and research institutions. The solution should be accurate, compliant with privacy requirements and scalable, and interoperable with Aadhaar APIs, as well as meeting similar criteria to the SDKs above.
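UIDAI's challenge texts specify liveness outcomes rather than metrics, but PAD solutions of the kind sought above are conventionally evaluated, per ISO/IEC 30107-3, by their attack presentation classification error rate (APCER) and bona fide presentation classification error rate (BPCER). A minimal sketch of that evaluation, using entirely synthetic scores rather than anything from the SITAA programme:

```python
def pad_error_rates(scores, labels, threshold):
    """Evaluate a PAD classifier's decisions at a given threshold.

    APCER: fraction of attack presentations wrongly accepted as live.
    BPCER: fraction of bona fide presentations wrongly rejected.
    scores: higher = more likely live; labels: True = bona fide.
    """
    attacks = [s for s, l in zip(scores, labels) if not l]
    bona_fide = [s for s, l in zip(scores, labels) if l]
    apcer = sum(s >= threshold for s in attacks) / len(attacks)
    bpcer = sum(s < threshold for s in bona_fide) / len(bona_fide)
    return apcer, bpcer

# Synthetic liveness scores: four bona fide samples, four attacks.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2, 0.6, 0.1]
labels = [True, True, True, True, False, False, False, False]
print(pad_error_rates(scores, labels, threshold=0.5))  # → (0.25, 0.25)
```

Raising the threshold trades BPCER (user friction from rejected genuine users) against APCER (spoofs slipping through), which is why UIDAI's stipulation of a passive liveness-first approach to minimize friction is a threshold-tuning question as much as a model question.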
Aadhaar face authentications reached 2 billion in August, just six months after the system surpassed a billion biometric transactions.
UIDAI is also seeking SDKs for authentication with contactless fingerprint biometrics using standard smartphone cameras and low-cost devices. The technology should ensure high-quality images are captured with real-time guidance, build in preprocessing and image quality checks, and apply liveness detection. The fingerprint templates generated must be interoperable with AFIS software and run efficiently on mobile and edge devices. A demo mobile app for enrollment and authentication and a quality checking and testing tool are among the required deliverables.
Under the initiative, MeitY Startup Hub will provide technical mentoring, incubation, and accelerator support. Non-profit industry group NASSCOM (National Association of Software and Service Companies) will provide industry networking, global outreach, and entrepreneurial support, according to the announcement.
The initiative is the latest in a series of steps by UIDAI to increase the use and effectiveness of biometric liveness detection within the Aadhaar ecosystem. Other recent developments along the same path include a five-year research and development deal with the ISI, and new requirements including PAD capabilities added for certification of biometric devices for use with Aadhaar at the beginning of this year.
Petition demands UK Government roll back introduction of digital IDs
A petition has been launched demanding that the UK Government immediately rule out the introduction of digital ID cards. Campaigners argue that such a system would mark a step towards mass surveillance and state control, and believe no one should be forced to register with a centralised, government-controlled ID. They point out that ID cards […]
The post Petition launches demanding UK Government roll back on introduction of digital ID cards, as anti arguments emerge first appeared on Identity Week.
Researchers complete Europe’s first BCI implantation in quadriplegic
A team at the Technical University of Munich (TUM) University Hospital in Munich, Germany has implanted a brain-computer interface (BCI) in a patient paralysed from the neck down, marking the first procedure of this kind to be performed in Europe. The TUM team hopes to enable Michael Mehringer, the 25-year-old man who received this BCI […]
The post Researchers complete Europe’s first BCI implantation in quadriplegia patient appeared first on NeuroNews International.
Amnesty urges Scotland to ban live facial recognition for law enforcement
Amnesty International has called on the Scottish government to prohibit the use of live facial recognition (LFR), describing the biometric technology as a “mode of mass surveillance” that is incompatible with Scotland’s human rights obligations.
In letters to Police Scotland, the Scottish Police Authority (SPA) and Scottish Justice Secretary Angela Constance, the human rights organization also requested a clear and detailed look into Police Scotland’s plans to introduce LFR and a formal assessment of its compatibility with human rights laws.
“Amnesty wants to see a ban on this technology in Scotland and globally,” writes Liz Thomson, acting Scotland programme director for Amnesty International UK.
In her letters, Thomson argues that facial recognition involves “widespread and bulk monitoring, collection, storage and analysis of biometrics-based identification at scale, without consent, and without reasonable suspicion.”
Police Scotland confirmed their decision to use live facial recognition in August. The decision was met with immediate criticism from both lawmakers and rights groups.
Fourteen rights and racial justice organisations, including Amnesty, Big Brother Watch, Privacy International and Liberty, called on the law enforcement agency to “immediately abandon” their LFR plans.
“There is no specific legislation governing police use of this technology, meaning that police forces across the UK are already deploying this technology absent of meaningful accountability or oversight,” Madeleine Stone, senior advocacy officer at Big Brother Watch, said in an August release.
Police Scotland says it is currently evaluating the technology and related regulation, as well as providing assurances on bias mitigation. At its last meeting, held on September 25, the Scottish Police Authority reiterated that the Biometrics Commissioner supports the use of LFR but noted that the public needs reassurance.
According to a survey published earlier this year, the Scottish public is split over the use of the controversial surveillance system.
In 2020, the Scottish Parliament’s Justice Sub-Committee on Policing held an inquiry into Police Scotland’s LFR plans, finding “no justifiable basis” to invest in the technology. The committee also noted that using live facial recognition would be a “radical departure from Police Scotland’s fundamental principle of policing by consent.”
The technology has been subject to legal challenge elsewhere in the UK.
The Metropolitan Police found itself in court following an incident in which Shaun Thompson, an activist campaigning against knife crime, was incorrectly identified by an LFR system. The Equality and Human Rights Commission plans to provide submissions in Thompson’s case, arguing the Met Police’s current LFR policies go against the rights laid out in the European Convention on Human Rights.
In 2020, the Court of Appeal ruled that the South Wales Police had violated privacy rights, data protection regulations, and equality legislation through the deployment of facial recognition technology.
JP Morgan’s biometric mandate signals new era of workplace surveillance
When employees begin reporting to JPMorgan Chase’s new Manhattan headquarters later this year, they will be required to submit their biometric data to enter the building. The policy, a first among major U.S. banks, makes biometric enrollment mandatory for staff assigned to the $3 billion, 60-story tower at 270 Park Avenue.
JPMorgan says the system is part of a modern security program designed to protect workers and streamline access, but it has sparked growing concern over privacy, consent, and the expanding use of surveillance technology in the workplace.
Internal communications reviewed by the Financial Times and The Guardian confirm that JPMorgan employees assigned to the new building have been told they must enroll their fingerprints or undergo an eye scan to access the premises.
Earlier drafts of the plan described the system as voluntary, but reports say that language has quietly disappeared. A company spokesperson declined to clarify how data will be stored or how long it will be retained, citing security concerns. Some staff reportedly may retain the option of using a badge instead, though the criteria for exemption remain undisclosed.
The biometric access requirement is being rolled out alongside a Work at JPMC smartphone app that doubles as a digital ID badge and internal service platform, allowing staff to order meals, navigate the building, or register visitors.
According to its listing in the Google Play Store, the app currently claims “no data collected,” though that self-reported disclosure does not replace a formal employee privacy notice.
In combination, the app and access system will allow the bank to track who enters the building, when, and potentially how long they stay on each floor, a level of visibility that, while defensible as security modernization, unsettles those wary of the creeping normalization of biometric surveillance in the workplace.
Executives have promoted the new headquarters as the “most technologically advanced” corporate campus in New York, saying it is designed to embody efficiency and safety. Reports suggest that the decision to make biometrics mandatory followed a series of high-profile crimes in Midtown, including the December 2024 killing of UnitedHealthcare CEO Brian Thompson. Within the bank, the justification has been framed as protecting employees in a volatile urban environment.
Yet, the decision thrusts JPMorgan into largely uncharted territory. No other major U.S. bank has been publicly documented as requiring its employees to submit biometric data merely to enter a headquarters building.
In other contexts, biometrics in finance have been used primarily for authentication. U.S. Bank, for example, has tested voice biometrics to replace passwords for certain customer-service and internal systems. The pilot aimed to reduce friction and fraud risk, not to manage physical access.
Other large banks, from Citigroup to Bank of America, have explored biometric technologies internally, but there is no evidence any have adopted a mandatory, company-wide biometric entry policy.
Industry analysts say that while JPMorgan’s move is unusual, it aligns with a broader pattern. “Banks and financial organizations use biometrics as internal control, securing staff access to sensitive data and restricted areas,” said an assessment by HID.
That logic, tying biometric verification to high-risk environments such as trading floors or data centers, has long guided the sector’s security philosophy. JPMorgan, though, is applying the same logic to an entire corporate headquarters, covering thousands of workers from senior executives to administrative staff.
The legal environment helps explain why this can happen in New York but would be riskier in states like Illinois. Unlike Illinois’s Biometric Information Privacy Act, which requires written consent, retention schedules, and penalties for misuse, New York has no comparable statute regulating employer use of biometric data.
A 2021 New York City ordinance mandates signage and bans the sale of biometric identifiers in public-facing establishments but explicitly exempts financial institutions. That leaves JPMorgan’s policy largely governed by internal privacy statements and whatever contractual assurances exist with its technology vendors.
In its London offices, JPMorgan already uses a voluntary hand-geometry system to control access to secure zones, which the company says stores only encrypted templates that it cannot reverse-engineer.
The mandatory program in New York appears to build on that experience, though the bank has not released technical details about encryption, storage, or data segregation between systems. Nor has it disclosed whether a third-party vendor will manage the biometric templates or if they will be housed on JPMorgan servers.
Critics warn that the bank’s decision could normalize coercive data collection across white-collar workplaces. Biometric identifiers are immutable. Once compromised, they cannot be replaced like a password or badge.
Labor-rights attorneys note that, even if employees technically consent, the choice is illusory when access to one’s job depends on enrollment. They also point out that biometric logs could theoretically be correlated with productivity or attendance data, creating a new vector for workplace monitoring.
Still, corporate adoption is accelerating, propelled by a security industry eager to market “frictionless” access control. Vendors pitch biometrics as faster and more reliable than keycards or PINs, eliminating lost credentials and streamlining compliance audits.
In sectors handling large financial transactions, the case for stronger authentication is easy to make. For banks that have weathered cyber-attacks and insider threats, the allure of definitive identity verification is powerful.
The unanswered questions revolve around governance and accountability. Will JPMorgan publish a formal biometric privacy policy for employees, outlining how long data is retained and under what conditions it will be deleted? Who audits the system? What rights do workers have to challenge inaccuracies or demand erasure? None of that is public.
The bank has remained silent even as press coverage intensifies.