New York City lawmakers push sweeping restrictions on private sector biometric surveillance

New York City lawmakers are weighing a sweeping new attempt to curb the spread of biometric surveillance in everyday life, advancing legislation that would sharply restrict the ability of businesses and landlords to collect facial scans, voiceprints, fingerprints, and other uniquely identifying data from the public.

The proposals were the subject of a lengthy hearing this week before the City Council’s Committee on Technology, where councilmembers, privacy advocates, and city officials debated whether biometric technologies have quietly expanded into retail stores and residential buildings with little oversight.

At issue were two bills that together would make New York City one of the most restrictive jurisdictions in the U.S. when it comes to private sector biometric surveillance.

The push reflects a growing concern among lawmakers that the technology has moved beyond narrowly defined security uses and is beginning to reshape the way businesses monitor customers and tenants.

Councilmember Shahana Hanif, the sponsor of one of the bills, argued that biometric identifiers present a fundamentally different privacy risk than ordinary data.

“You cannot cancel your face,” Hanif said during the hearing, emphasizing that biometric identifiers cannot be replaced once compromised.

The legislation responds in part to revelations that some retailers have begun experimenting with facial recognition systems designed to identify suspected shoplifters or repeat offenders.

One example frequently cited by lawmakers involves grocery chain Wegmans, which has acknowledged using facial recognition technology in certain locations as part of its loss prevention strategy.

Under the first proposal, businesses that qualify as places of public accommodation would be barred from using biometric recognition technology to identify or verify customers.

The measure would go far beyond the city’s current rules, which mainly require businesses to post signs notifying customers if biometric information is being collected.

The bill would also expand how biometric identifiers are defined under city law. The definition would include not only fingerprints and iris scans but also voiceprints, facial geometry, and even movement patterns that could be used to identify an individual.

A second proposal introduced by Councilmember Pierina Ana Sanchez focuses on residential buildings. It would prohibit landlords from installing or using biometric recognition systems that identify tenants or their guests.

Lawmakers behind the measure say the growing use of facial recognition door entry systems in apartment buildings raises serious concerns about tenant privacy and potential surveillance inside private residences.

Together, the bills reflect a broader shift in the debate over biometric technology.

Earlier efforts in New York primarily focused on transparency, requiring businesses to disclose when they were collecting biometric data. The new proposals instead move toward outright prohibition.

Advocates for the legislation say that shift is necessary because disclosure alone has done little to slow adoption.

Civil liberties groups have warned for years that biometric surveillance systems can enable continuous tracking of people’s movements and associations.

Once deployed in retail stores or apartment buildings, they argue, such systems can quietly build databases of people who have done nothing wrong.

The debate gained additional momentum following several high-profile incidents in New York. In one widely publicized case, Madison Square Garden used facial recognition to identify and deny entry to lawyers affiliated with firms that were involved in litigation against the venue’s owner.

The incident highlighted how biometric tools could be used not just for security but also for private blacklisting.

At the same time, retailers have argued that biometric tools are becoming increasingly important for security as organized retail theft has grown more sophisticated.

Companies deploying the technology say facial recognition allows them to identify repeat offenders and prevent theft without requiring constant manual monitoring by security staff.

That argument did not persuade many councilmembers at the hearing, where lawmakers repeatedly pressed city officials and industry representatives about the risks of misidentification and data misuse.

The hearing also revealed gaps in the city’s own understanding of how biometric technologies are being used.

During testimony, representatives from the New York City Office of Technology and Innovation (OTI) acknowledged that the city does not maintain a comprehensive inventory of biometric data collection across agencies.

Alex Foard, OTI assistant commissioner of research and collaboration, explained that the office only tracks certain technologies reported under Local Law 35, a 2022 New York City regulation requiring city agencies to annually disclose their use of automated, AI, or algorithmic tools that impact the public.

That disclosure system, though, does not capture every instance in which biometric data may be collected or stored. Some uses may fall outside the reporting framework, meaning that even city officials cannot fully account for how the technology is deployed across government.

“I do want to indicate that agencies could be using biometric data in ways that aren’t involved in algorithmic decision making or AI or other uses, in which case we would not have visibility into that collection,” Foard said.

The lack of clarity troubled several councilmembers, who argued that if the city government itself cannot fully track biometric technologies, it becomes even harder to regulate their use in the private sector.

The Office of Technology and Innovation did not take a formal position on the proposed bans but acknowledged the complexity of regulating rapidly evolving surveillance tools.

In written testimony submitted to the committee, the agency said it supports efforts to strengthen privacy protections while ensuring that legitimate uses of technology can still be evaluated carefully.

The debate also reflects a broader national trend. Across the U.S., lawmakers are grappling with how to regulate biometric systems that can identify people automatically through cameras, microphones, or other sensors.

Many of the existing laws focus on notice and consent requirements, requiring companies to disclose when biometric data is collected. Illinois’ Biometric Information Privacy Act, for example, requires written consent before companies can gather biometric identifiers.

New York City already has a limited version of that approach. Current city law requires businesses that collect biometric information to notify customers through posted signs, but it does not prohibit the practice itself.

Supporters of the new legislation argue that those transparency requirements have proven insufficient. They point out that most consumers either do not notice the signs or do not understand the implications of biometric tracking systems that can log and analyze their movements across multiple visits.

Opponents, however, warn that an outright ban could create unintended consequences.

Retail industry groups say the technology can help prevent theft and improve security for both employees and customers. Landlords have also argued that biometric entry systems can be more secure than traditional key fobs or passcodes, which can easily be copied or shared.

Still, the political momentum in New York appears to be shifting toward stronger restrictions.

Privacy advocates argue that facial recognition and similar tools create the infrastructure for constant monitoring, allowing private actors to track people’s movements through stores, apartment buildings, and other everyday spaces.

For councilmembers pushing more restrictive legislation, the stakes are about more than just consumer privacy. They see biometric surveillance as a technology that could fundamentally alter the relationship between individuals and the spaces they inhabit, turning routine activities such as shopping or entering one’s apartment building into moments of automated identification.

United Airlines can permanently ban passengers who don’t wear headphones

United Airlines has updated its “Contract of Carriage” to include a line that requires passengers to wear headphones while listening to audio and video content on flights, CBS News reports. Under the updated contract, United can “refuse transport on a permanent or temporary basis” to passengers who don’t follow a list of rules, which now […]

Integrated Biometrics fingerprint scanners facilitate digital ID for inclusion in Ethiopia

Kojak fingerprint scanners from Integrated Biometrics are playing a major role in the enrollment of citizens for Ethiopia’s Fayda digital ID, advancing digital inclusion across the country’s population.

Since 2023, Ethiopia has been implementing the Digital ID for Inclusion and Services Project with funding from the World Bank.

More than 30 million people have so far been registered for the ID initiative, with a plan by the Ethiopia National Identity Program (NIDP) to reach 90 million people by the end of 2027. Over 90 agencies have also integrated their services with the digital ID system, making identity verification and authentication easier.

Integrated Biometrics is one of the technology partners supporting the project, and the U.S. firm says in a case study that NIDP is using its Kojak scanner for fingerprint capture during identity enrollment. The scanner is MOSIP-compliant.

“This lightweight scanner (725 grams) rapidly collects prints from dry fingers and can operate in direct sunlight and extreme temperatures. With very low power consumption, the registration kit can operate for lengthy periods of time without an electrical connection,” the company writes, adding that to reach NIDP’s enrollment target, the durability of Kojak scanners, “which exceeds US military standards, is critical.”

The case study recalls the challenges Ethiopians faced in accessing public services without a legal or digital identity in the past. It notes that the Fayda digital ID not only facilitates access to a wide range of services but is also an important tool to support Ethiopia’s 2030 digital transformation strategy and digital economy growth.

The characteristics of the scanners, according to Integrated Biometrics, are contributing to the success rate of Fayda enrollment, as they make it possible for NIDP to “enroll participants as close as possible to their homes, increasing the likelihood of successful registration.”

The government is also stepping up efforts to ensure the national digital ID is issued to refugees, giving them a sense of inclusion in Ethiopian society, something the UNHCR lauded earlier this year.

With the Fayda digital ID, Ethiopians now have easy access to a wide range of services from public institutions and the private sector.

In the capital Addis Ababa, the Fayda has been linked with the city’s residency card system, which means that residents no longer need to submit biometrics anew when applying for a residency card, according to Addis Fortune.

There is also a policy from the country’s central bank requiring all commercial banks to integrate customer accounts with their Fayda ID details; the official deadline for full compliance with the directive is March 30.

Tycoon 2FA phishing empire dismantled in global cybercrime crackdown

A sprawling cybercrime platform that helped thousands of attackers bypass modern authentication protections has been disrupted in a coordinated global operation led by technology companies, cybersecurity researchers and law enforcement agencies.

The takedown targeted Tycoon 2FA, one of the most prolific phishing-as-a-service platforms in operation in recent years, and underscores how identity-based attacks have become a central battleground in modern cybersecurity.

As organizations move more operations into cloud platforms and rely on digital identities to manage access, phishing campaigns that compromise those identities can provide attackers with direct entry into critical systems.

Through a combination of legal action, infrastructure seizures and cross-border intelligence sharing, the coalition dismantled key parts of the service’s technical backbone and seized hundreds of domains used to support its campaigns.

The operation illustrates both the scale of the modern phishing economy and the increasingly coordinated efforts required to disrupt it.

At the center of the disruption effort was a partnership involving Microsoft, Europol, and a broad set of cybersecurity companies and nonprofit organizations.

The coalition included firms such as Trend Micro, Cloudflare, Intel471, Proofpoint, SpyCloud and Coinbase, along with intelligence sharing groups and law enforcement agencies from multiple European countries.

The combined effort targeted the infrastructure powering Tycoon 2FA’s operations, including the domains used to host phishing pages and administrative panels.

As part of the operation, investigators seized roughly 330 domains that formed the core of the service’s infrastructure. These domains hosted control panels used by cybercriminals as well as fake login pages designed to harvest credentials from victims.

The seizures were carried out under a court order in the United States and supported by coordinated enforcement actions in several European jurisdictions.

Tycoon 2FA had emerged as one of the most significant drivers of phishing activity worldwide since it appeared around 2023. The platform allowed criminals to conduct sophisticated attacks that could defeat multi-factor authentication, the security measure widely adopted by organizations to protect accounts beyond a simple password.

By mid-2025, the service was responsible for about 62 percent of the phishing attempts blocked by Microsoft’s systems, with some months seeing more than 30 million malicious emails sent through the infrastructure.

The impact was substantial. Researchers estimate that the service has been linked to roughly 96,000 victims globally, including tens of thousands of Microsoft customers whose accounts were targeted or compromised.

Healthcare organizations, schools and universities were among the hardest hit sectors, with phishing campaigns disrupting operations and exposing sensitive data.

The platform’s success lay in its design. Tycoon 2FA operated as an adversary-in-the-middle phishing system, a technique that intercepts communication between a victim and a legitimate service during the login process.

When a user entered their credentials and responded to authentication prompts, the system relayed that information in real time to the actual service while simultaneously capturing passwords, authentication codes and session cookies.

Those stolen session tokens allowed attackers to log in to accounts even if the password was later changed, unless all active sessions were revoked. This approach effectively undermined traditional multi-factor authentication protections, which were designed to stop attackers who only possess a password.

By capturing authentication tokens generated during a valid login session, the Tycoon 2FA infrastructure allowed attackers to assume the identity of legitimate users and move through systems without triggering many security alerts.
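The session-cookie mechanics described above can be illustrated with a minimal, hypothetical sketch (not any vendor’s implementation): a server that authorizes requests by session token alone will keep honoring a stolen token even after the password changes, until sessions are explicitly revoked. The `SessionStore` class and its methods are invented for illustration.

```python
import secrets

class SessionStore:
    """Toy model of cookie-based session auth (illustration only)."""

    def __init__(self):
        self.password = "old-password"
        self.active_sessions = set()

    def login(self, password):
        """Issue a session token if the password (plus MFA, elided here) checks out."""
        if password != self.password:
            return None
        token = secrets.token_hex(16)
        self.active_sessions.add(token)
        return token

    def is_authenticated(self, token):
        # Subsequent requests are authorized by the token alone;
        # the password is never re-checked.
        return token in self.active_sessions

    def change_password(self, new_password):
        self.password = new_password  # existing sessions remain valid!

    def revoke_all_sessions(self):
        self.active_sessions.clear()

store = SessionStore()
stolen = store.login("old-password")   # token captured by the attacker in transit
store.change_password("new-password")
print(store.is_authenticated(stolen))  # True: a password change alone doesn't help
store.revoke_all_sessions()
print(store.is_authenticated(stolen))  # False: revocation invalidates the token
```

This is why incident responders typically pair a credential reset with bulk session revocation after an adversary-in-the-middle compromise.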

The service also lowered the barrier to entry for cybercrime. Tycoon 2FA operated as a subscription-based phishing-as-a-service platform, meaning criminals could rent access to the toolkit without needing deep technical skills.

The system provided prebuilt phishing templates that mimicked widely used services such as Microsoft 365 and Google Workspace, along with hosting infrastructure and dashboards for managing campaigns and viewing stolen credentials.

This model reflects a broader trend in the cybercrime ecosystem where specialized services are sold or leased in underground markets to enable large-scale attacks.

Instead of building tools themselves, attackers can purchase ready-made capabilities including phishing kits, malware distribution services, hosting infrastructure and stolen credentials. The result is an interconnected economy that functions much like a legitimate technology supply chain.

Investigators said Tycoon 2FA fit squarely within that ecosystem. The service was reportedly marketed and managed through encrypted messaging platforms such as Telegram and supported by partners responsible for payments, marketing and technical support.

Other illicit services handled mass email distribution or provided the servers used to host phishing infrastructure, allowing the entire operation to scale quickly. Trend Micro researchers who tracked the platform say its infrastructure included thousands of domains and supported a global network of operators.

The service generated enormous volumes of phishing traffic, delivering campaigns targeting enterprises, governments and individuals across multiple continents.

Analysis of victim data also illustrates the breadth of the threat. Intelligence gathered from exposed Tycoon 2FA panels revealed hundreds of thousands of captured credentials and authentication records.

Most of the compromised accounts were tied to corporate email domains rather than free consumer email providers, underscoring the platform’s focus on enterprise environments where access to a single account can open pathways into larger organizational systems.

For attackers, those compromised accounts often served as the starting point for broader intrusions.

Once inside an organization’s email or cloud collaboration environment, criminals could conduct business email compromise scams, steal sensitive data, or use the account to launch additional phishing campaigns targeting colleagues and partners. In some cases, access obtained through phishing operations later facilitated ransomware deployments.

The takedown effort also demonstrates how technology companies increasingly use civil litigation alongside traditional law enforcement methods to disrupt cybercrime infrastructure.

In this case, Microsoft’s Digital Crimes Unit filed a civil complaint in federal court to obtain legal authority to seize domains associated with the platform. The action was supported by threat intelligence gathered by private security companies and shared with international law enforcement agencies.

Europol played a central coordinating role through its Cyber Intelligence Extension Program, which is designed to move beyond intelligence sharing toward direct operational collaboration between governments and the private sector.

Authorities in countries including Latvia, Lithuania, Portugal, Poland, Spain and the United Kingdom participated in enforcement actions connected to the case.

Cybersecurity researchers emphasize that while such operations can significantly disrupt cybercrime infrastructure, they rarely eliminate it entirely. Platforms like Tycoon 2FA are part of a broader ecosystem in which new tools quickly emerge to replace those that are shut down.

Nonetheless, investigators say the dismantling of widely used services can have cascading effects by forcing attackers to rebuild infrastructure and raising the cost and complexity of their operations.

AI fraud pushing pace on need for advanced deepfake detection tools

A blog post for GetReal Security by Dr. Edward Amoroso, CEO of TAG Infosphere and research professor at NYU, looks at how and why information security executives can justify investment in deepfake detection.

“Executives fund technologies based on initiatives that demonstrate measurable risk reduction, economic value, and alignment with governance and compliance objectives,” Amoroso says. “Deepfake and continuous identity protection programs must therefore be framed not as experimental controls, but as ROI-driven investments.”

Amoroso says traditional cyber risk frameworks such as FAIR and ISO/IEC 27005 are applicable to deepfake and synthetic media threats, “even though the attack vector is new.”

“In both frameworks, risk is defined as a function of loss event frequency and loss magnitude. Deepfakes map cleanly into these structures once they are properly categorized as identity-based loss events rather than media anomalies.”

This reframing allows chief information security officers (CISOs) to “integrate deepfake risk into existing enterprise risk registers, rather than treating it as a parallel or experimental concern.”

Moreover, it makes economic sense. “On the investment side, deepfake protection costs are typically modest relative to these potential losses,” Amoroso says. “Detection platforms, continuous identity assurance tooling, and integration into collaboration environments represent a fraction of what organizations already spend on IAM, SOC operations, or fraud prevention. The economic argument becomes one of loss avoidance rather than productivity enhancement.”
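The FAIR-style argument sketched above (risk as loss event frequency times loss magnitude, weighed against control cost) can be made concrete with a back-of-envelope calculation. All figures below are hypothetical, chosen purely to illustrate the loss-avoidance framing, not drawn from the blog post.

```python
# FAIR-style sizing of deepfake risk: risk = loss event frequency x loss magnitude.
def annualized_loss_expectancy(events_per_year, loss_per_event):
    return events_per_year * loss_per_event

# Hypothetical inputs for illustration only.
ale_without_controls = annualized_loss_expectancy(4, 250_000)  # $1,000,000/yr exposure
detection_effectiveness = 0.8    # assumed fraction of loss events prevented
control_cost = 150_000           # assumed annual cost of detection tooling

ale_with_controls = ale_without_controls * (1 - detection_effectiveness)
loss_avoided = ale_without_controls - ale_with_controls
roi = (loss_avoided - control_cost) / control_cost
print(f"Loss avoided: ${loss_avoided:,.0f}, ROI: {roi:.0%}")
# prints "Loss avoided: $800,000, ROI: 433%"
```

Framed this way, the control is justified whenever the avoided loss exceeds its cost, which is the loss-avoidance argument a CISO can put into an existing risk register.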

There is an increasing need to maintain agile compliance postures and auditability. But, “to sustain executive support, CISOs must define metrics that move beyond simple detection counts. Useful measures include detection accuracy across voice and video channels, coverage across high-risk workflows, false positive rates, and mean time to response for identity anomalies.”
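The metrics named above can be computed from ordinary incident records. The sketch below is a hypothetical illustration (the record format and field names are invented, not from any product), showing detection accuracy, false positive rate, and mean time to response over a handful of toy events.

```python
# Hypothetical incident log: (was_deepfake, flagged_by_detector, minutes_to_response)
incidents = [
    (True,  True,  12),
    (True,  False, 95),   # missed detection
    (False, True,  30),   # false positive
    (False, False, 0),
    (True,  True,  18),
]

tp = sum(1 for real, flagged, _ in incidents if real and flagged)
fp = sum(1 for real, flagged, _ in incidents if not real and flagged)
fn = sum(1 for real, flagged, _ in incidents if real and not flagged)
tn = sum(1 for real, flagged, _ in incidents if not real and not flagged)

accuracy = (tp + tn) / len(incidents)
false_positive_rate = fp / (fp + tn)
# Mean time to response, over events the detector actually flagged.
mean_time_to_response = (
    sum(m for _, flagged, m in incidents if flagged)
    / sum(1 for _, flagged, _ in incidents if flagged)
)
print(accuracy, false_positive_rate, mean_time_to_response)  # 0.6 0.5 20.0
```

Tracked over time and broken out by channel (voice versus video), numbers like these are the kind of trend line that sustains executive support better than raw detection counts.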

Amoroso says the call to action is clear. “Enterprise security teams should begin to treat identity authenticity as a measurable control objective. When CISOs can quantify identity protection, they can justify it, and when they justify it, they can finally defend it at scale.”

DataVisor identifies AI readiness gap between concern and defense

DataVisor’s 2026 Fraud & AML Executive Report reveals a readiness gap between concern over AI-driven fraud and financial institutions’ ability to defend against it.

According to a release, the report shows that 74 percent of surveyed senior fraud and AML leaders across banks, credit unions, fintechs and digital payments platforms cite AI-driven fraud as a top threat. But 67 percent say their organizations lack the infrastructure to deploy effective AI defenses.

Fragmented data, legacy detection models, organizational silos and outdated governance compromise defenses.

“Financial institutions are facing attackers that operate at machine speed, but many defenses still operate at legacy operational speed,” says Yinglian Xie, CEO of DataVisor. “Closing the AI Readiness Gap requires modern foundations – unified data, adaptive machine learning, adoption of LLM-based AI agents, and operational models designed for continuous, real-time response. The organizations that modernize their infrastructure and workflows will be best positioned to stay ahead.”

Paper points to future in which AI cloning tools make voice biometrics obsolete 

The Bloomsbury Intelligence and Security Institute (BISI) has collaborated with CyberWomen Groups C.I.C. on the publication of “When Voice Is No Longer Proof: AI Vocal Cloning and the Limits of Voice-Based Authentication,” a paper by Hannah-Rose Shearman, a student researcher in cybersecurity and digital forensics.

CyberWomen Groups C.I.C. is “a student-led initiative dedicated to diversifying STEM by supporting and connecting university students interested in or studying cybersecurity, regardless of gender identity.”

Shearman’s paper looks at how the increasing accessibility and realism of synthetic voice technology undermines the effectiveness of voice biometrics, creating significant vulnerabilities in voice-based verification, and driving high-risk sectors such as finance and public services away from voice as a reliable biometric identifier.

“Academic and industry research has demonstrated that modern voice cloning tools can replicate vocal characteristics with high fidelity and bypass traditional authentication controls,” Shearman writes.

“Techniques that were previously resource-intensive and difficult to scale can now be performed with minimal technical expertise, owing to the increasing accessibility of commercial voice synthesis platforms such as ElevenLabs, thereby weakening long-standing assumptions about the uniqueness and reliability of voice as a standalone proof of identity.”

In short, voice used to be considered secure. But the tools for replicating voices, or generating fake ones, have gotten good enough that voices can no longer be trusted. And with social media, there is ample reference material for cloning.

That has major implications for the “political, operational, security, and economic risk landscape for institutions that rely on voice-based authentication.”

“Elevated fraud activity results in direct financial losses and increased costs for investigation, remediation, and customer support,” says Shearman. Eventually, these outweigh the benefits of using voice authentication in the first place.

“At a political and regulatory level, the growing divergence between institutional authentication practices and emerging AI capabilities exposes gaps in consumer protection and accountability frameworks.”

“While organisations continue to deploy voice biometrics as a trusted control, victims of AI-enabled impersonation encounter limited avenues for redress in jurisdictions where legal frameworks have not yet adapted to address synthetic voice misuse.”

The paper concludes with a forecast, which calls for higher volumes of impersonation attempts and related pressures over the next 12 months as AI voice cloning continues to develop – and, over the long term, continued “erosion of trust in voice as a primary identity control.”

‘It’s Physical, Not Intellectual’: The Ethics of Correcting Assumptions About Disability

Graduate Highly Commended paper in the 2026 National Uehiro Oxford Essay Prize in Practical Ethics. By James Forsdyke. ORCID: https://orcid.org/0009-0008-2446-4586 In this paper, I will discuss ethical considerations when it comes to correcting assumptions about people with physical disabilities. In particular, I will discuss the assumption some individuals make that physical disabilities are necessarily accompanied…
