Underground Facial Recognition Tool Unmasks Camgirls
The site, camgirlfinder, is explicitly built as a tool to let people find a model’s presence on other streaming platforms. The creator says “If that is a problem for you then the sad reality is this job is not for you.”
UK police begin live facial recognition trials at railway stations

The UK police have kicked off a six-month pilot using live facial recognition (LFR) surveillance to monitor train stations.
The testing was initiated on Wednesday by the British Transport Police (BTP) at London Bridge railway station, with future plans to cover key transportation hubs in London.
The introduction of LFR at UK railway stations was first announced in November last year, coinciding with a mass stabbing attack on a train to London that left eleven people injured. The system aims to target crime hotspots where data has shown “high harm” offenders are likely to pass through.
“The initiative follows a significant amount of research and planning, and forms part of BTP’s commitment to using innovative technology to make the railways a hostile place for individuals wanted for serious criminal offences, helping us keep the public safe,” says Transport Police Chief Superintendent Chris Casey.
The Transport Police relies on NEC’s NeoFace M40 facial recognition algorithm, which was evaluated by the National Physical Laboratory (NPL). The force has pledged to publish a full assessment of its operation once the pilot is completed.
Similar to other police LFR deployments, the system will rely on a watchlist of offenders and automatically delete images of people who are not matched. The project includes Network Rail, the Department for Transport and the Rail Delivery Group.
London Assembly member calls for LFR ban
The expansion of LFR into railway stations comes as the UK government prepares for increased rollouts of the technology as part of its newly announced policing reforms. According to the blueprint, the Home Office will fund 40 new LFR vans to be deployed in town centers across England and Wales.
The plan, however, is sparking resistance.
On Wednesday, a member of the London Assembly for the Green Party, Zoë Garbett, called on the city’s Metropolitan Police to halt its facial recognition deployments, citing concerns over bias and a lack of primary legislation for police use of the technology.
The comments were delivered during a 10-week consultation on a legal framework for police use of the surveillance tech kicked off by the Home Office in December, according to Computer Weekly.
“It makes no sense for the home secretary to announce the expansion of live facial recognition at the same time as running a government consultation on the use of this technology,” says Garbett. “This expansion is especially concerning given that there is still no specific law authorising the use of this technology.”
In a report submitted to the London Assembly, Garbett argues that the Met Police has been plagued by a lack of transparency, including over the costs of deploying the technology.
“This rapid increase in deployment has come with no evaluation of its effectiveness or consideration of the cost of using LFR compared to other possible policing and non-policing methods,” the report notes.
Garbett also notes that the London police have been increasing the size of the watchlist, turning LFR from “precise policing” to something more akin to a “fishing trawler.” The Green Party member also argues that LFR is used disproportionately in areas with larger black, Asian or mixed-ethnicity populations than the London average.
The report notes that its findings were informed by the work of advocacy organizations Big Brother Watch and Liberty.
Big Brother Watch is currently mounting the largest legal challenge yet to the Met Police’s use of facial recognition. The case, brought by black anti-knife crime campaigner Shaun Thompson and Big Brother Watch director Silkie Carlo, was heard by the London High Court in January.
Thompson was detained by police after the Met Police’s facial recognition system produced a false match.
Discord to start requiring face scan or ID to access adult content
The online chat service, which has 200 million monthly users, will blur adult content by default.
ROC facial recognition integrated by US retail loss prevention platform

California-based retail loss prevention specialist Gatekeeper Systems has integrated ROC’s facial recognition with its FaceFirst platform through a new strategic partnership.
FaceFirst provides real-time alerts based on matches to a blacklist of offenders “enrolled for probable cause.” The software is billed as a way to curb violence against retail staff and customers while reducing both theft and false alerts.
The company says data from clients using FaceFirst shows that more than 90 percent of enrolled individuals are repeat offenders.
The addition of ROC’s biometric algorithms gives the platform technology trusted by the U.S. Department of Defense, law enforcement agencies and Fortune 500 companies, according to the announcement. The release also highlights ROC’s consistent strong performances in NIST’s FRTE assessments of one-to-one and one-to-many facial recognition.
“Facial recognition in retail must be fast, accurate, and accountable,” says Robert Harling, CEO of Gatekeeper Systems. “By embedding ROC’s NIST-verified algorithm directly into FaceFirst, we’re giving retailers a system that performs in real time and stands up to public, operational, and legal scrutiny. It’s AI you can trust — and accuracy you can prove.”
Retail biometrics for shoplifting prevention are a growing business area in the U.S., but still a controversial one.
ROC is currently gearing up for a possible IPO on the Nasdaq.
Milwaukee police sink efforts to contract facial recognition with unsanctioned use

A meeting on whether and how Milwaukee police should use facial recognition in criminal investigations took an unexpected turn Thursday night, with revelations that the technology is already in use, complete with transparency and governance failures. By Friday morning, the Milwaukee Police Department had placed a moratorium on the use of facial recognition by its staff.
During a Fire and Police Commission meeting convened to discuss procurement options and whether the force should be allowed to use the technology, MPD revealed that it has already been doing so in investigations, as reported by NPR affiliate WUWM. Further, it did so without a standard operating procedure, and was still doing so when MPD Chief of Staff Heather Hough asked late in the meeting for acknowledgement that “we recognized this was an issue and we disclosed it.”
Those attending the meeting had expected a draft policy to be presented for consideration ahead of any implementation of face biometrics for local law enforcement investigations.
MPD officials said they used the technology through neighboring police departments. They said it has only been used in criminal investigations, but could not support the claim with any evidence.
The disclosure prompted Commission Vice Chair Bree Spencer to suggest a moratorium was needed.
An internal MPD memo reported by ABC affiliate 12 News this morning halted the practice.
“Despite our belief that this is useful technology to assist in generating leads for apprehending violent criminals, we recognize that the public trust is far more valuable,” says the memo obtained by 12 News.
The twist follows a widely misunderstood effort by MPD to contract facial recognition capabilities from Biometrica. MPD is now abandoning those efforts, according to the memo.
ICE’s facial recognition app is new, but the NEC tech behind it is well known

The revelation that the Mobile Fortify app used by ICE to identify suspected immigration procedure violators, and increasingly protestors, uses face biometrics capabilities supplied by NEC has sparked renewed interest in how well the technology works, and where else it is used.
NEC introduced its NeoFace system well over a decade ago, and it gained notoriety (in some circles at least) when it was used to identify Boston Marathon terrorist Dzhokhar Tsarnaev in 2013.
NEC’s NeoFace has been used by police in the UK since 2023 for operator-initiated facial recognition (OIFR), the same kind of system as Mobile Fortify. UK police also use the software for mobile public deployments of live facial recognition.
The company’s facial recognition algorithms have consistently placed among the most accurate in NIST’s Face Recognition Technology Evaluation (FRTE) for identification (1:N).
In 2023 testing by the UK’s National Physical Laboratory (NPL) conducted for London’s Metropolitan Police, NeoFace had a false match rate of one in 6,000 and “no statistically significant race and gender bias” at specified thresholds with a reference database of 10,000 records. The finding about lack of bias has been disputed, however, on grounds that at lower confidence thresholds the technology shows uneven error rates between groups of people based on how dark their skin is.
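What a one-in-6,000 false match rate means in practice depends on how many people walk past the cameras. A back-of-the-envelope sketch: the rate comes from the NPL figures above, while the daily footfall is an illustrative assumption, not a reported number.

```python
# Rough estimate of daily false alerts from a live facial recognition
# deployment. NPL reports the false match rate per person passing the
# camera at the tested threshold and a 10,000-record watchlist, so the
# expected count of innocent people falsely flagged is footfall * FMR.

def expected_false_alerts(footfall: int, fmr: float) -> float:
    """Expected number of passers-by falsely matched to the watchlist."""
    return footfall * fmr

FMR = 1 / 6000        # false match rate from the NPL test
FOOTFALL = 60_000     # assumed daily passers-by (illustrative only)

print(expected_false_alerts(FOOTFALL, FMR))  # 10.0 false alerts per day
```

The point of the sketch is that even a low per-person error rate scales with crowd size, which is why watchlist growth and deployment locations feature so heavily in the debate.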
The UK deployments remain controversial in their home country, The Conversation reports, and public understanding of the technology’s use is low. Only 55 percent of those surveyed in the UK trust police to use the technology responsibly, and only 10 percent say they know much about how and when it is used.
The controversy sparked in America by Mobile Fortify is partly related to the way it is being used, and partly to the processes behind that use. The facial recognition algorithm is not licensed directly by ICE, according to the Inventory, but rather through CBP’s Traveler Verification Service (TVS).
In the U.S., NeoFace has previously been piloted at Dulles Airport in Washington, used at sporting events and in fast-food restaurants.
NeoFace is also used by police in Canada, for airport checks of flydubai crew, and for biometric passport checks in New Zealand and Kenya, among dozens of other implementations.