UK police begin live facial recognition trials at railway stations

The UK police have kicked off a six-month pilot using live facial recognition (LFR) surveillance to monitor train stations.

The testing was initiated on Wednesday by the British Transport Police (BTP) at London Bridge railway station, with plans to extend coverage to other key transportation hubs in London.

The introduction of LFR at UK railway stations was first announced in November last year, coinciding with a mass stabbing attack on a London-bound train that left eleven people injured. The system aims to target crime hotspots where data has shown “high harm” offenders are likely to pass through.

“The initiative follows a significant amount of research and planning, and forms part of BTP’s commitment to using innovative technology to make the railways a hostile place for individuals wanted for serious criminal offences, helping us keep the public safe,” says Transport Police Chief Superintendent Chris Casey.

BTP relies on NEC’s NeoFace M40 facial recognition algorithm, which was evaluated by the National Physical Laboratory (NPL). The force has pledged to publish a full assessment of its operation once the pilot is completed.

Similar to other police LFR deployments, the system will rely on a watchlist of offenders and automatically delete images of people who are not matched. Project partners include Network Rail, the Department for Transport and the Rail Delivery Group.
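To make that matching model concrete, here is a minimal sketch of how a watchlist pipeline of this kind typically works, assuming a generic face-embedding model and a cosine-similarity threshold. The names and threshold value are illustrative only and do not describe BTP’s actual NeoFace configuration.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.6  # illustrative; operational thresholds are tuned per deployment

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def process_capture(face_embedding: np.ndarray,
                    watchlist: dict[str, np.ndarray]) -> str | None:
    """Compare a live capture against watchlist templates; discard non-matches."""
    best_id, best_score = None, SIMILARITY_THRESHOLD
    for subject_id, template in watchlist.items():
        score = cosine_similarity(face_embedding, template)
        if score > best_score:
            best_id, best_score = subject_id, score
    if best_id is None:
        # Mirrors the stated policy: captures of people who match no one
        # on the watchlist are deleted immediately rather than retained.
        del face_embedding
        return None
    return best_id  # a match is an alert for human review, not an identification
```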

London Assembly member calls for LFR ban

The expansion of LFR into railway stations comes as the UK government prepares for increased rollouts of the technology as part of its newly announced policing reforms. According to the blueprint, the Home Office will fund 40 new LFR vans to be deployed in town centers across England and Wales.

The plan, however, is sparking resistance.

On Wednesday, a member of the London Assembly for the Green Party, Zoë Garbett, called on the city’s Metropolitan Police to halt its facial recognition deployments, citing concerns over bias and a lack of primary legislation for police use of the technology.

The comments were delivered during a 10-week Home Office consultation on a legal framework for police use of the surveillance tech, which kicked off in December, according to Computer Weekly.

“It makes no sense for the home secretary to announce the expansion of live facial recognition at the same time as running a government consultation on the use of this technology,” says Garbett. “This expansion is especially concerning given that there is still no specific law authorising the use of this technology.”

In a report submitted to the London Assembly, Garbett argues that the Met Police has been plagued by a lack of transparency, including over the costs of deploying the technology.

“This rapid increase in deployment has come with no evaluation of its effectiveness or consideration of the cost of using LFR compared to other possible policing and non-policing methods,” the report notes.

Garbett also notes that the London police have been increasing the size of the watchlist, turning LFR from “precise policing” into something more akin to a “fishing trawler.” The Green Party member also argues that LFR is used disproportionately in areas that have more people of black, Asian or mixed ethnicities than the London average.

The report notes that its findings were informed by the work of advocacy organizations Big Brother Watch and Liberty.

Big Brother Watch is currently mounting the largest legal challenge yet to the Met Police’s use of facial recognition. The case, brought by black anti-knife crime campaigner Shaun Thompson and Big Brother Watch director Silkie Carlo, was heard by the London High Court in January.

Thompson was detained by police after the Met Police’s facial recognition system produced a false match.

CBP embeds Clearview AI into tactical targeting operations

U.S. Customs and Border Protection (CBP) is formally integrating Clearview AI’s facial recognition platform into its intelligence and targeting operations, according to federal procurement records and a February 2026 Statement of Work (SoW).

The contract places Clearview licenses inside the agency’s National Targeting Center (NTC) and Border Patrol intelligence units and frames the technology as enhancing “tactical targeting” and “strategic counter-network analysis.”

The deployment follows two smaller 2025 purchase orders for Border Patrol sectors in Spokane and Yuma, suggesting Clearview’s use began in the field before expanding to headquarters-level intelligence functions.

Federal procurement records show that on June 23, 2025, CBP issued two purchase orders to Clearview AI for Border Patrol sector use. One, valued at $30,000, was issued for the Spokane Sector. The second, valued at $15,000, was issued for the Yuma Sector.

The February SoW authorizes the procurement of 15 Clearview AI licenses for one year at a total contract value of $225,000. The contract makes clear that Clearview’s facial recognition capability is intended to enhance CBP’s ability to identify, vet, and analyze individuals encountered in border and national security operations.

What the document does not clarify is whether CBP has completed the required privacy determinations that normally accompany the operational deployment of a biometric search tool of this scale.

The SoW repeatedly refers to “tactical targeting,” a phrase used within CBP to describe intelligence and vetting workflows that support enforcement decisions. But the term does not appear as a discrete budget line in CBP’s annual appropriations.

Instead, funding for these functions is embedded within the broader “Targeting Operations” category under CBP’s Trade and Travel Operations account.

Congressional appropriations documents show hundreds of millions of dollars allocated annually for targeting operations, including $315 million for fiscal year 2026, up from approximately $277 million in fiscal year 2025.

That structure is significant. Congress appropriates funds for targeting operations in broad terms, while the specific technologies used within those operations are typically disclosed only in procurement records or internal budget justifications.

Clearview’s integration therefore appears not as a new appropriated initiative, but rather as a tool embedded within an already funded targeting mission.

Clearview AI advertises access to more than 60 billion publicly available images scraped from the Internet. Users upload a facial image, and the system returns potential matches drawn from its database.
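That upload-and-match workflow reduces to a simple authenticated request. The endpoint, field names, and response shape in the sketch below are hypothetical stand-ins; Clearview’s API is not publicly documented, and this sketch is not its interface.

```python
import requests  # standard third-party HTTP client

# Hypothetical endpoint and fields, for illustration only.
SEARCH_URL = "https://api.example-frs-vendor.com/v1/search"

def search_probe_image(image_path: str, api_key: str, max_results: int = 20) -> list[dict]:
    """Upload a probe face image and return candidate matches with scores."""
    with open(image_path, "rb") as f:
        response = requests.post(
            SEARCH_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"probe_image": f},
            data={"max_results": max_results},
            timeout=30,
        )
    response.raise_for_status()
    # Each candidate pairs a source URL from the scraped corpus with a
    # similarity score; analysts treat results as leads, not identifications.
    return response.json()["candidates"]
```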

The SoW explicitly defines biometric identifiers and photographic facial images as personally identifiable information (PII). It also incorporates Department of Homeland Security (DHS) safeguarding clauses governing the handling of PII and sensitive information.

Critically, the contract includes a provision stating that the requirement to complete a Privacy Threshold Analysis (PTA) is triggered by the creation, use, modification, or upgrade of a contractor IT system that will store, maintain, or use PII. The contractor is required to support completion of that PTA “as needed.”

That language places Clearview’s deployment squarely within DHS’s formal privacy compliance framework. Under DHS policy, a PTA is the initial assessment used to determine whether a new or modified system requires a full Privacy Impact Assessment (PIA) or a new or updated System of Records Notice (SORN) under the Privacy Act. The SoW does not state whether a PTA has been completed.

The Privacy Act of 1974 governs federal agency systems of records that are retrievable by personal identifier. If an agency maintains information about individuals in a system that can be retrieved by name or other unique identifier, it must publish a SORN describing the categories of individuals covered, the types of records maintained, the routine uses of those records, and the safeguards that are in place.

DHS policy requires a PTA whenever a new contractor-operated IT system stores, maintains, or uses PII on behalf of the agency. Following a PTA, the DHS Privacy Office determines whether a PIA or SORN is required.
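Read as a decision procedure, the compliance chain described above comes down to a few branches. The sketch below is a reading aid only, not DHS’s actual determination logic.

```python
def privacy_artifacts_required(contractor_it_system: bool,
                               stores_or_uses_pii: bool,
                               pta_finds_uncovered_pii_use: bool,
                               covered_by_existing_sorn: bool) -> list[str]:
    """Reading aid for the DHS chain: PTA first, then PIA/SORN as determined."""
    artifacts = []
    if contractor_it_system and stores_or_uses_pii:
        artifacts.append("PTA")  # the threshold analysis is the trigger point
        if pta_finds_uncovered_pii_use:
            artifacts.append("PIA")  # full impact assessment for the new use
            if not covered_by_existing_sorn:
                artifacts.append("SORN")  # Privacy Act notice for a new system of records
    return artifacts
```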

The legal hinge is whether Clearview constitutes a contractor IT system that stores or maintains PII for DHS purposes, or whether it is treated as an external investigative tool providing leads that are later incorporated into internal DHS systems.

If CBP analysts upload images into Clearview’s platform and the contractor retains those images, search queries, or resulting data in a way that is linked to identifiable individuals, that would appear to trigger PTA review and potentially PIA or SORN obligations.

If, by contrast, Clearview is treated more like querying a commercial database without creating a DHS system of records, the agency may argue that existing privacy authorities covering systems such as the Automated Targeting System (ATS) or National Targeting Center (NTC) operations are sufficient.

The contract’s inclusion of the PTA clause suggests DHS anticipated that a privacy determination would need to be made. The SoW does not merely reference privacy compliance in passing; it embeds Clearview within DHS’s full security authorization framework.

The contractor must complete the security authorization process in accordance with DHS 4300A policy, undergo independent assessment of security and privacy controls, and obtain an authority to operate before processing sensitive information.

The document states that the contractor “shall not input, store, process, output, and/or transmit sensitive information within a contractor IT system without the Authority to Operate.”

This language reinforces that DHS is treating the Clearview deployment as a formal IT integration subject to enterprise security controls, not merely as an informal investigative subscription. If the system processes PII on behalf of DHS, the PTA would normally be completed as part of that authorization process.

The contract also requires Clearview to maintain and deliver monthly user analytics, including first and last names, official email addresses, agency affiliation, login dates, total number of logins, and total number of searches.
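As a data shape, that deliverable amounts to a per-user record along the following lines. Field names paraphrase the contract’s list, and every value is invented.

```python
# Illustrative only; names paraphrase the contract, values are invented.
monthly_user_analytics = {
    "first_name": "Jane",
    "last_name": "Doe",
    "official_email": "jane.doe@example.gov",
    "agency_affiliation": "CBP National Targeting Center",
    "login_dates": ["2026-02-03", "2026-02-17"],
    "total_logins": 2,
    "total_searches": 14,
}
```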

Those reporting requirements suggest a structured and monitored deployment, not an experimental pilot.

From a legal perspective, the reporting mechanism raises additional questions. If search logs are retained and associated with identifiable individuals or investigative cases, they could become part of DHS records systems.

Depending on how those logs are stored and retrieved, they could imply Privacy Act requirements. The contract does not clarify whether search queries or results are retained by Clearview, by CBP, or both.

CBP maintains published PIAs for systems including ATS and NTC operations. Those documents describe CBP’s collection and analysis of data related to travelers, cargo, and enforcement subjects. Whether those existing PIAs encompass the use of a commercial facial recognition system built on scraped internet imagery is an open question.

The Statement of Work acknowledges that completion of a PTA may result in the need for a PIA or SORN modification. No Clearview-specific PIA or SORN update has been publicly identified.

The absence of a publicly posted PIA does not necessarily mean that privacy review has not occurred. PTAs themselves are not typically published. It does, however, mean there is no public documentation explaining how DHS has determined the deployment fits within its existing privacy authorities.

The timeline of procurement actions indicates incremental expansion. In 2025, CBP purchased Clearview licenses for two Border Patrol sectors. In 2026, the agency expanded use to headquarters intelligence and NTC.

While the dollar values are modest compared to CBP’s overall targeting budget, the integration of a facial recognition tool linked to a massive, scraped image repository represents a meaningful operational enhancement.

Clearview’s database extends beyond government-collected biometrics. It is built from publicly available images aggregated at scale, including images posted on social media and other websites. The legal and ethical controversies surrounding that collection model have been widely documented. By embedding Clearview within its targeting workflows, CBP is leveraging that privately assembled biometric repository for federal intelligence purposes.

The legal issue at stake is not simply whether Clearview may lawfully be used in criminal investigations. Law enforcement agencies across the country use facial recognition tools under various legal frameworks.

The issue here is procedural transparency and compliance within the federal privacy architecture. The contract anticipates a Privacy Threshold Analysis determination. It integrates the system into DHS security authorization processes and defines biometric images as PII.

What has not been publicly clarified is whether the DHS Privacy Office has completed the PTA; whether that PTA determined a new or updated PIA is required; and whether the use of Clearview is covered under existing SORNs for ATS or NTC systems.

Until DHS answers those questions, Clearview’s deployment inside CBP’s tactical targeting environment occupies a compliance gray zone.

The contract shows that the agency anticipated privacy review. The absence of publicly available documentation explaining the outcome of that review leaves observers to infer how the deployment was legally rationalized.

As CBP continues expanding analytic capabilities under its Targeting Operations umbrella, the underlying privacy architecture becomes as important as the technology itself.

Indicio enables AI agents to verify Digital Travel Credentials

AI agents are poised to take the hassle out of vacation planning. Powered by large language models (LLMs), the software assistants promise to plan trips from simple text requests, book transportation and hotels, and coordinate itineraries.

To make that happen, AI travel agents will need a safe, streamlined way to authenticate a traveler’s identity. Decentralized identity company Indicio this week unveiled a software product that allows travelers to prove their identity to AI agents, chatbots and copilots using Digital Travel Credentials (DTCs) stored in a digital wallet.

“For the first time, an AI agent can use a digital passport to identify a traveler and automatically trust their data,” says Heather Dahl, CEO of Indicio.

The capability is provided by the firm’s ProvenAI for Digital Travel platform, which equips AI agents with structured, verified information about the traveler through the DTC. The credential is based on authenticated biometrics and validated documents and follows the International Civil Aviation Organization’s (ICAO) specifications.

Indicio has been testing DTCs and cross-border use cases for the European Digital Identity (EUDI) Wallet alongside its investor SITA.

Indicio launched the ProvenAI platform last year. Aside from travel, the U.S.-based company believes that the product will find use in finance and education, whether it’s processing loan applications or assisting with student services.

The software relies on Verifiable Credentials (VCs) to provide the trust agentic AI needs to access and use personal data for complex problem-solving. One of its most important benefits is ensuring users are engaging with the correct application rather than a malicious bot.

Another is leveraging Verifiable Credentials to give users explicit, consent-based control over how their data is shared. In the case of ProvenAI for Digital Travel, all data sharing happens by consent across secure traveler-to-agent channels.

“It allows travelers to consent to share their personal data, it seamlessly feeds AI agents and chatbots with the verified data needed for effective task performance, and it turns chat into a truly frictionless operational resource that’s also maximally compliant with data protection law,” adds Dahl.

Once the traveler presents the credential, the AI travel agent cryptographically verifies that the information it contains is authentic. After the verification is complete, the agent can help book rental cars and flights, find the fastest route to a destination, book a hotel or an excursion, and change reservations, the firm explains in a blog post.
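Under the hood, that verification step reduces to validating the issuer’s digital signature over the credential’s claims. Below is a minimal sketch assuming an Ed25519-signed JSON payload and Python’s cryptography library; the field names are illustrative, not the ICAO DTC wire format or Indicio’s implementation.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def verify_credential(claims: dict, signature: bytes, issuer_public_key) -> bool:
    """Check the issuer's signature over a canonicalized claims payload."""
    payload = json.dumps(claims, sort_keys=True).encode("utf-8")
    try:
        issuer_public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

# Demo: the issuing authority signs the traveler's claims...
issuer_key = Ed25519PrivateKey.generate()
claims = {"family_name": "Doe", "document_number": "X1234567", "nationality": "USA"}
signature = issuer_key.sign(json.dumps(claims, sort_keys=True).encode("utf-8"))

# ...and the AI agent verifies them against the issuer's public key.
assert verify_credential(claims, signature, issuer_key.public_key())
```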

“Travel is the first step,” says Dahl. “The bigger picture is that ProvenAI for Digital Travel shows how decentralized identity enables you to implement AI for customer interaction. By combining authenticated documents, biometrics, cryptography, and decentralized governance in Verifiable Credentials, human and non-human identity are able to interact with each other and scale in an effective, efficient, and above all, trustable way.”