NHS deal with AI firm Palantir called into question after officials’ concerns revealed

Exclusive: in a 2025 briefing to Wes Streeting, officials warned that the reputation of the tech firm behind US ICE operations would hinder the rollout of a data system in the UK

Health officials fear Palantir’s reputation will hinder the delivery of a “vital” £330m NHS contract, according to briefings seen by the Guardian, sparking fresh calls for the deal to be scrapped.

In 2023, ministers selected Palantir, a US surveillance technology company that also works for the Israeli military and Donald Trump’s ICE operation, to build an AI-enabled data platform to connect disparate health information across the NHS.


CBP embeds Clearview AI into tactical targeting operations

U.S. Customs and Border Protection (CBP) is formally integrating Clearview AI’s facial recognition platform into its intelligence and targeting operations, according to federal procurement records and a February 2026 Statement of Work (SoW).

The contract places Clearview licenses inside the agency’s National Targeting Center (NTC) and Border Patrol intelligence units and frames the technology as enhancing “tactical targeting” and “strategic counter-network analysis.”

The deployment follows two smaller 2025 purchase orders for Border Patrol sectors in Spokane and Yuma, suggesting Clearview’s use began in the field before expanding to headquarters-level intelligence functions.

Federal procurement records show that on June 23, 2025, CBP issued two purchase orders to Clearview AI for Border Patrol sector use. One, valued at $30,000, was issued for the Spokane Sector. The second, valued at $15,000, was issued for the Yuma Sector.

The February SoW authorizes the procurement of 15 Clearview AI licenses for one year at a total contract value of $225,000. The contract makes clear that Clearview’s facial recognition capability is intended to enhance CBP’s ability to identify, vet, and analyze individuals encountered in border and national security operations.
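A quick check on the figures reported in the procurement records (a simple arithmetic sketch, not part of the contract documents) shows that the SoW's totals imply a unit price of $15,000 per license per year, matching the June 2025 Yuma Sector order value. That match would be consistent with Yuma having bought a single one-year license and Spokane two, though the records quoted here do not state license counts for the 2025 orders:

```python
# Dollar values are taken from the procurement records cited above.
SOW_TOTAL = 225_000      # February 2026 SoW: 15 licenses for one year
SOW_LICENSES = 15

per_license = SOW_TOTAL / SOW_LICENSES
print(per_license)       # 15000.0 — implied annual cost per license

# The June 2025 purchase orders, at the same implied unit price:
YUMA_ORDER = 15_000      # would correspond to 1 license (inference)
SPOKANE_ORDER = 30_000   # would correspond to 2 licenses (inference)
print(YUMA_ORDER / per_license, SPOKANE_ORDER / per_license)  # 1.0 2.0
```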

What the document does not clarify is whether CBP has completed the required privacy determinations that normally accompany the operational deployment of a biometric search tool of this scale.

The SoW repeatedly refers to “tactical targeting,” a phrase used within CBP to describe intelligence and vetting workflows that support enforcement decisions. But the term does not appear as a discrete budget line in CBP’s annual appropriations.

Instead, funding for these functions is embedded within the broader “Targeting Operations” category under CBP’s Trade and Travel Operations account.

Congressional appropriations documents show hundreds of millions of dollars allocated annually for targeting operations, including $315 million for fiscal year 2026, up from approximately $277 million in fiscal year 2025.

That structure is significant. Congress appropriates funds for targeting operations in broad terms, while the specific technologies used within those operations are typically disclosed only in procurement records or internal budget justifications.

Clearview’s integration therefore appears not as a new appropriated initiative, but rather as a tool embedded within an already funded targeting mission.

Clearview AI advertises access to more than 60 billion publicly available images scraped from the Internet. Users upload a facial image, and the system returns potential matches drawn from its database.

The SoW explicitly defines biometric identifiers and photographic facial images as personally identifiable information (PII). It also incorporates Department of Homeland Security (DHS) safeguarding clauses governing the handling of PII and sensitive information.

Critically, the contract includes a provision stating that the requirement to complete a Privacy Threshold Analysis (PTA) is triggered by the creation, use, modification, or upgrade of a contractor IT system that will store, maintain, or use PII. The contractor is required to support completion of that PTA “as needed.”

That language places Clearview’s deployment squarely within DHS’s formal privacy compliance framework. Under DHS policy, a PTA is the initial assessment used to determine whether a new or modified system requires a full Privacy Impact Assessment (PIA) or a new or updated System of Records Notice (SORN) under the Privacy Act. The SoW does not state whether a PTA has been completed.

The Privacy Act of 1974 governs federal agency systems of records that are retrievable by personal identifier. If an agency maintains information about individuals in a system that can be retrieved by name or other unique identifier, it must publish a SORN describing the categories of individuals covered, the types of records maintained, the routine uses of those records, and the safeguards that are in place.

DHS policy requires a PTA whenever a new contractor-operated IT system stores, maintains, or uses PII on behalf of the agency. Following a PTA, the DHS Privacy Office determines whether a PIA or SORN is required.

The legal hinge is whether Clearview constitutes a contractor IT system that stores or maintains PII for DHS purposes, or whether it is treated as an external investigative tool providing leads that are later incorporated into internal DHS systems.

If CBP analysts upload images into Clearview’s platform and the contractor retains those images, search queries, or resulting data in a way that is linked to identifiable individuals, that would appear to trigger PTA review and potentially PIA or SORN obligations.

If, by contrast, Clearview is treated more like querying a commercial database without creating a DHS system of records, the agency may argue that existing privacy authorities covering systems such as the Automated Targeting System (ATS) or National Targeting Center (NTC) operations are sufficient.

The contract’s inclusion of the PTA clause suggests DHS anticipated that a privacy determination would need to be made. The SoW does not merely reference privacy compliance in passing; it embeds Clearview within DHS’s full security authorization framework.

The contractor must complete the security authorization process in accordance with DHS 4300A policy, undergo independent assessment of security and privacy controls, and obtain an authority to operate before processing sensitive information.

The document states that the contractor “shall not input, store, process, output, and/or transmit sensitive information within a contractor IT system without the Authority to Operate.”

This language reinforces that DHS is treating the Clearview deployment as a formal IT integration subject to enterprise security controls, not merely as an informal investigative subscription. If the system processes PII on behalf of DHS, the PTA would normally be completed as part of that authorization process.

The contract also requires Clearview to maintain and deliver monthly user analytics, including first and last names, official email addresses, agency affiliation, login dates, total number of logins, and total number of searches.

Those reporting requirements suggest a structured and monitored deployment, not an experimental pilot.

From a legal perspective, the reporting mechanism raises additional questions. If search logs are retained and associated with identifiable individuals or investigative cases, they could become part of DHS records systems.

Depending on how those logs are stored and retrieved, they could trigger Privacy Act requirements. The contract does not clarify whether search queries or results are retained by Clearview, by CBP, or both.

CBP maintains published PIAs for systems including ATS and NTC operations. Those documents describe CBP’s collection and analysis of data related to travelers, cargo, and enforcement subjects. Whether those existing PIAs encompass the use of a commercial facial recognition system built on scraped internet imagery is an open question.

The Statement of Work acknowledges that completion of a PTA may result in the need for a PIA or SORN modification. No Clearview-specific PIA or SORN update has been publicly identified.

The absence of a publicly posted PIA does not necessarily mean that privacy review has not occurred. PTAs themselves are not typically published. It does, however, mean there is no public documentation explaining how DHS has determined the deployment fits within its existing privacy authorities.

The timeline of procurement actions indicates incremental expansion. In 2025, CBP purchased Clearview licenses for two Border Patrol sectors. In 2026, the agency expanded use to headquarters intelligence and NTC.

While the dollar values are modest compared to CBP’s overall targeting budget, the integration of a facial recognition tool linked to a massive, scraped image repository represents a meaningful operational enhancement.

Clearview’s database extends beyond government-collected biometrics. It is built from publicly available images aggregated at scale, including images posted on social media and other websites. The legal and ethical controversies surrounding that collection model have been widely documented. By embedding Clearview within its targeting workflows, CBP is leveraging that privately assembled biometric repository for federal intelligence purposes.

The legal issue at stake is not simply whether Clearview may lawfully be used in criminal investigations. Law enforcement agencies across the country use facial recognition tools under various legal frameworks.

The issue here is procedural transparency and compliance within the federal privacy architecture. The contract anticipates a Privacy Threshold Analysis determination. It integrates the system into DHS security authorization processes and defines biometric images as PII.

What has not been publicly clarified is whether the DHS Privacy Office has completed the PTA; whether that PTA determined a new or updated PIA is required; and whether the use of Clearview is covered under existing SORNs for ATS or NTC systems.

Until DHS answers those questions, Clearview’s deployment inside CBP’s tactical targeting environment occupies a compliance gray zone.

The contract shows that the agency anticipated privacy review. The absence of publicly available documentation explaining the outcome of that review leaves observers to infer how the deployment was legally rationalized.

As CBP continues expanding analytic capabilities under its Targeting Operations umbrella, the underlying privacy architecture becomes as important as the technology itself.

Ring Super Bowl ad sparks backlash over AI camera surveillance

A Super Bowl commercial for Amazon’s Ring doorbell camera triggered swift backlash, with critics arguing that the company used an emotional story about a lost dog to promote a neighborhood-scale camera network while minimizing the broader privacy and civil liberties implications of searchable AI video.

The short commercial, which was intended to highlight lost-pet reunions, instead ignited a broader national conversation about how much machine vision society is willing to accept in exchange for safety and convenience.

The ad highlighted Ring’s Search Party feature, which is designed to help reunite owners with lost dogs by using AI to scan video from nearby participating cameras for potential matches.

In the commercial, a missing pet is located through coordinated alerts across a network of doorbell and outdoor cameras. But what Ring framed as a story of community assistance was viewed by many privacy advocates as a normalization of ambient surveillance infrastructure.

Civil liberties groups said the ad blurred the line between voluntary home security and de facto neighborhood monitoring. They warned that packaging AI-powered video search as a feel-good service during one of the most-watched television events of the year risks desensitizing viewers to how such systems function at scale.

The Electronic Frontier Foundation (EFF) issued one of the strongest responses. “Amazon Ring’s Super Bowl ad offered a vision of our streets that should leave every person unsettled about the company’s goals for disintegrating our privacy in public,” the organization said.

EFF added that the commercial, “disguised as a heartfelt effort to reunite the lost dogs of the country with their innocent owners, previewed future surveillance of our streets, a world where biometric identification could be unleashed from consumer devices to identify, track, and locate anything, human, pet, and otherwise.”

Sen. Ed Markey, a longtime critic of consumer surveillance technologies, also condemned the ad. He said Ring is “turning your neighborhood into a surveillance network,” and warned that tools marketed as community safety features can “create serious risks for privacy and civil liberties.”

In prior oversight letters examining Ring’s practices, Markey has argued that “Americans should not have to trade their privacy for security.”

Search Party allows users to create a lost dog post within the Ring app and activate AI scanning among participating nearby cameras. The system analyzes footage for potential matches based on characteristics such as size, breed, and markings.

If a possible match is detected, the camera owner receives a notification and can choose whether to share the clip. Ring has described the feature as voluntary and time-limited, emphasizing that no footage is automatically sent to the pet owner without user approval.

Company executives have pointed to early reunions as evidence that the feature works and say it simply streamlines what neighbors already do when they share lost pet posts through social media and community forums.

Supporters argue that participation is optional and that users retain control over their own footage.

Critics counter that the backlash is not about dogs but about infrastructure. They argue that a distributed network of privately owned cameras equipped with AI search capabilities normalizes constant monitoring.

Even if the feature is limited to pets, they say the technical foundation resembles systems used for object recognition and biometric identification. In that context, the difference between identifying a dog and identifying a person becomes a question of software configuration rather than hardware capability.

Another flashpoint involves default participation. During rollout, the feature was enabled by default on certain compatible devices, requiring users to opt out if they did not want their cameras to participate.

Privacy advocates argue that default settings shape real-world surveillance reach because many users do not regularly review device configurations. They contend that opt-out architecture can quietly scale a searchable network across entire neighborhoods.

The Super Bowl ad also aired amid broader debate over Ring’s expanding AI features, including tools that allow users to identify familiar faces and receive personalized alerts.

Last October, Markey sent a letter to Amazon CEO Andrew Jassy in which he warned that the company’s new Familiar Faces feature represents a dangerous step toward normalizing mass surveillance in American neighborhoods.

Markey described the rollout as “a dramatic expansion of surveillance technology” that poses “vast new privacy and civil liberties risks,” arguing that ordinary people should not have to fear being tracked or recorded when walking past a home equipped with a Ring camera.

While the Search Party feature is not marketed as a facial recognition system, critics argue that the same classification infrastructure underpins both types of capabilities and makes future expansion easier.

Concerns about law enforcement access resurfaced as well. Ring previously ended a program that facilitated direct police requests for footage through its app, but it continues to operate community request tools that allow public safety agencies to ask nearby users to voluntarily share video tied to investigations.

Those requests are integrated into platforms used by public safety technology vendors, including Flock Safety, which operates automated license plate reader networks.

Privacy advocates argue that once footage is shared with local agencies, it can be retained, combined with other datasets, or disseminated further in ways that are difficult for individual users to track.

Even if Ring does not provide direct backend access to federal agencies, they say downstream sharing arrangements create uncertainty about how footage ultimately circulates.

Some social media speculation during the controversy claimed that federal immigration authorities could directly access Ring cameras. Ring has denied giving Immigration and Customs Enforcement direct access to its video feeds or systems.

Civil liberties groups maintain that the broader concern is structural rather than agency-specific. Once a searchable neighborhood camera network exists, they argue, questions of governance and oversight extend beyond any single policy assurance.

At the center of the debate is a larger cultural tension. The Super Bowl ad presented AI-driven monitoring as reassuring and community-minded. Critics say that framing risks redefining constant observation as a civic virtue. Supporters say the feature represents practical innovation that helps solve everyday problems.

The episode illustrates how rapidly consumer camera technology is converging with capabilities once associated primarily with state surveillance. As AI makes video archives searchable and classifiable at scale, the boundary between smart home convenience and neighborhood intelligence infrastructure continues to narrow.

Here are the brands bringing ads to ChatGPT

OpenAI officially launched its advertising pilot in ChatGPT, leaving us with a better idea of the kinds of products we might see stuffed beneath our conversations with the AI chatbot. Several companies have announced plans to show ads inside ChatGPT – placements that will reportedly cost them a pretty penny – ranging from major retailers […]