‘Big Tech’ fears and confusion dominate dialogue over UK digital ID scheme

The UK government’s digital ID consultation has begun and its detailed plan for the process has finally been revealed, but what is clearest so far is the breadth of confusion around what is being proposed.

Chief Secretary to the Prime Minister Darren Jones announced the consultation plan on Tuesday, just as the Cabinet Office launched the process. The consultation includes a public survey and stakeholder feedback.

The government plans to ask 100 randomly selected Britons for their input as part of a “People’s Panel for Digital ID.”

“This consultation is going above and beyond to bring people in to all the big debates, and the knotty trade-offs too,” Jones said.

The government is likely to find that people don’t understand what the plan is, or how it can help individuals, according to a December report from Hippo Digital offering “Qualitative insights concerning the UK Digital Identity scheme.”

In-depth interviews revealed that participants admitted low understanding of the scheme, but believed it has a much wider scope than what has been communicated so far. The misconceptions were both rife and familiar, from the total replacement of physical IDs to the creation of a massive new centralized database. One participant admitted their opinion is largely informed by the failed attempt to introduce a Britcard, saying “as far as what’s changed compared to last time I don’t know.”

The original stated purpose of the digital ID, to disincentivize illegal immigration, “was widely dismissed as ‘political pandering’” by the interviewees.

Almost all interviewees regularly use private-sector digital wallets, but would still prefer a government-issued wallet: their mistrust of the government is outweighed by their mistrust of “big tech.”

The big takeaway may therefore be for private-sector providers as much as for government. “Big tech” is a term that generally applies to companies like Alphabet, Amazon, Apple, Meta and Microsoft, not to those on the list of providers certified under the DIATF (now “DVS”), so UK digital wallet providers have an opening to differentiate themselves from it.

Clear on what?

The Labour government’s plan includes integrating health and education data and services with the digital ID, but the ministers responsible for those departments, Bridget Phillipson and Wes Streeting, respectively, have balked at their departments participating. Identity verification for special educational needs funding and NHS services would have to be handled separately from the national digital ID.

For now, the Health Department is continuing to focus on the NHS app, The Times reports.

The Driver and Vehicle Standards Agency (DVSA) business plan for the year, in contrast, includes exploring “digital pass certificates” and integrating with the GOV.UK Wallet to make license issuance more efficient, in collaboration with the Driver and Vehicle Licensing Agency (DVLA).

With the plan for the consultation published the same day as the consultation itself, the government should be prepared to hear from stakeholders, expert or otherwise, that they would rather see plans developed, communicated and approved before they are implemented.

The consultation closes May 5.

FTC’s AI Policy Statement No Substitute for Rulemaking

Today, TechFreedom released an open letter calling on the FTC to take public comments before finalizing its policy statement on deceptive AI and federal preemption. We remind the Commission that President Trump previously ordered agencies to take public input when […]

NIST concept paper explores identity and authorization controls for AI agents

A draft concept paper released by the National Institute of Standards and Technology (NIST) asks industry and government stakeholders how organizations should identify, authenticate and control software and artificial intelligence agents that can access enterprise systems and take actions with limited human supervision.

Published by NIST’s National Cybersecurity Center of Excellence (NCCoE), the paper, Accelerating the Adoption of Software and Artificial Intelligence Agent Identity and Authorization, outlines a proposed project aimed at adapting modern identity and access management frameworks to a new class of digital actors that increasingly operate across enterprise networks.

The paper was written by Ryan Galluzzo, who leads NIST’s digital identity program, Bill Fisher, Harold Booth and Joshua Roberts.

Released as an initial public draft, the paper reflects growing recognition that agentic AI systems capable of gathering information, interacting with tools and executing tasks on behalf of users may require identity governance like that used for human users and traditional software workloads.

The effort responds to concern that the rapid emergence of “agentic” AI systems, software capable of making decisions and executing tasks with limited human supervision, is outpacing the security and governance models that traditionally control automated processes.

For more than a decade, organizations have relied on code-based automation to manage cloud workloads, APIs and enterprise workflows. But AI agents represent a different category of software actor.

Unlike conventional automation scripts, these systems can dynamically gather information from multiple sources, reason over that data and take actions that may affect multiple downstream systems. As their capabilities expand, so does the potential impact of mistakes, misuse or compromise.

The NIST concept paper argues that existing identity frameworks must evolve to address this shift. Systems that can autonomously access tools, query databases and execute operations on behalf of users require clear mechanisms for identification, authentication and authorization.

Without those controls, AI agents could effectively become privileged actors operating across enterprise networks with unclear accountability.

The NCCoE proposal centers on a straightforward but consequential premise: AI agents should be treated as identifiable entities within enterprise identity systems rather than as anonymous automation running under shared credentials.

A future NCCoE demonstration project would explore how existing identity standards and best practices can be applied to these systems so organizations can securely deploy agentic AI technologies while managing risk.

Among the questions the agency is asking stakeholders to address are how AI agents should be identified within enterprise architectures, what metadata should define their identities and whether those identities should be persistent or dynamically tied to specific tasks.
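The persistence question can be made concrete with a small sketch. The record below is purely illustrative; the field names, and the idea of minting a per-task identity via a `task_id`, are assumptions for the sake of the example, not structures drawn from the NIST paper.

```python
from dataclasses import dataclass
from typing import Optional
import uuid

@dataclass(frozen=True)
class AgentIdentity:
    """Hypothetical enterprise identity record for an AI agent."""
    agent_id: str             # stable identifier, here a UUID
    owner: str                # human or team accountable for the agent
    model: str                # underlying model or software version
    persistent: bool          # True: long-lived identity; False: per-task
    task_id: Optional[str]    # set only for task-scoped identities

def mint_identity(owner: str, model: str, task_id: Optional[str] = None) -> AgentIdentity:
    """Create a persistent identity, or a task-scoped one if task_id is given."""
    return AgentIdentity(
        agent_id=str(uuid.uuid4()),
        owner=owner,
        model=model,
        persistent=task_id is None,
        task_id=task_id,
    )
```

The trade-off the paper surfaces is visible even here: a persistent identity accumulates history and reputation, while a task-scoped one disappears with the task, limiting blast radius but complicating long-term accountability.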

The concept paper also raises technical issues related to authentication, including how credentials for AI agents should be issued, updated and revoked. As with human users, compromised credentials or poorly managed authentication mechanisms could allow malicious actors to hijack agent capabilities.

Authorization presents another set of challenges. AI agents may need access to multiple data sources and enterprise tools to complete tasks, yet their behavior may evolve as they interact with systems and gather new information.

That dynamic nature complicates the principle of least privilege, a cornerstone of cybersecurity that limits access rights to only what is necessary for a specific task.

The paper asks whether authorization policies for AI agents should be able to adapt in real time as an agent’s operational context changes.
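One way to read that question is as a choice between long-lived entitlements and grants scoped to a single task that expire with it. A minimal sketch of the latter follows; the scope names are invented, and a real policy engine would evaluate far richer request attributes.

```python
import time

class TaskScopedGrant:
    """Illustrative task-scoped grant: privileges are tied to one task and
    lapse on a short timer, so they track the agent's current context
    rather than accumulating over its lifetime."""

    def __init__(self, scopes: set, ttl_seconds: float):
        self.scopes = set(scopes)
        self.expires_at = time.monotonic() + ttl_seconds

    def allows(self, scope: str) -> bool:
        # Deny after expiry, and for anything outside the task's scope set.
        return time.monotonic() < self.expires_at and scope in self.scopes
```

Re-evaluating `allows` on every access is what lets the decision adapt as context changes: when the task ends or the timer lapses, the same call that succeeded a moment ago starts failing.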

The paper also highlights several emerging security risks tied to the deployment of AI agents. One of them is prompt injection, a technique in which adversaries manipulate the inputs provided to an AI system to influence its behavior.

If an AI agent can access enterprise resources or trigger operational actions, a successful prompt injection attack could cause the system to retrieve sensitive data or execute unintended commands.
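A common mitigation pattern, sketched below with invented action names, is to enforce an allowlist outside the model: however an injected prompt manipulates the agent's text output, only pre-approved actions can reach enterprise systems.

```python
# Hypothetical gatekeeper between model output and enterprise tools.
# The check runs in ordinary code the model cannot rewrite, so a prompt
# injection that tricks the model into requesting a destructive action
# still fails at this boundary.
ALLOWED_ACTIONS = {"search_tickets", "summarize_ticket"}

def execute_agent_action(action: str, handlers: dict):
    """Run a model-requested action only if it is on the agent's allowlist."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"agent is not authorized for {action!r}")
    return handlers[action]()
```

The design point is that the model's output is treated as an untrusted request, not a command; authorization happens in deterministic code with the agent's identity, not in the prompt.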

Another concern is accountability. Autonomous agents may carry out actions on behalf of human users or organizations, raising questions about how responsibility should be assigned if those actions cause harm.

To address this, the NCCoE project would examine mechanisms for logging and auditing agent activity. Such systems could ensure that actions taken by an AI agent can be traced back to the nonhuman identity that performed them and ultimately to the human authority responsible for delegating those permissions.
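Such a trace might look like the record below. The fields are an assumption about what a minimal audit entry would need, not a format specified in the paper; the key property is that every action carries both the nonhuman identity that performed it and the human who delegated the permission.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AgentAuditEvent:
    """Illustrative audit record for one agent action."""
    agent_id: str       # nonhuman identity that performed the action
    delegated_by: str   # human authority responsible for the delegation
    action: str
    resource: str
    timestamp: str      # e.g. an ISO 8601 string

def to_log_line(event: AgentAuditEvent) -> str:
    """Serialize one event as a JSON log line for an append-only audit store."""
    return json.dumps(asdict(event), sort_keys=True)
```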

Rather than developing entirely new frameworks, the NIST initiative focuses on adapting existing identity and access management standards to the emerging agent ecosystem.

The concept paper identifies several technologies that could play a role in managing agent identities and permissions. These include OAuth and OpenID Connect, widely used authorization and authentication protocols, along with identity lifecycle management tools such as the System for Cross-domain Identity Management (SCIM).
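For agents, the natural OAuth fit is the client credentials grant (RFC 6749, section 4.4), in which software authenticates as itself with no user present. The sketch below only constructs the token-request body an agent would POST to an authorization server; the client ID and scope strings are hypothetical.

```python
def client_credentials_body(client_id: str, client_secret: str, scopes: list) -> dict:
    """Build the form body for an OAuth 2.0 client credentials token request
    (illustrative values; a real deployment would prefer stronger client
    authentication than a shared secret)."""
    return {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": " ".join(scopes),  # OAuth scope values are space-delimited
    }
```

The token the server returns would then be scoped to exactly the resources the agent's task requires, which is what ties this protocol back to the least-privilege concerns above.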

The proposal also references frameworks such as the Secure Production Identity Framework for Everyone (SPIFFE) and its production implementation, the SPIFFE Runtime Environment (SPIRE), which provide cryptographic identities for software workloads operating in distributed systems.

Policy enforcement could draw on attribute-based access control systems such as Next Generation Access Control (NGAC), which enables fine-grained authorization decisions across complex environments.

These tools would be implemented alongside existing NIST cybersecurity guidance, including the agency’s Zero Trust Architecture model and digital identity guidelines.

NIST’s potential project would focus primarily on enterprise deployments where organizations maintain visibility and control over the agents operating in their systems.

Several use cases highlighted in the concept paper illustrate how AI agents could be integrated into everyday operations.

One involves productivity-focused agents that assist employees with tasks such as managing schedules, drafting policy documents or generating recommendations.

Another use case focuses on security-oriented agents that analyze cybersecurity data and recommend or execute defensive actions.

A third potential use case involves software development and deployment pipelines, where AI agents may automate elements of coding, testing and release management.

In each of these scenarios, agents require access to sensitive data and enterprise systems, making robust identity and authorization controls essential.

If the project moves forward, the NCCoE intends to develop a practical implementation guide demonstrating how organizations can deploy AI agents while maintaining secure identity governance.

Such guidance would be built using commercially available technologies in NCCoE laboratories and would document real-world implementation approaches along with lessons learned.

The goal is to help organizations adopt agentic AI capabilities without sacrificing security or accountability.

The initiative forms part of a broader push by NIST to address the governance challenges posed by autonomous AI systems. Through research, standards development and collaborative projects with industry, the agency is seeking to establish the technical foundations necessary for what many expect to be the next major phase of AI deployment.

Comments on the paper are due on April 2.

UK startup raises $15M to build Europe’s sovereign alternative to biometric surveillance

A British startup has raised millions for surveillance technology it pitches as an alternative to biometrics.

Augur, a resilience technology startup, has raised $15 million to modernize what it says are the outdated cameras and sensors guarding Europe’s critical infrastructure. The London-based firm launched in 2024 and has 30 staff, according to reporting by The Next Web.

Augur says its AI platform can turn fragmented sensor networks into a real‑time intelligence layer for national security and public safety. The seed round was led by Plural, with participation from First Kind, Flix, Tiny VC, and SNR, and will support expansion across Europe.

Its system plugs into existing cameras and sensors, using machine‑learning models to detect unusual behaviour, link activity across sites, and reconstruct incidents in seconds. The company stresses that it does not use facial recognition.

Instead, it tracks anonymized movement and behavioural patterns, a privacy‑by‑design approach it argues sets it apart from generic video analytics and “smart city” platforms that rely on biometric profiling or basic detection.

Augur positions itself as a sovereign, mission‑focused alternative to global surveillance vendors, with compliance baked in for GDPR and the forthcoming EU AI Act. Augur was founded by Harry Mead (previously behind the safety app Path) alongside former Palantir employees Imran Lone and Stefan Kopieczek.

With the new funding Augur plans to grow its London team, accelerate research and development on AI models for high-risk environments, and integrate with more sensors and systems found in European infrastructure. It aims to scale early pilots into national‑level deployments with transport hubs, energy operators and major venues.

Alongside that, the startup is working with policymakers on evolving AI, privacy, and security rules. In the long run, Augur wants to become the default resilience layer for operators responsible for protecting large populations across the UK, Europe, and “allied countries.”

In a blog post highlighting the dangers of sabotage, terrorism and hybrid war tactics, Plural set out the reasons for its investment in Augur. The London-based investment firm is focused on European technology that can drive resilience and sovereignty for the continent.

“The complexity of the threat picture facing our domestic security apparatus has changed dramatically, and our current capabilities to protect ourselves are both highly outdated and highly dependent on non-sovereign technology,” Plural’s post says.

“Augur is building the modern operating system for security and public safety. Its AI analytics platform unifies the patchwork of existing surveillance systems and hardware to enable faster threat prediction, detection and response.”