NIST nominee pressed on AI standards, facial recognition oversight

The Senate Committee on Commerce, Science and Transportation on Thursday considered the nomination of Arvind Raman to serve as Under Secretary of Commerce for Standards and Technology and Director of the National Institute of Standards and Technology (NIST), a position that places him at the center of the federal government’s efforts to shape technical standards for biometrics, AI, and other emerging technologies.

While Raman’s prepared testimony stayed largely at the level of innovation policy and industrial competitiveness, the hearing revealed that senators are paying close attention to how NIST’s work intersects with one of the most contentious areas in modern technology policy.

That includes the federal government’s role in evaluating facial recognition systems and setting standards for how AI technologies are tested and deployed.

Raman, currently the John A. Edwardson Dean of Engineering at Purdue University, presented himself as a technologist and long-time collaborator with NIST who understands the agency’s role in measurement science and industrial standards.

In his opening statement he emphasized NIST’s historical contribution to American innovation, highlighting its work in areas ranging from quantum science to cybersecurity guidance and AI metrology.

“If confirmed, I am excited to help NIST deliver on the President’s AI action plan and help maximize innovation along the entire American AI tech stack, in semiconductors, in our quantum industrial base, in biotechnology, and in advanced manufacturing,” Raman said in his written remarks.

Raman’s prepared statement, however, made no direct mention of facial recognition technology, biometric testing programs, or the controversial debate over algorithmic bias in surveillance systems.

Instead, it focused on the broader role of standards development and the importance of helping American technologies scale globally.

NIST, founded in 1901, has long served as the federal government’s central laboratory for measurement science and technical standards. The agency does not regulate technology directly, but its testing programs and technical benchmarks often shape procurement decisions across government and industry.

Nowhere is that influence more visible than in the field of facial recognition.

For more than a decade NIST has operated the world’s most influential benchmarking programs for facial recognition systems. These programs measure the accuracy and performance of algorithms used by government agencies, law enforcement bodies and private companies.

Today those efforts are organized primarily under two initiatives. The Face Recognition Technology Evaluation (FRTE) measures how accurately systems can match one image of a person’s face to another.

The Face Analysis Technology Evaluation (FATE) examines a broader set of automated facial analysis capabilities, including demographic estimation and other analytic functions applied to images.

These evaluations have become a global reference point for biometric developers and government buyers. Vendors routinely submit their algorithms to NIST testing to demonstrate accuracy claims, while agencies across the federal government rely on the results when evaluating systems for investigative or security uses.

Although Raman did not raise those programs in his opening statement, they surfaced during questioning from members of the committee. Senator Ed Markey of Massachusetts pressed Raman about NIST’s ongoing work evaluating facial recognition systems and asked whether he would support the continuation of those testing programs.

“Mr. Raman, while these tools are still being used across the government, will you commit that NIST will continue its testing program and maintain full public access to NIST facial recognition test results?” Markey asked.

Raman’s response was measured and technical. Rather than taking a position on the policy debates surrounding facial recognition, he emphasized NIST’s role as an impartial standards body whose responsibility is to evaluate technologies through rigorous measurement science.

“What I will say is, that, as I said, I am fully committed to the AI action plan that the president has laid out …,” Raman began, before Markey cut him off to ask bluntly, “will you continue full public access to NIST’s facial recognition test results, yes or no?”

“Senator, I am not aware of exactly what the status of that particular tool is. But rest assured that if confirmed I will really lean in to making sure that NIST continues its leadership in the metrology of AI systems,” Raman said.

That framing is consistent with how NIST has historically approached controversial technologies. The agency’s evaluations do not determine whether a technology should be used, but instead provide performance data that policymakers, regulators and agencies can use when making decisions.

Markey said he was “disappointed in [Raman’s] answer,” saying, “I’m not getting a clear answer on that issue, and at the same time the American people are actually experiencing the consequences of this technology … and I’m going to continue to sound the alarm on this, cause I just think we cannot allow policies that fundamentally turn this into an Orwellian nation.”

Markey has been one of the most vocal critics in Congress of facial recognition technology, arguing that inaccuracies and demographic bias could lead to wrongful identification and civil liberties violations. His line of questioning reflected growing congressional scrutiny of how federal agencies test and deploy biometric surveillance systems.

The exchange highlighted the broader context in which Raman’s nomination is unfolding. AI standards are rapidly becoming a geopolitical battleground, with governments and companies competing to influence the technical rules that shape the global digital economy.

During the hearing Raman repeatedly emphasized the importance of the U.S. maintaining leadership in international standards bodies. He argued that when American institutions help shape those standards, the resulting frameworks are more likely to reflect democratic values and market-oriented innovation.

“If the United States leads in global technology standards, then those standards will reflect American values,” Raman said.

That focus on standards leadership was echoed by members of the committee, though often from different policy perspectives. Chairman Ted Cruz of Texas argued that NIST should concentrate on its traditional role of developing voluntary measurement standards rather than drifting into regulatory territory.

Cruz criticized the Biden administration’s AI Risk Management Framework and suggested the agency should return to a more narrowly technical mandate.

Ranking Member Maria Cantwell of Washington framed the issue differently. She highlighted bipartisan legislation aimed at strengthening NIST’s role in AI research and standards development, including proposals to expand the agency’s work testing and evaluating artificial intelligence systems.

Despite those differing emphases, both sides of the committee appeared to agree on one point. NIST’s work setting technical standards is becoming increasingly important as AI systems spread across government, industry and everyday life.

That includes technologies that have already sparked intense public debate. Facial recognition systems are now used in law enforcement investigations, border security screening and identity verification services.

At the same time, researchers and civil liberties groups have raised concerns about potential bias, misidentification and large-scale surveillance.

Because NIST’s testing programs serve as the benchmark for evaluating those systems, the agency sits at the center of that debate even though it does not regulate their use.

The fact that Raman’s prepared testimony avoided direct discussion of facial recognition may reflect the delicate position the agency occupies. As NIST director he would oversee the programs that measure how these technologies perform, but the ultimate policy decisions about whether and how they should be deployed would remain with lawmakers and regulators.

Still, the hearing underscored how closely Congress is watching NIST’s role in shaping the technical foundations of AI governance.

If confirmed, Raman would inherit responsibility for an agency whose work now extends far beyond traditional industrial measurements. From cybersecurity frameworks to AI risk guidance and biometric algorithm testing, NIST increasingly defines the technical standards that underpin the modern digital state.

And as the Senate hearing made clear, the decisions made inside NIST’s laboratories about how technologies are measured and evaluated can ripple outward into some of the most consequential debates about privacy, surveillance and the future of AI.

California’s OS-based age verification law challenges open-source community

California’s new online safety bill, AB 1043 (the Digital Age Assurance Act), adopts a declared age model for operating systems. Under the law, which is set to take effect on January 1, 2027, when a user sets up a new device, the operating system is required to ask for their age or date of birth. This declared age will be used to curate what’s available on the app store, and can be shared with developers on request to ensure age-appropriate experiences.

An article in PC Gamer points out that this “sounds incompatible with much of today’s open-source software, including Linux.” The open source community is wrestling with how to comply with the laws without violating core privacy principles.

The piece muses on technical solutions, quoting Jef Spaleta, project leader of the Fedora Project, a popular Linux distribution, who says “this might be as simple as extending how we currently map uid to usernames and group membership and having a new file in /etc/ that keeps up with age.”

Or, “it might be as simple as that and we extend the administrative cli and gui tools to populate that file as part of account creation. That might be simplest and it solves the problem for the full ecosystem of Linux OSes. Then applications just have to start choosing to look at the file.” To Spaleta, this suggests a D-Bus service, a mechanism that lets programs communicate with one another.
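A file of the sort Spaleta describes could follow the colon-delimited convention of files like /etc/passwd. As a purely illustrative sketch (the file name, format, and fields here are hypothetical; no distribution ships anything like this today), a parser mapping uid to date of birth might look like:

```python
from datetime import date

def parse_age_file(text):
    """Parse hypothetical 'uid:YYYY-MM-DD' lines into {uid: date_of_birth}."""
    ages = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        uid, dob = line.split(":", 1)
        ages[int(uid)] = date.fromisoformat(dob)
    return ages

def years_old(dob, today):
    """Whole years elapsed between dob and today."""
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

sample = """
# uid:date-of-birth (illustrative only)
1000:2010-06-15
1001:1984-01-02
"""
ages = parse_age_file(sample)
print(years_old(ages[1000], date(2027, 1, 1)))  # prints 16
```

Applications could then consult such a file (or a D-Bus service fronting it) rather than collecting age data themselves, though, as the article notes, everything still rests on whatever the user declared at account creation.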

Ubuntu, another Linux distribution, is also unsure of how to respond, and says it is consulting with its lawyers before making a plan.

California age law does not compute with DB48X

The point is, in putting the onus on operating systems to collect age data, AB 1043 is causing headaches for open source nerds. Both California’s bill and a like-minded bill in Colorado, SB26-051, have drawn the ire of the creators of an open source calculator, DB48X, described as “a project to rebuild and improve upon the ‘legendary’ HP48 family of calculators and RPL programming language, and for modding newer calculators to utilise it.”

Rather than comply, DB48X has opted to restrict access for Californians and Coloradans when (and, in Colorado’s case, if) their laws come into effect. A legal-notice file for the project says “DB48X is probably an operating system under these laws. However, it does not, cannot and will not implement age verification.”

Per PC Gamer, “you know you’ve messed up when you’ve angered the math lot.”

The calculator guys are not alone. Ground News has a roundup of articles expressing variations of grievance. WebProNews says California’s law “forces a surveillance mandate on every developer – including those who can’t comply.” The Daily Economy says “California is embedding age verification directly into digital devices. For those of us concerned with personal liberties, this is an emergency.”

No verification required, actually

PC Gamer also notes the difficulty of enforcing such a law, since “the job of checking whether people have installed its OS falls onto Californian authorities to deal with.”

“Both Californian and Coloradan bills set out civil fines of $2,500 for unintentional breaches and $7,500 for intentional breaches, but how would the majority of breaches be discovered in the first place?”

Another criticism asks why California does not specify what level or extent of age verification it requires. If it’s just a date of birth, Spaleta says, “a simple dropdown interface may suffice,” meaning “the effectiveness of such a system appears to be based on an honour system.” Self-declaration at the root negates the entire process; this would-be age verification law, in fact, does not mandate age verification at all. Technically, it’s not even age assurance.

California’s law is less than a year away from taking effect, and Colorado’s bill (which more properly labels its goal “age attestation”), if passed, would take effect January 1, 2028. Ironically, the piece ends up lamenting the speed at which new technology is becoming normalized: the laws, it says, are “coming at a time when age verification is being rolled out more widely across the globe and facing stern criticism, such as an open letter from scientists and researchers that notes the many pitfalls of ill-thought-out verification methods.”

The letter in question has provided a common reference for those opposed to age assurance laws and technologies for various reasons. The open source community now joins social media tycoons, privacy advocates and pornographers in opposing such laws, which they say are invasive and dangerous – but which lawmakers insist parents are asking for, as they work to find the right legal model.