Voice AI expands attack surface for speaker biometrics as APIs proliferate

Deepfake voices are already a challenge for authentication systems. But the task is getting tougher, as big players pursue voice AI products that could turn speech into a scalable attack surface for identity systems, creating a world in which synthetic speech represents a real identity infrastructure risk.
The latest to join the likes of ElevenLabs and OpenAI in offering voice AI APIs is xAI – the same firm that gave the world Grok the Deepfake Nude Machine. Marktechpost reports that the company has launched standalone Speech-to-Text (STT) and Text-to-Speech (TTS) APIs, “both built on the same infrastructure that powers Grok Voice on mobile apps, Tesla vehicles, and Starlink customer support.”
The market for speech APIs is getting busier. Rapid advances in voice AI are lowering costs and skill barriers for voice cloning, and companies such as Deepgram and AssemblyAI already have established user bases. Others will follow xAI into the market.
The cumulative result is an undermining of trust in voice as an authentication factor – and a need to rethink speaker biometrics in the context of agentic identity.
Grok, say ‘I need help’ in the voice of Morgan Freeman
Grok’s APIs will make it even easier for millions of people to create believable synthetic voices. The text-to-speech API, which converts written text into spoken audio, “delivers fast, natural speech synthesis with detailed control via speech tags, and is priced at $4.20 per 1 million characters.” It supports 20 languages and five distinct voices, with speech tags used to manipulate delivery.
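That pricing makes the economics of synthetic speech concrete. A back-of-envelope sketch (the $4.20-per-million-characters rate is the only figure taken from the article; the message length is a hypothetical):

```python
# Back-of-envelope cost of synthetic speech at the quoted TTS rate.
RATE_PER_MILLION_CHARS = 4.20  # USD, per the reported pricing

def tts_cost(num_chars: int) -> float:
    """Cost in USD to synthesize `num_chars` characters of text."""
    return num_chars / 1_000_000 * RATE_PER_MILLION_CHARS

# A typical scam script might run ~500 characters; voicing a million
# such messages would cost only a few thousand dollars.
print(f"${tts_cost(500):.4f} per message")              # $0.0021 per message
print(f"${tts_cost(500 * 1_000_000):,.2f} per million") # $2,100.00 per million
```

At a fraction of a cent per message, cost is no longer a meaningful barrier to voice-based fraud at scale.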
Grok’s record on nefarious use speaks for itself. What are the chances the same user base that flooded X with fake nudes will see the potential for fraud and mischief in the AI’s TTS API? It is a rhetorical question, but it has real-world implications for voice as a reliable biometric modality for identity infrastructure.
In recent weeks, ElevenLabs launched a system to enable companies to deploy AI agents. According to USA Today, the tool “allows teams to convert internal documentation and workflows into conversational agents, without the need for extensive technical development.”
“These agents are designed to follow structured processes, but deliver responses that sound natural within context.”
This month, Microsoft also launched three new foundational AI models, including a voice generation engine, MAI-Voice-1.
Consider how many phone calls already come from bots. Now consider how easily AI can clone the voice of a loved one. The threshold for certainty is disappearing, at least without rigorous voice liveness checks and continuous monitoring. The question becomes: is voice worth the risk?
Be careful whose voice offers an answer.
AI Robotics: Moving from the lab to the real-world factory floor
Learn the real-world infrastructure and effort required to deploy AI robotics from leaders at Universal Robots, PickNik, and Path Robotics.
The post AI Robotics: Moving from the lab to the real-world factory floor appeared first on The Robot Report.
Celebrities will be able to find and request removal of AI deepfakes on YouTube
YouTube is expanding its AI deepfake monitoring feature to Hollywood, meaning some celebrity AI videos could soon disappear. The platform’s likeness detection feature searches YouTube for AI deepfake content and flags it for public figures enrolled in the program, who can use it to keep track of AI content of themselves on YouTube […]
Study finds AI fraud losses decline, but the risks are growing

While identity fraud losses have stabilized and scam losses have declined, new account fraud has surged, fueled in part by AI, which “is reshaping the fraud landscape,” says Javelin Strategy & Research’s 2026 annual identity fraud study, The Illusion of Progress.
“Artificial intelligence is at the center of these shifts in fraud and scam losses,” the report says. “Financial institutions are increasingly investing in AI and autonomous technology to improve fraud detection,” but, “at the same time, fraudsters are adopting the same tools, at faster paces, to broaden their reach with more convincing scams and efficient fraud schemes.”
“To fight the rising misuse of AI by criminals, financial organizations need to update fraud controls, enhance collaboration, and treat fraud detection as an ongoing process rather than a one-time decision,” the report says.
The study, conducted with support from TransUnion, Fiserv, Plaid, and Mastercard, reported that combined fraud and scam losses totaled $38 billion in 2025, a $9 billion reduction from 2024. The number of victims also fell by four million, to 36 million in 2025.
Traditional identity fraud losses remained steady at $27.3 billion in 2025, affecting 18 million victims, the report says, with the number of victims increasing across all fraud types.
New-account fraud experienced the sharpest rise in the number of victims, a 31 percent increase from 4.2 million in 2024, to 5.4 million in 2025, suggesting that the stability in reported losses might mask shifts in fraud activity.
Account takeover, which makes up a part of both existing card fraud and non-card fraud, saw an 18 percent increase in victims from 5.1 million in 2024, to 6 million in 2025. Reduced losses do not translate to reduced risk.
Scam losses fell 45 percent year-over-year, from $19.5 billion in 2024 to $10.7 billion in 2025. This decline is attributed to an increase in scams that don’t result in immediate monetary loss but can lead to future fraud or account compromises.
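As a quick sanity check, the quoted figures hang together arithmetically. A sketch using only numbers cited from the report (the rounded victim counts reproduce the reported percentages to within rounding):

```python
# Cross-check the Javelin figures quoted above (all inputs are from the report).
identity_2025 = 27.3                # $B, traditional identity fraud losses
scam_2024, scam_2025 = 19.5, 10.7   # $B, scam losses
combined_2025 = 38.0                # $B, reported combined total

# The combined total matches the sum of its components.
assert abs((identity_2025 + scam_2025) - combined_2025) < 0.05

# Scam losses: (19.5 - 10.7) / 19.5 is a 45% decline, as reported.
print(f"scam decline: {(scam_2024 - scam_2025) / scam_2024:.0%}")   # 45%

# Account takeover victims: 5.1M -> 6M is ~18%, matching the report.
print(f"ATO victim rise: {(6.0 - 5.1) / 5.1:.0%}")                  # 18%

# New-account fraud: 4.2M -> 5.4M is ~29% on these rounded counts;
# the report's 31 percent presumably reflects unrounded victim totals.
print(f"new-account rise: {(5.4 - 4.2) / 4.2:.0%}")                 # 29%
```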
Suzanne Sando, lead analyst in fraud management at Javelin and the report’s author, emphasized that the decline in scam-related losses doesn’t mean the overall risk has decreased.
“Reduced losses do not mean reduced risk,” Sando said. “A 45 percent drop may look like progress, but scammers are increasingly stealing information instead of money, setting up future fraud that doesn’t show up in today’s loss figures.”
New account fraud’s 31 percent rise in victims came with $7 billion in losses, while romance scams, despite a 41 percent decline, continue to affect vulnerable populations.
Account takeover fraud remained the costliest fraud type: losses dipped slightly, but victims rose 18 percent.
“Fraud losses may look stable on paper, but the risk hasn’t gone away,” Sando told Biometric Update. “Without stronger identity, smarter controls, and real intelligence sharing and collaboration, today’s progress will prove to be nothing more than a temporary pause.”
Sando added that “we’ve reached an inflection point where fraudsters are outpacing banks in AI adoption. Fraudsters don’t have to worry about the same compliance or regulatory guardrails that financial services organizations do, and that gives them an advantage that financial services cannot afford to ignore.”
The report also points to a significant shift in consumer behavior, with many increasingly suspicious of fraud alerts from financial institutions.
“Despite decreases in scam losses … the rise in AI-fueled bank impersonation and purchase scams [has] made consumers skeptical of legitimate communications from their financial institutions,” the report says, noting that “many consumers now hesitate to engage with fraud alerts, unintentionally exposing themselves to serious fraud risk, simply because they are unsure of the alert’s legitimacy.”
“The result,” the report says, “is delayed action (or sometimes no action at all), unresolved fraud, and increased consumer frustration and confusion.”
As AI-driven scams grow in sophistication, consumers are more likely to distrust messages from their banks. In fact, 55 percent of consumers who received a fraud alert from their financial institution chose not to respond, fearing that the alert itself might be a scam.
AI is being used by financial institutions to enhance fraud detection, but concerns are mounting that current fraud controls may not be sufficient to cope with the rapidly evolving threat landscape.
The study serves as a wake-up call, urging financial institutions to strengthen fraud defenses through continuous monitoring and more effective collaboration, and to adapt to a fraud landscape being reshaped by AI.
Last November, Javelin said “the fraud landscape in 2026 and beyond will experience significant changes due to … rapidly evolving mule activity, the importance of distinguishing between agentic commerce bots and malicious automation, and the imminent threat that phantom hacker scams pose to consumers across all demographics.”
“These trends require financial institutions to be flexible and willing to adapt to new technologies that provide stronger defenses against some of the costliest forms of fraud and scams.”
Last month the company warned that “escalating concerns about cyberattacks linked to Iranian-backed attackers and hacktivists put U.S. banks and other critical infrastructure sectors on high alert.”
The company also said that “criminals have set their sights on vulnerabilities during the payment transaction process through the growing adoption of real-time payments and digital wallets.”
As the line between legitimate and fraudulent activities continues to blur, financial services must keep innovating to stay ahead of fraudsters.