Tag: FRAUD
AI fraud pushing pace on need for advanced deepfake detection tools

A blog post for GetReal Security by Dr. Edward Amoroso, CEO of TAG Infosphere and research professor at NYU, looks at how and why information security executives can justify investment in deepfake detection.
“Executives fund technologies based on initiatives that demonstrate measurable risk reduction, economic value, and alignment with governance and compliance objectives,” Amoroso says. “Deepfake and continuous identity protection programs must therefore be framed not as experimental controls, but as ROI-driven investments.”
Amoroso says traditional cyber risk frameworks such as FAIR and ISO/IEC 27005 are applicable to deepfake and synthetic media threats, “even though the attack vector is new.”
“In both frameworks, risk is defined as a function of loss event frequency and loss magnitude. Deepfakes map cleanly into these structures once they are properly categorized as identity-based loss events rather than media anomalies.”
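The frequency-times-magnitude definition above can be sketched in a few lines. This is a minimal illustration of a FAIR-style calculation, not from the article; the function name and all dollar figures are hypothetical placeholders.

```python
# Minimal FAIR-style sketch: risk as a function of loss event frequency
# and loss magnitude, applied to a deepfake categorized as an
# identity-based loss event. All figures are hypothetical.

def annualized_loss_exposure(loss_event_frequency: float,
                             loss_magnitude: float) -> float:
    """FAIR-style risk: expected loss events per year times average loss."""
    return loss_event_frequency * loss_magnitude

# Hypothetical: 2 successful deepfake-enabled fraud events per year,
# averaging $250,000 per event.
ale = annualized_loss_exposure(loss_event_frequency=2.0,
                               loss_magnitude=250_000.0)
print(f"Annualized loss exposure: ${ale:,.0f}")  # Annualized loss exposure: $500,000
```

Framed this way, a deepfake entry in the risk register carries the same quantities as any other loss event, which is what lets it sit alongside existing entries rather than in a parallel process.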
This reframing allows chief information security officers (CISOs) to “integrate deepfake risk into existing enterprise risk registers, rather than treating it as a parallel or experimental concern.”
Moreover, it makes economic sense. “On the investment side, deepfake protection costs are typically modest relative to these potential losses,” Amoroso says. “Detection platforms, continuous identity assurance tooling, and integration into collaboration environments represent a fraction of what organizations already spend on IAM, SOC operations, or fraud prevention. The economic argument becomes one of loss avoidance rather than productivity enhancement.”
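The loss-avoidance argument reduces to simple arithmetic: expected loss avoided by the control, minus what the control costs. The sketch below is illustrative only; the function name, risk-reduction factor, and dollar amounts are assumptions, not figures from the article.

```python
# Hedged sketch of the loss-avoidance case for a deepfake detection
# program. All inputs are hypothetical.

def net_loss_avoidance(expected_annual_loss: float,
                       risk_reduction: float,
                       program_cost: float) -> float:
    """Expected annual loss avoided by the control, net of its cost."""
    return expected_annual_loss * risk_reduction - program_cost

# Hypothetical: $500k expected annual loss, the control cuts risk by 60%,
# and the program costs $120k per year.
print(net_loss_avoidance(500_000.0, 0.60, 120_000.0))  # 180000.0
```

A positive result is the "loss avoidance rather than productivity enhancement" case in numerical form: the program pays for itself even though it produces nothing.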
There is an increasing need to maintain agile compliance postures and auditability. But, “to sustain executive support, CISOs must define metrics that move beyond simple detection counts. Useful measures include detection accuracy across voice and video channels, coverage across high-risk workflows, false positive rates, and mean time to response for identity anomalies.”
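The metrics named above (detection accuracy, false positive rate, mean time to response) fall out of a standard confusion-matrix tally. The toy dataset and field layout below are hypothetical, just to show the computation.

```python
# Illustrative computation of the program metrics named in the text.
# Each record: (is_synthetic, flagged_by_detector, minutes_to_response).
# The data is invented for demonstration.

detections = [
    (True,  True,  12.0),   # deepfake caught, responded in 12 min
    (True,  False, None),   # missed deepfake
    (False, False, None),   # genuine, correctly passed
    (False, True,  30.0),   # false positive
]

true_pos  = sum(1 for s, f, _ in detections if s and f)
true_neg  = sum(1 for s, f, _ in detections if not s and not f)
false_pos = sum(1 for s, f, _ in detections if not s and f)

accuracy = (true_pos + true_neg) / len(detections)
false_positive_rate = false_pos / sum(1 for s, _, _ in detections if not s)

# Mean time to response, over flagged events that got a response.
times = [m for _, f, m in detections if f and m is not None]
mttr = sum(times) / len(times)

print(accuracy, false_positive_rate, mttr)  # 0.5 0.5 21.0
```

Tracked per channel (voice vs. video) and per high-risk workflow, these are the kind of figures that sustain executive support better than raw detection counts.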
Amoroso says the call to action is clear. “Enterprise security teams should begin to treat identity authenticity as a measurable control objective. When CISOs can quantify identity protection, they can justify it, and when they justify it, they can finally defend it at scale.”
DataVisor identifies AI readiness gap between concern and defense
DataVisor’s 2026 Fraud & AML Executive Report reveals a readiness gap between concern over AI-driven fraud and financial institutions’ ability to defend against it.
According to a release, the report shows that 74 percent of surveyed senior fraud and AML leaders across banks, credit unions, fintechs and digital payments platforms cite AI-driven fraud as a top threat. But 67 percent say their organizations lack the infrastructure to deploy effective AI defenses.
Fragmented data, legacy detection models, organizational silos and outdated governance compromise defenses.
“Financial institutions are facing attackers that operate at machine speed, but many defenses still operate at legacy operational speed,” says Yinglian Xie, CEO of DataVisor. “Closing the AI Readiness Gap requires modern foundations – unified data, adaptive machine learning, adoption of LLM-based AI agents, and operational models designed for continuous, real-time response. The organizations that modernize their infrastructure and workflows will be best positioned to stay ahead.”
Paper points to future in which AI cloning tools make voice biometrics obsolete
The Bloomsbury Intelligence and Security Institute (BISI) has collaborated with CyberWomen Groups C.I.C. on the publication of “When Voice Is No Longer Proof: AI Vocal Cloning and the Limits of Voice-Based Authentication,” a paper by Hannah-Rose Shearman, a student researcher in cybersecurity and digital forensics.
CyberWomen Groups C.I.C. is “a student-led initiative dedicated to diversifying STEM by supporting and connecting university students interested in or studying cybersecurity, regardless of gender identity.”
Shearman’s paper looks at how the increasing accessibility and realism of synthetic voice technology undermines the effectiveness of voice biometrics, creating significant vulnerabilities in voice-based verification, and driving high-risk sectors such as finance and public services away from voice as a reliable biometric identifier.
“Academic and industry research has demonstrated that modern voice cloning tools can replicate vocal characteristics with high fidelity and bypass traditional authentication controls,” Shearman writes.
“Techniques that were previously resource-intensive and difficult to scale can now be performed with minimal technical expertise, owing to the increasing accessibility of commercial voice synthesis platforms such as ElevenLabs, thereby weakening long-standing assumptions about the uniqueness and reliability of voice as a standalone proof of identity.”
In short, voice used to be considered secure. But the tools for replicating voices, or generating synthetic ones, have become good enough that a voice can no longer be trusted on its own. And with social media, there is ample reference material for cloning.
That has major implications for the “political, operational, security, and economic risk landscape for institutions that rely on voice-based authentication.”
“Elevated fraud activity results in direct financial losses and increased costs for investigation, remediation, and customer support,” says Shearman. Eventually, these outweigh the benefits of using voice authentication in the first place.
“At a political and regulatory level, the growing divergence between institutional authentication practices and emerging AI capabilities exposes gaps in consumer protection and accountability frameworks.”
“While organisations continue to deploy voice biometrics as a trusted control, victims of AI-enabled impersonation encounter limited avenues for redress in jurisdictions where legal frameworks have not yet adapted to address synthetic voice misuse.”
The paper concludes with a forecast: higher volumes of impersonation attempts and related pressures over the next 12 months as AI voice cloning continues to develop – and, over the long term, continued “erosion of trust in voice as a primary identity control.”