Tag: DEEPFAKES
AI fraud picks up pace, driving need for advanced deepfake detection tools

A blog post for GetReal Security by Dr. Edward Amoroso, CEO of TAG Infosphere and research professor at NYU, looks at how and why information security executives can justify investment in deepfake detection.
“Executives fund technologies based on initiatives that demonstrate measurable risk reduction, economic value, and alignment with governance and compliance objectives,” Amoroso says. “Deepfake and continuous identity protection programs must therefore be framed not as experimental controls, but as ROI-driven investments.”
Amoroso says traditional cyber risk frameworks such as FAIR and ISO/IEC 27005 are applicable to deepfake and synthetic media threats, “even though the attack vector is new.”
“In both frameworks, risk is defined as a function of loss event frequency and loss magnitude. Deepfakes map cleanly into these structures once they are properly categorized as identity-based loss events rather than media anomalies.”
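The FAIR-style framing described above can be illustrated with a back-of-the-envelope calculation. A minimal sketch follows; every figure in it is a purely hypothetical placeholder, not an estimate from the post.

```python
# Hypothetical FAIR-style annualized loss expectancy (ALE) for
# deepfake-enabled fraud, treated as an identity-based loss event.
# All numbers below are illustrative assumptions, not data from the article.

loss_event_frequency = 0.5   # assumed deepfake fraud events per year
loss_magnitude = 400_000     # assumed average loss per event (USD)

# Risk as a function of loss event frequency and loss magnitude
ale = loss_event_frequency * loss_magnitude

# Loss-avoidance view: compare ALE against the annual cost of a
# detection / continuous-identity-assurance control that is assumed
# to prevent some fraction of those loss events.
control_cost = 60_000        # assumed annual cost of detection tooling (USD)
control_effectiveness = 0.8  # assumed fraction of loss events prevented

avoided_loss = ale * control_effectiveness
net_benefit = avoided_loss - control_cost

print(f"ALE: ${ale:,.0f}")
print(f"Avoided loss: ${avoided_loss:,.0f}, net benefit: ${net_benefit:,.0f}")
```

On these made-up inputs the control more than pays for itself, which is the shape of the loss-avoidance argument the post makes: the case rests on quantified avoided losses, not productivity gains.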
This reframing allows chief information security officers (CISOs) to “integrate deepfake risk into existing enterprise risk registers, rather than treating it as a parallel or experimental concern.”
Moreover, it makes economic sense. “On the investment side, deepfake protection costs are typically modest relative to these potential losses,” Amoroso says. “Detection platforms, continuous identity assurance tooling, and integration into collaboration environments represent a fraction of what organizations already spend on IAM, SOC operations, or fraud prevention. The economic argument becomes one of loss avoidance rather than productivity enhancement.”
There is an increasing need to maintain agile compliance postures and auditability. But, “to sustain executive support, CISOs must define metrics that move beyond simple detection counts. Useful measures include detection accuracy across voice and video channels, coverage across high-risk workflows, false positive rates, and mean time to response for identity anomalies.”
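The metrics listed above can be computed from ordinary detection logs. A minimal sketch, using an invented event log whose field names and values are assumptions for illustration only:

```python
# Hypothetical identity-anomaly events. "label" is ground truth,
# "flagged" is the detector's verdict; all records are invented.
events = [
    {"channel": "voice", "label": "fake", "flagged": True,  "response_min": 12},
    {"channel": "voice", "label": "real", "flagged": False, "response_min": None},
    {"channel": "video", "label": "fake", "flagged": True,  "response_min": 30},
    {"channel": "video", "label": "real", "flagged": True,  "response_min": 8},    # false positive
    {"channel": "voice", "label": "fake", "flagged": False, "response_min": None}, # missed fake
]

def accuracy(evts):
    """Fraction of events where the detector's verdict matched ground truth."""
    correct = sum((e["label"] == "fake") == e["flagged"] for e in evts)
    return correct / len(evts)

def false_positive_rate(evts):
    """Fraction of genuine (real) events that were wrongly flagged."""
    real = [e for e in evts if e["label"] == "real"]
    return sum(e["flagged"] for e in real) / len(real)

def mean_time_to_response(evts):
    """Mean response time in minutes over correctly flagged fakes."""
    times = [e["response_min"] for e in evts if e["flagged"] and e["label"] == "fake"]
    return sum(times) / len(times)

voice = [e for e in events if e["channel"] == "voice"]
video = [e for e in events if e["channel"] == "video"]
print("voice accuracy:", accuracy(voice))
print("video accuracy:", accuracy(video))
print("false positive rate:", false_positive_rate(events))
print("mean time to response (min):", mean_time_to_response(events))
```

Splitting accuracy by channel, and tracking false positives and response time separately, mirrors the kind of per-channel, beyond-detection-counts reporting the post argues CISOs need to sustain executive support.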
Amoroso says the call to action is clear. “Enterprise security teams should begin to treat identity authenticity as a measurable control objective. When CISOs can quantify identity protection, they can justify it, and when they justify it, they can finally defend it at scale.”
DataVisor identifies AI readiness gap between concern and defense
DataVisor’s 2026 Fraud & AML Executive Report reveals a readiness gap between concern over AI-driven fraud and financial institutions’ ability to defend against it.
According to a release, the report shows that 74 percent of surveyed senior fraud and AML leaders across banks, credit unions, fintechs and digital payments platforms cite AI-driven fraud as a top threat. But 67 percent say their organizations lack the infrastructure to deploy effective AI defenses.
Fragmented data, legacy detection models, organizational silos and outdated governance compromise defenses.
“Financial institutions are facing attackers that operate at machine speed, but many defenses still operate at legacy operational speed,” says Yinglian Xie, CEO of DataVisor. “Closing the AI Readiness Gap requires modern foundations – unified data, adaptive machine learning, adoption of LLM-based AI agents, and operational models designed for continuous, real-time response. The organizations that modernize their infrastructure and workflows will be best positioned to stay ahead.”
Paper points to future in which AI cloning tools make voice biometrics obsolete
The Bloomsbury Intelligence and Security Institute (BISI) has collaborated with CyberWomen Groups C.I.C. on the publication of “When Voice Is No Longer Proof: AI Vocal Cloning and the Limits of Voice-Based Authentication,” a paper by Hannah-Rose Shearman, a student researcher in cybersecurity and digital forensics.
CyberWomen Groups C.I.C. is “a student-led initiative dedicated to diversifying STEM by supporting and connecting university students interested in or studying cybersecurity, regardless of gender identity.”
Shearman’s paper looks at how the increasing accessibility and realism of synthetic voice technology undermines the effectiveness of voice biometrics, creating significant vulnerabilities in voice-based verification, and driving high-risk sectors such as finance and public services away from voice as a reliable biometric identifier.
“Academic and industry research has demonstrated that modern voice cloning tools can replicate vocal characteristics with high fidelity and bypass traditional authentication controls,” Shearman writes.
“Techniques that were previously resource-intensive and difficult to scale can now be performed with minimal technical expertise, owing to the increasing accessibility of commercial voice synthesis platforms such as ElevenLabs, thereby weakening long-standing assumptions about the uniqueness and reliability of voice as a standalone proof of identity.”
In short, voice used to be considered secure. But the tools for replicating voices, or generating fake ones, have gotten good enough that voices can no longer be trusted. And with social media, there is ample reference material for cloning.
That has major implications for the “political, operational, security, and economic risk landscape for institutions that rely on voice-based authentication.”
“Elevated fraud activity results in direct financial losses and increased costs for investigation, remediation, and customer support,” says Shearman. Eventually, these outweigh the benefits of using voice authentication in the first place.
“At a political and regulatory level, the growing divergence between institutional authentication practices and emerging AI capabilities exposes gaps in consumer protection and accountability frameworks.”
“While organisations continue to deploy voice biometrics as a trusted control, victims of AI-enabled impersonation encounter limited avenues for redress in jurisdictions where legal frameworks have not yet adapted to address synthetic voice misuse.”
The paper concludes with a forecast, which calls for higher volumes of impersonation attempts and related pressures over the next 12 months as AI voice cloning continues to develop – and, over the long term, continued “erosion of trust in voice as a primary identity control.”
Online dating at risk as romance scams, deepfakes infiltrate platforms

Online dating sites are being flooded with deepfakes and AI content, making it hard for users to distinguish real matches from fraud bots. At the same time, some users are discovering that a corporeal body is more of a “nice to have,” as they turn to AI companions generated by large language models (LLMs) over flesh-and-blood partners.
New data from Sumsub shows that so-called “romanceslop” is souring the experience for many users, resulting in what a release from the biometrics and anti-fraud firm describes as a hollowing-out of once useful apps. Thirty percent of those who participated in a survey say their dating experience has been “negatively affected by receiving AI-generated content.” Sixty-one percent have already been deceived by fake profiles, or know someone who has, and 84 percent say “deepfaked catfishes and AI content have made it harder to trust people or date successfully.”
Identity fraud is rampant in the online dating world. Recent research shows that 61 percent of people who have used dating apps or websites in the UK have matched with a profile they later discovered (or strongly suspected) was a bot, scammer or catfish. It’s just as bad across the pond; a report from Politico cites FBI data showing that Americans lost more than $16 billion to cybercrime, including romance scams, in 2024.
Sumsub says “modern widespread and powerful AI tools, like Google’s Nano Banana, have given experienced online fraudsters the means to almost perfect messages and images that can deceive even the savviest romantic.”
Shall I compare thee to an LLM? 36% say yes
Some might worry AI is spoiling love, but others are embracing it. Among the 2,000 UK-based respondents, 36 percent have used an AI companion as an alternative to dating apps – and 50 percent of all women are open to the idea.
Meanwhile, 32 percent use AI tools as a dating coach or to write messages. So even for those who aren’t giving their heart to Claude, LLMs are serving as virtual Cupids who can guide them in matters of the heart. As far as profiles go, 60 percent of users believe “some AI-altered content should be allowed” on dating platforms – but 42 percent “have zero tolerance for any image alterations.”
Sumsub notes the paradox at play: “adoption of AI features is growing steadily even as trust and confidence falls.” However you slice it, online dating has fundamentally changed.
What hasn’t is the desire for a safe and secure online experience. Eighty-one percent of respondents believe dating platforms should be held responsible for malicious content hosted on their platforms. “The imperative on dating apps is clear,” Sumsub says. “Govern AI content to protect online daters, or scramble to react when bad actors cause serious harm.”
Get safe or become obsolete: Sumsub
“Platforms have a clear responsibility to protect users without restricting how they choose to engage online,” says Nikita Marshalkin, head of machine learning at Sumsub. The company has found that many users are willing to accept AI-enhanced dating experiences with appropriate guardrails in place – which means it’s critical how those guardrails are managed.
“Users can’t be blamed for using AI features offered to them, nor can they be expected to manage the resulting wave of AI content without support,” Marshalkin says. “A blanket ban isn’t the answer, but without exhaustive governance and improved user awareness around deepfakes and misleading content, online dating will soon become more trouble than it’s worth.”
“The response from the dating industry is going to be watched very closely by businesses in other sectors who are waking up to how basic verification checks can’t compete with the increasingly sophisticated methods scammers use today.”
Gen Z showing increased preference for in-person dates
Gen Z, as it turns out, is getting wise to the risks associated with online dating, and opting to look for romance face-to-face. New data from Barclays shows that, with around two in three (67 percent) reports of romance scams originating on dating sites and social media platforms in 2025, 56 percent of Gen Z singles are now prioritizing meeting a partner in person.
“One in two Gen Z singletons say AI scam concerns have changed how they date online – almost double the 25 per cent national average,” the research says. “In an apparent reversal of a trend towards dating apps in recent years, 56 per cent of Gen Z singles say they’re focusing on meeting a partner in real life, rather than via online dating – significantly higher than the 42 per cent average across generations.”