Fake media is an everyday problem in the age of GenAI

The world at large is quickly learning about topics once confined to the dorkier corners of identity, as deepfake technology reshapes perceptible reality at a swift clip. Whether it’s the easily generated video deepfakes that OpenAI’s Sora makes possible or the algorithmically generated voices on scam calls, synthetic media is now good enough to fool most people, and topics like liveness detection, deepfake detection and likeness protection are suddenly urgent priorities.
The biometrics industry has been on the case for years, but as fraud attacks have evolved, so too has the need for innovative protections.
Deepfakes a top fraud threat according to new Regula report
According to new data from vendor Regula, the three most frequent fraud types globally in 2025 are identity spoofing, biometric fraud and deepfake fraud.
“The battlefield has moved toward impersonation: the deliberate manipulation of identity signals – biometric, video and synthetic – that target both human verifiers and machine algorithms,” says the report, The Future of Identity Verification: 5 Threats and 5 Opportunities. For highly affected organizations with losses exceeding 5 million dollars, deepfakes and synthetic IDs top the list as the most common fraud types.
The key takeaway on deepfakes from Regula’s report is that “deepfakes are no longer fringe threats – they are the main driver of identity fraud at scale. For firms exposed to high-volume, high-loss deepfake attacks, presentation attack detection (PAD) and advanced liveness have become baseline requirements, not optional add-ons.”
Execs reflect on growing deepfake threat
An article from Asian Banking and Finance backs this up with reporting from the 2025 Singapore Fintech Festival, where executives from Mastercard, DBS, Swift, Ant International and Sumitomo Mitsui Banking Corporation (SMBC) identified AI-driven cyberattacks, deepfakes, data breaches, and algorithm failures as the biggest threats to the industry.
Craig Vosburg, chief services officer at Mastercard, points out that cybercrime is now so large in absolute dollar terms that if it were a country, “it would be the third largest GDP in the world.” Vosburg recommends partnerships and investment in technological safeguards as the best place to start mounting defenses.
Ant International’s Chief Executive Officer, Yang Peng, says that while his firm detected the first deepfake in its system in January 2024, it now sees 20,000 deepfake attack attempts a day globally, mostly targeting Asia.
Swift Chief Executive Officer Javier Pérez-Tasso says technology is often the easy part. “It’s governance, controls, frameworks, the right processes, standardisation, upskilling and reskilling that are fundamental for scaling AI safely.” He calls the standardization of AI frameworks and cross-border cooperation “the foundation of the future industry.”
“With AI, we are going to have domestic frameworks that will need to interoperate globally. Public and private sector collaboration will be fundamental.”
Regardless of the extent of the threat, enthusiasm for AI applications still appears to be high: Yoshihiro Hyakutome, deputy president executive officer and co-head of global banking unit at SMBC, says the bank “created an avatar CEO so that junior staff can reach out to this avatar CEO and ask tough questions. This AI will challenge them. It’s basically enabling employees to feel that AI is their boss, their colleague, their co-worker.”
Sora 2 deepfakes flood industries from law to healthcare
An article in Dark Reading focuses on the risks posed by Sora 2, OpenAI’s generative AI tool for video. The author insists on the standard hedge, proclaiming that “there are plenty of beneficial use cases for GenAI in terms of promoting creativity, speed, and scale” – ignoring that many in the creative industries have had their work unlawfully used to train AI models, and are not happy about it.
Nonetheless, having genuflected to OpenAI’s company line, they go on to explore the imminent risks Sora 2 presents for individuals and enterprises. “Attackers can abuse Sora 2 to enhance social engineering tactics and manipulate even some of the more adept users with convincing deepfakes. OpenAI already had to tighten the guardrails against deepfakes in Sora 2 after the actors’ union SAG-AFTRA lodged a complaint.”
The piece quotes Ben Colman, CEO of deepfake detection firm Reality Defender, who says that in the absence of regulations, the risks of identity fraud, financial fraud and threats to public safety are high and getting higher, as OpenAI pushes its products into an unprepared regulatory environment.
Colman notes improvements to voice authenticity in Sora 2 as a particular concern, as more realistic cadence and expression make it easier to simulate longer conversations. Begin thinking through the risks and the threat is apparent: a deepfake boss could call and tell you to wire money to an offshore account; deepfake healthcare practitioners could defraud patients; legal teams could no longer be certain whether a piece of video evidence is real or synthesized.
None of this is causing the tech industry to slow its pace: Google’s generative video tool, Veo, is said to be able to produce even more realistic video than Sora 2.
Reality Defender looks at 3 deepfake threat vectors noted by MAS
Reality Defender is itself a frequent resource for insights on deepfake developments, and a new blog post from Chief Revenue Officer Brian Levin reflects on three factors driving the deepfake threat to financial institutions, as identified by the Monetary Authority of Singapore (MAS).
The first finding is that biometric authentication systems are being defeated. Levin cites examples from Indonesia, Thailand and Vietnam of biometric systems being beaten with AI-generated deepfake photos and stolen assets.
“MAS recommends implementing liveness detection techniques that analyze motion, texture, and 3D depth during authentication,” he says. “Organizations should prompt users to perform specific actions during verification rather than accepting static images. For non-facial biometrics like fingerprint or palm vein recognition, detection techniques must be tailored to identify synthetic reproductions of those specific characteristics.”
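To make the active-liveness recommendation concrete, here is a minimal, hypothetical challenge-response sketch in Python: the system issues a random action prompt and only accepts the check if the requested motion is observed within a short window. The function names and the motion-analysis step are illustrative placeholders, not any vendor’s API.

```python
import random
import time

# Hypothetical challenge-response liveness check (illustrative only).
# The motion/texture/depth analysis would come from a real PAD engine;
# here it is stubbed out as detect_action().

CHALLENGES = ["turn_head_left", "turn_head_right", "blink_twice", "smile"]

def detect_action(frames, action: str) -> bool:
    """Placeholder for a PAD/liveness model that checks whether the
    requested action actually occurs in the captured frames."""
    raise NotImplementedError("plug in a real liveness/PAD engine here")

def run_liveness_check(capture_frames, timeout_s: float = 10.0) -> bool:
    challenge = random.choice(CHALLENGES)   # unpredictable prompt defeats replayed or static media
    print(f"Please perform this action: {challenge}")
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        frames = capture_frames()           # short burst of frames from the camera
        if detect_action(frames, challenge):
            return True                     # requested motion observed: treat as live
    return False                            # no matching motion: reject as a possible presentation attack
```

The point of the random prompt is that a pre-recorded or AI-generated clip cannot know in advance which action will be requested, which is why MAS favors this over accepting static images.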
The second finding is that “deepfake technology amplifies traditional social engineering by creating hyper-realistic impersonations of executives, colleagues, and trusted contacts during video calls and voice communications.” The now-classic example comes from Hong Kong, where a video meeting was hijacked with an injection attack and deepfaked bosses ordered an Arup employee to transfer 25 million dollars to outside accounts. The current rash of employment fraud also falls into this category, as deepfake candidates infiltrate remote recruitment processes.
“MAS emphasizes that organizations must implement multi-factor authentication for high-privilege accounts and high-risk activities, including wire transfers and access to sensitive data,” Levin says.
The final finding from MAS is that “misinformation campaigns target market confidence.” Scammers impersonate public figures or broadcast fake footage of events to manipulate markets. As such, “MAS recommends implementing monitoring tools to detect deepfake-based brand abuse and impersonation attempts across digital channels, including social media, websites, video platforms and news sources.”
The overarching message is that deepfakes are not a tomorrow problem, and organizations must act now to bolster defenses.
Secured Signing integrates Reality Defender deepfake detection
Secured Signing, which offers digital signature and remote online notarization (RON) services, has announced that it will integrate Reality Defender’s deepfake detection layer into its security measures to launch an exclusive deepfake detection tool called Realify.
A release says Realify uses Reality Defender’s comprehensive, multi-modal detection technology to analyze a signer’s video and audio, making sure they are authentic before and during an online meeting.
Its automatic deepfake verification process features a simple UX for identity verification through a facial scan, which can be repeated at any time throughout the process. Realify’s AI models analyze facial and audio data to generate a real-time risk score and provide a detailed report.
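For illustration only (none of the weights, fields or function names below come from Reality Defender or Secured Signing documentation), a multi-modal risk score of this kind is typically a fusion of per-modality deepfake probabilities, with the session flagged when the combined score crosses a threshold:

```python
from dataclasses import dataclass

# Illustrative only: the weights, threshold and score fields are assumptions,
# not taken from any vendor's documentation.

@dataclass
class ModalityScores:
    face_fake_prob: float    # 0.0 (authentic) .. 1.0 (likely synthetic)
    voice_fake_prob: float

def combined_risk(scores: ModalityScores, w_face: float = 0.6, w_voice: float = 0.4) -> float:
    """Weighted fusion of per-modality deepfake probabilities."""
    return w_face * scores.face_fake_prob + w_voice * scores.voice_fake_prob

def assess_session(scores: ModalityScores, threshold: float = 0.5) -> dict:
    risk = combined_risk(scores)
    return {
        "risk_score": round(risk, 3),
        "flagged": risk >= threshold,   # flag the signing session for manual review
    }

print(assess_session(ModalityScores(face_fake_prob=0.12, voice_fake_prob=0.08)))
```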
“We live in an era where the authenticity of digital interactions is constantly under attack,” says Ben Colman, CEO of Reality Defender. “Sophisticated AI-generated media and deepfakes pose a direct threat to the high-trust processes that underpin our economy, such as legal agreements and notarizations. This partnership with Secured Signing is a critical step in building a broader ecosystem of trust.”
Reality Defender is celebrating its recent induction into the J.P. Morgan 2025 Hall of Innovation, which recognizes its innovations and measurable business impact.
Pindrop partners with BT to detect audio deepfakes for UK enterprise customers
On the voice deepfake file, Pindrop has announced a strategic partnership with BT Group that will see it deploy its voice security solutions to BT Group’s enterprise customers across the UK.
A release says BT will integrate Pindrop Protect and Pindrop Passport, the company’s patented authentication and fraud detection technologies, into its enterprise portfolio, with the aim of reducing operational costs while enhancing security posture.
“With 1 in 106 calls already showing signs of deepfake activity, threats like synthetic speech and agentic AI are rewriting the fraud playbook,” says Bucky Wallace, chief revenue officer for Pindrop. “Together with BT, we’re giving UK enterprises a modern defence – advanced voice intelligence that continuously adapts, spots risk earlier, and future-proofs contact centres for both security and customer experience.”
Pindrop technology combines device recognition, “phoneprinting” technology, behavioral analysis and synthetic deepfake detection. Its flexible integration architecture was a factor in the partnership, helping ensure that BT can support existing and prospective customers across varied contact centre environments.
More deepfake detection tools come online
3DiVi has launched its 3DiVi Deepfake Detector online demo, which a release says “allows users to upload videos or connect a live camera stream and determine frame by frame whether the content is a deepfake.” It offers users the opportunity for unlimited testing, and is freely available to anyone.
“More than a demo, 3DiVi Deepfake Detector is an API-ready module that can be seamlessly integrated into existing security, media, and verification platforms, enabling automated detection at scale,” the company says.
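As a rough sketch of what frame-by-frame scanning looks like in practice (the scoring function below is a placeholder, not 3DiVi’s actual API), a caller might loop over a video’s frames, score each one with a detection model, and record which frames look synthetic:

```python
import cv2  # pip install opencv-python

# Illustrative frame-by-frame scoring loop; score_frame() is a placeholder
# for a real deepfake-detection model or API, not 3DiVi's actual interface.

def score_frame(frame) -> float:
    """Placeholder: return the probability that this frame is synthetic."""
    raise NotImplementedError("call a real deepfake-detection model or API here")

def scan_video(path: str, threshold: float = 0.5):
    cap = cv2.VideoCapture(path)
    flagged = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if score_frame(frame) >= threshold:
            flagged.append(idx)          # remember which frames look synthetic
        idx += 1
    cap.release()
    return flagged                        # indices of suspect frames
```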
A team from Australia’s national science agency, CSIRO, Federation University Australia and RMIT University has developed a method to improve the detection of audio deepfakes. A news release says the technique, called Rehearsal with Auxiliary-Informed Sampling (RAIS), automatically selects and stores a “small, but diverse set of past examples, including hidden audio traits that humans may not even notice,” to help the algorithm internalize new traits without forgetting old ones. The goal is a richer mix of training data.
“RAIS employs a label generation network to produce auxiliary labels, guiding diverse sample selection for the memory buffer. Extensive experiments show RAIS outperforms state-of-the-art methods, achieving an average Equal Error Rate (EER) of 1.953 percent across five experiences.”
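For readers new to the jargon: EER is the operating point at which a detector’s false acceptance and false rejection rates are equal, and “rehearsal” means replaying a small memory buffer of past examples while training on new data. The sketch below illustrates the general idea of auxiliary-label-guided, diversity-aware buffer selection under simplifying assumptions; it is not the authors’ published RAIS implementation.

```python
import random
from collections import defaultdict

# Simplified sketch of rehearsal with auxiliary-informed sampling (assumptions,
# not the published RAIS code): each stored clip carries an auxiliary label
# produced by a separate labelling network, and the memory buffer is filled so
# that those auxiliary labels stay as diverse as possible.

class RehearsalBuffer:
    def __init__(self, capacity: int = 512):
        self.capacity = capacity
        self.buffer = []  # list of (features, target, aux_label)

    def add(self, features, target, aux_label):
        if len(self.buffer) < self.capacity:
            self.buffer.append((features, target, aux_label))
            return
        # Evict from the most over-represented auxiliary label to keep the mix diverse.
        counts = defaultdict(list)
        for idx, (_, _, aux) in enumerate(self.buffer):
            counts[aux].append(idx)
        dominant = max(counts, key=lambda k: len(counts[k]))
        self.buffer[random.choice(counts[dominant])] = (features, target, aux_label)

    def sample(self, batch_size: int = 32):
        """Mix replayed past examples into each new training batch."""
        k = min(batch_size, len(self.buffer))
        return random.sample(self.buffer, k)
```

The design goal matches the release’s description: by keeping the buffer diverse across traits a human might not notice, the model can learn new spoofing styles without forgetting the ones it has already seen.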