Category: Technology
Fake media is an everyday problem in the age of GenAI

The world at large is quickly learning about topics once confined to the dorkier corners of the identity industry, as deepfake technology reshapes perceptible reality at a swift clip. Whether it’s the easily generated video deepfakes that OpenAI’s Sora makes possible or the algorithmically generated voices on scam calls, synthetic media is now good enough to fool most people, and topics like liveness detection, deepfake detection and likeness protection are suddenly urgent priorities.
The biometrics industry has been on the case for years, but as fraud attacks have evolved, so too has the need for innovative protections.
Deepfakes a top fraud threat, according to new Regula report
According to new data from vendor Regula, the three most frequent fraud types globally in 2025 are identity spoofing, biometric fraud and deepfake fraud.
“The battlefield has moved toward impersonation: the deliberate manipulation of identity signals – biometric, video and synthetic – that target both human verifiers and machine algorithms,” says the report, The Future of Identity Verification: 5 Threats and 5 Opportunities. For highly affected organizations with losses exceeding 5 million dollars, deepfakes and synthetic IDs top the list as the most common fraud types.
The key takeaway on deepfakes from Regula’s report is that “deepfakes are no longer fringe threats – they are the main driver of identity fraud at scale. For firms exposed to high-volume, high-loss deepfake attacks, presentation attack detection (PAD) and advanced liveness have become baseline requirements, not optional add-ons.”
Execs reflect on growing deepfake threat
An article from Asian Banking and Finance backs this up with reporting from the 2025 Singapore Fintech Festival, where executives from Mastercard, DBS, Swift, Ant International and Sumitomo Mitsui Banking Corporation (SMBC) identified AI-driven cyberattacks, deepfakes, data breaches and algorithm failures as the biggest threats to the industry.
Craig Vosburg, chief services officer at Mastercard, points out that, in absolute dollar terms, if cybercrime were a country, “it would be the third largest GDP in the world.” Vosburg recommends partnerships and investment in technological safeguards as the best place to start mounting defenses.
Ant International’s Chief Executive Officer, Yang Peng, says that while his firm detected the first deepfake in its system in January 2024, it now sees 20,000 deepfake attack attempts a day globally, mostly targeting Asia.
Swift Chief Executive Officer Javier Pérez-Tasso says technology is often the easy part. “It’s governance, controls, frameworks, the right processes, standardisation, upskilling and reskilling that are fundamental for scaling AI safely.” He calls the standardization of AI frameworks and cross-border cooperation “the foundation of the future industry.”
“With AI, we are going to have domestic frameworks that will need to interoperate globally. Public and private sector collaboration will be fundamental.”
Regardless of the extent of the threat, enthusiasm for AI applications still appears to be high: Yoshihiro Hyakutome, deputy president executive officer and co-head of global banking unit at SMBC, says the bank “created an avatar CEO so that junior staff can reach out to this avatar CEO and ask tough questions. This AI will challenge them. It’s basically enabling employees to feel that AI is their boss, their colleague, their co-worker.”
Sora 2 deepfakes flood industries from law to healthcare
An article in Dark Reading focuses on the risks posed by Sora 2, OpenAI’s generative AI tool for video. The author insists on the standard hedge, proclaiming that “there are plenty of beneficial use cases for GenAI in terms of promoting creativity, speed, and scale” – ignoring that many in the creative industries have had their work unlawfully used to train AI models, and are not happy about it.
Nonetheless, having genuflected to OpenAI’s company line, they go on to explore the imminent risks Sora 2 presents for individuals and enterprises. “Attackers can abuse Sora 2 to enhance social engineering tactics and manipulate even some of the more adept users with convincing deepfakes. OpenAI already had to tighten the guardrails against deepfakes in Sora 2 after the actors’ union SAG-AFTRA lodged a complaint.”
The piece quotes Ben Colman, CEO of deepfake detection firm Reality Defender, who says that in the absence of regulations, the risk of identity fraud, financial fraud and threats to public safety are high and getting higher, as OpenAI pushes its products into an unprepared regulatory environment.
Colman notes improvements to voice authenticity in Sora 2 as a particular concern, since more realistic cadence and expression make it easier to simulate longer conversations. Think through the risks and the threats become apparent: deepfake bosses could call and order wire transfers to offshore accounts; deepfake healthcare practitioners could defraud patients; legal teams could no longer be certain whether a piece of video evidence is real or synthesized.
None of this is causing the tech industry to slow its pace: Google’s generative video tool, Veo, is said to be able to produce even more realistic video than Sora 2.
Reality Defender looks at 3 deepfake threat vectors noted by MAS
Reality Defender is itself a frequent resource for insights on deepfake developments, and a new blog post from Chief Revenue Officer Brian Levin reflects on three factors driving the deepfake threat to financial institutions, as identified by the Monetary Authority of Singapore (MAS).
The first finding is that biometric authentication systems are being defeated. Levin cites examples from Indonesia, Thailand and Vietnam of biometric systems being beaten with AI-generated deepfake photos and stolen assets.
“MAS recommends implementing liveness detection techniques that analyze motion, texture, and 3D depth during authentication,” he says. “Organizations should prompt users to perform specific actions during verification rather than accepting static images. For non-facial biometrics like fingerprint or palm vein recognition, detection techniques must be tailored to identify synthetic reproductions of those specific characteristics.”
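In practice, that guidance maps onto a fairly simple control flow. The sketch below is a minimal, purely illustrative Python example of an active challenge-response liveness check; the function names, score fields and thresholds are assumptions made for illustration, not MAS guidance or any vendor’s API.

```python
# Minimal sketch of an active (challenge-response) liveness check.
# All names, scores and thresholds are illustrative placeholders, not a real API.
import random
from dataclasses import dataclass

CHALLENGES = ["turn_head_left", "turn_head_right", "blink_twice", "smile"]

@dataclass
class LivenessScores:
    motion: float   # how well the requested action was performed (0..1)
    texture: float  # screen/print-artifact naturalness (0..1)
    depth: float    # 3D depth consistency of the face (0..1)

def issue_challenge() -> str:
    """Prompt the user to perform a random action instead of accepting a static image."""
    return random.choice(CHALLENGES)

def analyze_frames(frames: list, challenge: str) -> LivenessScores:
    """Stand-in for a PAD model that scores motion, texture and 3D depth.
    A real system would run face tracking and presentation attack detection here."""
    return LivenessScores(motion=0.0, texture=0.0, depth=0.0)  # dummy values

def is_live(s: LivenessScores, motion_min=0.8, texture_min=0.7, depth_min=0.7) -> bool:
    """Accept only if the challenge was performed and texture/depth look genuine."""
    return s.motion >= motion_min and s.texture >= texture_min and s.depth >= depth_min

challenge = issue_challenge()
frames = []  # frames captured from the user's camera while the action is performed
print("challenge:", challenge, "| live:", is_live(analyze_frames(frames, challenge)))
```

The important property is that a static photo or a pre-recorded clip cannot anticipate which action will be requested, which is why MAS favors this over accepting still images.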
The second finding is that “deepfake technology amplifies traditional social engineering by creating hyper-realistic impersonations of executives, colleagues, and trusted contacts during video calls and voice communications.” The now-classic example comes from Hong Kong, where a video meeting was hijacked with an injection attack and deepfaked bosses ordered an Arup employee to transfer 25 million dollars to outside accounts. The current rash of employment fraud also falls into this category, as deepfake candidates infiltrate remote recruitment processes.
“MAS emphasizes that organizations must implement multi-factor authentication for high-privilege accounts and high-risk activities, including wire transfers and access to sensitive data,” Levin says.
The final finding from MAS is that “misinformation campaigns target market confidence.” Scammers impersonate public figures or broadcast fake footage of events to manipulate markets. As such, “MAS recommends implementing monitoring tools to detect deepfake-based brand abuse and impersonation attempts across digital channels, including social media, websites, video platforms and news sources.”
The overarching message is that deepfakes are not a tomorrow problem, and organizations must act now to bolster defenses.
Secured Signing integrates Reality Defender deepfake detection
Secured Signing, which offers digital signature and remote online notarization (RON) services, has announced that it will integrate Reality Defender’s deepfake detection layer into its security measures to launch an exclusive deepfake detection tool called Realify.
A release says Realify uses Reality Defender’s comprehensive, multi-modal detection technology to analyze a signer’s video and audio, making sure they are authentic before and during an online meeting.
Its automatic deepfake verification process features simple UX for identity verification through a facial scan, which can be repeated at any time throughout the process. Realify’s AI models analyze facial and audio data to generate a real-time risk score and provide a detailed report.
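Reality Defender does not publish Realify’s scoring internals, so the following is only a generic sketch of how per-modality detector outputs – a face-video score and an audio score – could be fused into a single session risk score; the weights, thresholds and function names are assumptions, not the product’s actual logic.

```python
# Generic sketch of fusing per-modality deepfake scores into one session risk score.
# This is NOT Reality Defender's published method; weights and thresholds are illustrative.
def fuse_risk(video_score: float, audio_score: float,
              w_video: float = 0.6, w_audio: float = 0.4,
              escalate_at: float = 0.9) -> float:
    """Each input is a 0..1 probability that the modality is synthetic.
    Use a weighted average, but escalate to the strongest single signal when
    either modality is highly confident, so a strong audio hit is not diluted."""
    weighted = w_video * video_score + w_audio * audio_score
    strongest = max(video_score, audio_score)
    return strongest if strongest >= escalate_at else weighted

def risk_band(score: float) -> str:
    """Map the fused score to a band for the signer-facing report."""
    return "high" if score >= 0.85 else "medium" if score >= 0.5 else "low"

# Example: a clean-looking video but a highly suspicious voice track still flags the session.
score = fuse_risk(video_score=0.10, audio_score=0.92)
print(score, risk_band(score))
```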
“We live in an era where the authenticity of digital interactions is constantly under attack,” says Ben Colman, CEO of Reality Defender. “Sophisticated AI-generated media and deepfakes pose a direct threat to the high-trust processes that underpin our economy, such as legal agreements and notarizations. This partnership with Secured Signing is a critical step in building a broader ecosystem of trust.”
Reality Defender is celebrating its recent induction into the J.P. Morgan 2025 Hall of Innovation, recognizing its innovations and measurable business impact.
Pindrop partners with BT to detect audio deepfakes for UK enterprise customers
On the voice deepfake file, Pindrop has announced a strategic partnership with BT Group that will see it deploy its voice security solutions to BT Group’s enterprise customers across the UK.
A release says BT will integrate Pindrop Protect and Pindrop Passport, the company’s patented authentication and fraud detection technologies, into its enterprise portfolio, with the aim of reducing operational costs while enhancing security posture.
“With 1 in 106 calls already showing signs of deepfake activity, threats like synthetic speech and agentic AI are rewriting the fraud playbook,” says Bucky Wallace, chief revenue officer for Pindrop. “Together with BT, we’re giving UK enterprises a modern defence – advanced voice intelligence that continuously adapts, spots risk earlier, and future-proofs contact centres for both security and customer experience.”
Pindrop technology combines device recognition, “phoneprinting” technology, behavioral analysis and synthetic deepfake detection. Its flexible integration architecture was a factor in the partnership, helping ensure that BT can support existing and prospective customers across varied contact centre environments.
More deepfake detection tools come online
3DiVi has launched its 3DiVi Deepfake Detector online demo, which a release says “allows users to upload videos or connect a live camera stream and determine frame by frame whether the content is a deepfake.” It offers unlimited testing and is freely available to anyone.
“More than a demo, 3DiVi Deepfake Detector is an API-ready module that can be seamlessly integrated into existing security, media, and verification platforms, enabling automated detection at scale,” the company says.
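3DiVi’s actual API is not documented in the release, but frame-by-frame scoring through an API-ready module generally follows the same shape: score each frame, then aggregate into a per-video verdict. The client class, method names and thresholds below are hypothetical placeholders for illustration.

```python
# Schematic of frame-by-frame deepfake scoring through a detection API.
# DetectorClient and its method are hypothetical, not 3DiVi's actual interface.
from statistics import mean

class DetectorClient:
    """Stand-in for an API-ready module that scores a single frame (0 = real, 1 = fake)."""
    def score_frame(self, frame) -> float:
        return 0.0  # a real client would send the frame to the detection service

def score_video(frames, client, frame_threshold=0.5, video_threshold=0.3):
    """Score every frame, then flag the video if enough frames look synthetic."""
    scores = [client.score_frame(f) for f in frames]
    if not scores:
        return {"deepfake": False, "mean_score": 0.0, "flagged_frames": 0}
    flagged = sum(s >= frame_threshold for s in scores)
    return {
        "deepfake": flagged / len(scores) >= video_threshold,
        "mean_score": mean(scores),
        "flagged_frames": flagged,
    }

print(score_video(["frame-1", "frame-2"], DetectorClient()))
```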
A team from Australia’s national science agency, CSIRO, Federation University Australia and RMIT University has developed a method to improve the detection of audio deepfakes. A news release says the technique, called Rehearsal with Auxiliary-Informed Sampling (RAIS), automatically selects and stores a “small, but diverse set of past examples, including hidden audio traits that humans may not even notice,” to help the algorithm internalize new traits without forgetting old ones. The goal is a richer mix of training data.
“RAIS employs a label generation network to produce auxiliary labels, guiding diverse sample selection for the memory buffer. Extensive experiments show RAIS outperforms state-of-the-art methods, achieving an average Equal Error Rate (EER) of 1.953 percent across five experiences.”
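For readers unfamiliar with the metric, Equal Error Rate is the operating point at which the rate of genuine samples wrongly flagged equals the rate of fakes that slip through; lower is better. Below is a minimal, generic NumPy sketch of how EER is computed from detector scores and ground-truth labels – illustrative only, not the RAIS authors’ evaluation code.

```python
# Minimal sketch of computing Equal Error Rate (EER) from detector scores and labels.
# Generic illustration only; not the RAIS authors' evaluation code.
import numpy as np

def eer(scores: np.ndarray, labels: np.ndarray) -> float:
    """scores: higher means 'more likely fake'; labels: 1 = fake, 0 = genuine.
    Sweep every candidate threshold and return the error rate at the point
    where the false-alarm rate and the miss rate are closest to equal."""
    best_gap, best_eer = float("inf"), 1.0
    for t in np.unique(scores):
        pred_fake = scores >= t
        false_alarm = np.mean(pred_fake[labels == 0])  # genuine wrongly flagged as fake
        miss = np.mean(~pred_fake[labels == 1])        # fakes that slip through
        gap = abs(false_alarm - miss)
        if gap < best_gap:
            best_gap, best_eer = gap, (false_alarm + miss) / 2
    return best_eer

# Toy example with well-separated score distributions
rng = np.random.default_rng(0)
labels = np.array([0] * 500 + [1] * 500)
scores = np.concatenate([rng.normal(0.3, 0.1, 500), rng.normal(0.7, 0.1, 500)])
print(f"EER: {eer(scores, labels):.3%}")
```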
Should we be afraid of consensus? Pluribus and the horrors of mainstream happiness
Pluribus explores the tyranny of mainstream thought and the danger of treating happiness as the main goal of all human existence.
Researchers isolate memorization from problem-solving in AI
Basic arithmetic ability lives in the memorization pathways, not logic circuits.
Cryptoqueen who fled China for London mansion jailed over £5bn Bitcoin stash
Qian Zhimin bought cryptocurrency using funds stolen from thousands of Chinese pensioners, say police.
Washington state confronts expanding surveillance system as Flock draws fire

In Washington state, a series of court rulings, public records disclosures, investigative reports, and municipal policy decisions have converged to reveal the scale of the Flock Safety camera network and the complex ways its data has been shared across agencies, inside and outside the state.
The developments have heightened concerns about surveillance, civil immigration enforcement, reproductive freedom, and the limits of local control over new policing technologies.
The clearest shift came from a Skagit County Superior Court ruling last week in which Judge Elizabeth Yost Neidzwski ordered police departments to release images captured by Flock Safety cameras under Washington’s Public Records Act.
The ruling held that the vehicle images, collected automatically as cars pass roadside camera fixtures, constitute public records regardless of whether they were ever used in specific investigations.
The case originated from a records request submitted by a private citizen who sought access to a half-hour of data from Flock cameras operated by police departments in Sedro-Woolley and Stanwood.
The cities sued to block the request, arguing that the images were exempt, and warning that releasing them might infringe individual privacy or undermine investigative tools.
The court rejected those claims and pointed to the scale of the surveillance, which captures the movements of thousands of drivers not suspected of wrongdoing.
The judge’s ruling dealt directly with the nature of modern automated surveillance.
Unlike red light cameras or speed sensors, which activate only when a violation occurs, Flock cameras continuously record every car that passes, producing streams of timestamped images that can show not only plates, but also vehicle features and occupants.
The breadth of that record was central to the court’s conclusion that the data is public. The ruling immediately raised questions for dozens of other police agencies across Washington that now operate similar systems.
Sedro-Woolley and Stanwood had already deactivated their cameras while the dispute was pending, but the ruling makes that kind of voluntary suspension more consequential.
Police departments around the state are now consulting with legal counsel to determine whether continuing to use Flock cameras means they must accept that their footage is subject to broad public disclosure.
Sedro-Woolley had installed its first Flock cameras earlier this year and emphasized their value in finding stolen vehicles, missing persons, and suspected offenders. City officials highlighted early successes, including a robbery arrest and the recovery of an Alzheimer’s patient who had gone missing.
The system was also presented as a cost-efficient enhancement to city policing, as Flock cameras require a fraction of the annual cost of hiring additional staff. But those benefits have been overshadowed by the legal and ethical questions surrounding data access and control.
For now, the physical camera hardware remains in place in Sedro-Woolley, but the system has been disabled and is not capturing images.
The concerns extend far beyond the legal question of whether Flock images are public records. In October, the University of Washington Center for Human Rights released research showing that local Flock systems across the state had been accessed by Border Patrol and other federal agencies involved in immigration enforcement in ways that may violate state law.
Washington’s Keep Washington Working Act, passed in 2019, restricts law enforcement agencies from using state or local resources to support civil immigration enforcement. However, Flock system records obtained by researchers show that Border Patrol ran thousands of searches on data from at least 31 Washington jurisdictions.
Some of those searches occurred in cities that had not knowingly granted access. Others appear to have resulted from how Flock’s network sharing features were configured.
The University of Washington researchers described three kinds of access. The first is direct sharing, where one police agency grants access to another through a one-to-one network connection.
The second is indirect or back door access, where agencies gain entry because another organization in the network has enabled broad sharing rules.
The third is side door access, where local police run searches on behalf of outside agencies, including federal immigration authorities. In some cities, audits showed instances where local officers conducted searches tagged with terms such as “ICE” or “immigration” despite state restrictions.
Auburn, which operates a network of Flock cameras, released a statement on October 20 acknowledging that federal immigration officers had gained access to its system through the Flock network’s national lookup function.
The department said it was not aware access had been granted and immediately disabled the feature once notified. Auburn now conducts monthly reviews of all external access to its Flock network and has pledged to permanently revoke access for any agency found to be using the system for immigration enforcement.
The city emphasized that it remains committed to lawful policing while respecting privacy rights and state law.
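Auburn’s monthly review is, at bottom, a log-audit exercise. The sketch below shows in generic terms how such a review could scan an exported lookup log for external searches whose stated reason suggests immigration enforcement; the column names, file name and keyword list are hypothetical, not Flock’s actual export schema.

```python
# Generic sketch of auditing an exported lookup log for immigration-related searches.
# Column names, file name and keywords are hypothetical, not Flock's actual export schema.
import csv

FLAG_TERMS = {"ice", "immigration", "cbp", "border patrol"}

def flag_external_searches(log_path: str, home_agency: str) -> list[dict]:
    """Return rows for external-agency searches whose stated reason matches a flagged term."""
    flagged = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # assumed columns: agency, reason, timestamp
            is_external = row.get("agency", "") != home_agency
            reason = row.get("reason", "").lower()
            if is_external and any(term in reason for term in FLAG_TERMS):
                flagged.append(row)
    return flagged

# Example (hypothetical export file): flag_external_searches("network_audit.csv", "Auburn PD")
```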
Other jurisdictions have taken similar steps. Renton reported discovering that federal immigration agents could search its Flock data through another state’s law enforcement agency. The department suspended all external sharing until the source of access could be determined.
In Lakewood and several other cities, police officials said they were unaware of how access had been enabled and moved to restrict or review sharing settings.
The rapid expansion of Flock’s business model has complicated local oversight. The company now operates tens of thousands of cameras nationwide and encourages agencies to share data regionally or across state lines.
Flock’s pitch to police departments has emphasized cost efficiency, ease of installation, and investigative gains.
But as the University of Washington Center for Human Rights report notes, the network’s scale makes it difficult for agencies to determine who has access to their data.
The company allows local administrators to configure sharing, but many police departments either do not fully understand the implications of those settings or rely on default configurations that enable broader access than intended.
Flock’s network also incorporates private entities, including neighborhood associations, shopping malls, and big-box retail properties. Many of those private customers share their camera data with police.
In some cases, police agencies have access to private camera networks without disclosing those relationships publicly. The layered nature of these networks makes it possible for a search initiated anywhere in the country to draw on data from Washington without clear visibility into how that connection was formed.
The debate in Washington is unfolding at a time when reproductive rights and gender-affirming healthcare protections are being tested nationwide. Privacy advocates warn that travel pattern data derived from license plate surveillance could be used to target people seeking care across state lines.
Despite the public pressure and legal scrutiny, many police departments continue to defend the technology. Chiefs in Sedro-Woolley, Mount Vernon, and other cities describe Flock cameras as valuable public safety tools that do not use facial recognition and capture only vehicles, not biometric identifiers.
They frame the cameras as extensions of routine policing work, arguing that the data is only accessed during investigations and automatically deleted after 30 days. But those assurances assume users understand and control the system as configured.
The University of Washington report shows several agencies either never ran network audits or did not know how to access them, leaving oversight gaps.
The statewide conversation is now shifting toward legislative action. Other states, including Virginia and Illinois, have enacted limits on data retention and cross-jurisdiction sharing. Washington currently has no equivalent statutory framework governing local use of license plate surveillance systems.
Governor Bob Ferguson recently issued an executive order emphasizing the importance of protecting private data held by state agencies and reaffirming Washington’s commitments to immigrant rights. But the order does not directly apply to municipal police departments.
State legislators are signaling interest in addressing the issue in the upcoming session.
iRobot’s revenue has tanked and it’s almost out of cash
Things continue to look bleak for the original robot vacuum maker. iRobot’s third-quarter results, released last week, show that revenue is down and “well below our internal expectations due to continuing market headwinds, ongoing production delays, and unforeseen shipping disruptions,” said Gary Cohen, iRobot CEO, in a press release. This meant they had to spend […]