Businesses need biometric orchestration to handle AI fraud, system complexity: Aware

The need for biometric orchestration is nearly universal among businesses using biometrics as they attempt to mitigate the surge in AI-driven fraud, according to a new survey from Aware.

The company’s 24-page report on “The State of Biometric Security in the Age of AI Fraud” shows 98 percent of businesses aligned on that necessity. They are motivated by the frequency of AI-driven fraud attacks, with nearly 50 percent experiencing them within the past year and nearly 90 percent concerned about such attacks targeting their biometric systems. More than half say they lost revenue due to fraud incidents involving AI, including deepfakes, synthetic identities and biometric injection attacks. Nearly as many see such attacks causing damage to their brand and reputation.

Aware surveyed 500 leaders at companies with 50 or more employees using biometrics in the U.S., UK and Brazil to compile the results.

Three-quarters of those surveyed already use biometrics or liveness detection in their fraud prevention strategies, including over 60 percent who use biometrics specifically to prevent identity fraud.

Businesses need to orchestrate these biometrics due to system complexity, illustrated by the fact that each business uses an average of three biometrics vendors. Nearly 40 percent use multiple vendors, and nearly 40 percent use four or five.

More than half spend between $138,000 and $688,000 per year on biometrics to combat fraud, while more than a third spend between $688,000 and $1.4 million.

“Organizations are no longer asking if they need biometrics — they’re already managing complex ecosystems and asking how to make them work together,” says Ajay Amlani, CEO of Aware. “Biometric orchestration is emerging as the critical layer that helps security teams stay ahead of AI-driven threats while maintaining performance, accuracy, and user experience. It turns complexity into an advantage by enabling smarter, faster identity decisions.”

Regulatory compliance is another motivating factor, with more than 95 percent seeing a benefit to using biometrics in that area. The second most commonly seen benefit of biometrics adoption is not preventing fraudulent account creation (58.6 percent) but reducing employee login and MFA fraud (64 percent).

“Deepfakes and AI-powered attacks are fundamentally changing how identity can be manipulated,” says Maxine Most, CEO of The Prism Project, in the company announcement. “To keep pace, organizations must rethink how identity is secured and invest in intelligent systems. Biometric orchestration is a critical layer that brings those systems together into a cohesive, effective defense.”

The Prism Project hosted the Deepfake Summit last month to convene stakeholders in biometrics, digital identity security and deepfake protection.

The report also highlights the importance of independent technology validation, quoting BixeLab CEO Ted Dunstone on the topic. Aware passed a Level 3 biometric liveness detection evaluation by BixeLab in February.

The attacks on Sam Altman are a warning for the AI world

Before allegedly throwing a Molotov cocktail at OpenAI CEO Sam Altman’s home, the 20-year-old accused attacker wrote about his fear that the AI race would cause humans to go extinct, The San Francisco Chronicle found. Two days later, Altman’s home appeared to be targeted a second time, according to The San Francisco Standard. Only a […]

Philippines launches broad crackdown on deepfakes as AI drives identity fraud surge

In the war against fakery, the Philippines is on the frontline, having launched a coordinated, whole‑of‑government campaign against disinformation, deepfakes and digitally manipulated media.

The government has signed a memorandum of agreement (MOA) that formalizes joint action by the Department of Justice (DOJ), the Presidential Communications Office (PCO) and the Department of Information and Communications Technology (DICT).

The initiative responds to a sharp rise in global threat levels. iProov, whose technology is deployed in the Philippines and Vietnam, processed more than one million authentication checks daily in 2025 as enterprises confront synthetic identity attacks driven by generative AI.

The company cited Gartner research showing that 62 percent of organizations experienced a deepfake attack in the past year, underscoring how identity manipulation has become a primary entry point for cybercriminals.

iProov’s threat intelligence unit recorded a 2,665‑percent surge in native virtual‑camera attacks and a 300 percent rise in face‑swap attempts last year. Separate research found that only 0.1 percent of consumers could reliably detect deepfakes, reinforcing concerns that the public is increasingly vulnerable to AI‑generated deception.

DOJ Secretary Frederick Vida, PCO Secretary Dave Gomez and DICT Secretary Henry Aguda signed the MOA at the DOJ headquarters in Manila, establishing an inter‑agency framework intended to protect public safety and national security from malicious information operations.

The PCO will lead public information efforts, the DOJ will oversee legal enforcement, and the DICT will provide technological support, cybersecurity capabilities and monitoring systems.

Vida described the MOA as a “pivotal step” in defending the country from digitally mediated falsehoods, warning that deepfakes and coordinated disinformation campaigns can erode trust, sow division and trigger confusion during critical events. He stressed that the government will distinguish between criminal disinformation and constitutionally protected speech.

Aguda said the DICT will focus on cybersecurity, digital infrastructure and coordination with technology platforms, including tools that allow citizens to report false content. “This is no longer just a rumor. Now, lies can look real,” he said, referring to the rapid spread of deepfakes.

AI and deepfakes are warping public safety in Southeast Asia

For the Philippine government, the new MOA signals a recognition that combating disinformation now requires legal, technological and communications strategies working in tandem — and that the threat landscape is being reshaped by AI at unprecedented speed.

Dominic Forrest, iProov’s CTO, spoke on the urgency in an interview with Cybersecurity Asia. “AI‑driven deepfakes and synthetic identities are no longer theoretical risks,” he told the publication. “They are being actively weaponised to move money and take over accounts.”

He noted that the problem is especially pronounced in Southeast Asia, where explosive digital growth is outpacing regulatory maturity. With millions of new users enrolling in mobile banking, e‑government services and online marketplaces each month, the region has become a prime target for fraudsters leveraging synthetic media and AI‑powered identity attacks.

iProov’s Security Operations Center (iSOC) observed live operations of Grey Nickel, a group that systematically targeted organizations in the Asia-Pacific region. The fraudsters employed advanced face-swap technology, metadata manipulation and injection techniques aimed at bypassing single-frame liveness-based verification systems used by banks and payment platforms.

Forrest says regulators and financial institutions in Asia need to move away from traditional active liveness checks, which generative AI can now mimic with convincing ease. These methods are also powerless against injection attacks, where fraudsters bypass the camera entirely. He argues that passive liveness — such as iProov’s Dynamic Liveness technology — offers a more resilient alternative.

iProov’s technology has gained traction across government and finance, including deployments with UnionDigital Bank in the Philippines, Vietnam’s MoMo platform and Raiffeisen Bank in Czechia.

Has Google’s AI watermarking system been reverse-engineered?

A software developer claims to have reverse-engineered Google DeepMind’s SynthID system, showing how AI watermarks can be stripped from generated images or manually inserted into other works. Google says the claim isn’t true. The developer, going by the username Aloshdenny, has open-sourced their work on GitHub and documented their process, claiming all it […]

The Best AI-Driven Market Intelligence Platforms for Institutional Investors

This article explores the leading AI-driven market intelligence platforms transforming how institutional investors analyse and act on real-time information. It highlights providers like Permutable AI, RavenPack, and Accern, explaining their strengths and use cases. Aimed at hedge funds, asset managers, and banks, it shows how to build a modern intelligence stack…
