Vietnam mandates face biometrics for mobile device registration

A facial recognition process is now required for new mobile device registrations in Vietnam.

The policy took effect April 15 under a circular issued March 31, following a draft released in January, and is intended to combat identity fraud and other illegal activities related to SIM card and mobile device ownership.

The registration process entails submitting one’s name, date of birth, national ID number and face biometrics.

Authorities say the measure, which is in line with a circular of the Ministry of Science and Technology, will be particularly useful to curb identity theft in situations where individuals lose their devices or where ownership changes without a deactivation of the previous registration.

Per the new regulation, telecom companies have up to two hours to detect a device change and block outbound services until the owner of the new device completes a facial recognition-based registration.

Once a block is in place, the affected individual has up to 30 days to complete the biometric verification or face suspension of both inbound and outbound services. If verification is still not completed five days after the full service suspension, the telco is required to terminate the subscription.
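The blocking and suspension timeline described above can be sketched as a simple status lookup. This is an illustrative model only; the function, constant, and status names are assumptions, not drawn from the circular itself:

```python
from datetime import timedelta
from enum import Enum

class SubStatus(Enum):
    ACTIVE = "active"
    OUTBOUND_BLOCKED = "outbound_blocked"  # device change detected, awaiting verification
    FULLY_SUSPENDED = "fully_suspended"    # 30-day verification window elapsed
    TERMINATED = "terminated"              # 5 days after full suspension

# Deadlines described in the circular (constant names are illustrative)
DETECTION_WINDOW = timedelta(hours=2)     # telco must block outbound services
VERIFICATION_WINDOW = timedelta(days=30)  # owner must complete face verification
TERMINATION_GRACE = timedelta(days=5)     # after full suspension, subscription ends

def status_after(block_elapsed: timedelta, verified: bool) -> SubStatus:
    """Return the subscription status given time elapsed since the outbound block."""
    if verified:
        return SubStatus.ACTIVE
    if block_elapsed <= VERIFICATION_WINDOW:
        return SubStatus.OUTBOUND_BLOCKED
    if block_elapsed <= VERIFICATION_WINDOW + TERMINATION_GRACE:
        return SubStatus.FULLY_SUSPENDED
    return SubStatus.TERMINATED
```

Completing the facial verification at any point before termination restores full service, which is why `verified` short-circuits the elapsed-time checks.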

The new policy requires not only new SIM cards to be registered using facial recognition, but also identities linked to newly registered devices to be verified against the national population register and the resident database.

With the new measure, authorities say it is possible to conduct SIM card registration using the VNeID national digital ID application, and users can also verify the number of SIM cards linked to their ID using the platform. Meanwhile, they can complete the process either in physical offices or through other mobile applications made available by telcos.

The new measure is part of the Vietnamese government’s broader effort to combat identity theft, a problem that also extends to social media use. In February, the government announced measures aiming to mandate digital ID verification for social media use.

Nearly 40% of Gen Z report fraud losses as scams shift online: TransUnion

Gen Z is increasingly being targeted by online scammers: Nearly 40 percent of Gen Z consumers reported losing money to digital fraud in the past year.

Members of this generation are more likely to use gambling and betting platforms, social platforms such as forums and dating apps, and video games – all of which are seeing a rise in fraud attempts, according to a new report from TransUnion.

The answer to the spread of fraud is strengthening identity defenses, the consumer credit agency argues in its H1 2026 Top Fraud Trends Update.

Although digital fraud rates are declining overall, more sophisticated fraud schemes, driven by the rise of generative AI and synthetic identities, are causing greater losses for consumers, according to Naureen Ali, U.S. head of fraud at TransUnion.

“Addressing this requires a new generation of identity-centric defenses that combine advanced analytics, adaptive authentication and multilayered fraud detection,” says Ali. “Organizations must match fraudsters’ technological innovation to stay ahead of rapidly changing schemes.”

The firm found that about one in six U.S. consumers lost money to scams conducted via email, phone calls, texts, or online channels in 2025, with median losses reaching US$2,307. Fraudulent credit card charges were the leading cause of digital fraud losses, accounting for a third of cases. Identity theft followed closely at 29 percent, while account takeover (ATO) affected one in four (27 percent) victims.

TransUnion also published global fraud data, surveying a total of 24 countries. The company found that account creation has become a growing target for fraudsters across the world.

Globally in 2025, more than eight percent of account creation attempts were flagged as suspected digital fraud, an 18 percent jump from the year prior.

“Instead of bypassing controls during account use, they increasingly exploit vulnerabilities at account creation, concealing identity manipulation until losses mount,” says Ali. The solution is to detect sophisticated identity risks at onboarding, she adds.

Over a quarter of global consumers said they lost money to digital fraud, reporting a median loss of $1,671. Money mules and third-party seller scams on legitimate ecommerce sites were the leading causes of loss (24 percent), followed by voice phishing or vishing (23 percent).

The report also offers data on other types of fraud, including phishing, smishing, unemployment fraud and social engineering.

AI voice fraud draws new congressional scrutiny

U.S. Sen. Maggie Hassan is escalating congressional scrutiny of the fast-growing AI voice-cloning industry, pressing four major companies to explain what they are doing to stop scammers from turning synthetic speech tools into engines of fraud.

In letters dated April 16 to ElevenLabs, LOVO, Speechify, and VEED, the New Hampshire Democrat and ranking member of the Joint Economic Committee demanded detailed answers about what they are doing to prevent bad actors from using their services.

Hassan wants to know whether the companies monitor for scam-related uses, verify that a person has consented before their voice is cloned, detect attempts to imitate public figures and minors, watermark AI-generated audio, preserve provenance information, and report bad actors to law enforcement.

The letters amount to more than another general warning about the harms of AI. They reflect a more specific congressional concern that voice models have become highly usable, widely accessible, and increasingly difficult for ordinary people to detect.

“In recent years, global criminal networks have used deepfake voice programs, along with other new AI tools, to target more people with increasingly personalized and believable digital scams, fueling a booming scam industry that surpasses the global drug trade as an illicit industry,” Hassan told the companies.

“Protecting Americans from these financial losses will require collaboration between the public and private sectors, and AI companies [including yours] are on the frontlines of this effort,” Hassan added.

Hassan repeatedly frames the problem in operational terms. She is not only asking whether companies prohibit fraud in their terms of service, but whether they enforce those policies, how often they update scam phrase lists, how many violators they have caught, when they ban users, whether those users can return under new accounts, and whether law enforcement receives information that the public does not.

That focus matters because the threat is no longer hypothetical. Hassan pointed out that the Federal Bureau of Investigation’s (FBI) 2025 Internet Crime Report, released this month, shows victims lost $893 million to AI-related scams in 2025, a figure that underscores how quickly synthetic media is being absorbed into familiar fraud schemes.

Cryptocurrency and AI-related scams were among the costliest, the FBI said.

The FBI also said the Internet Crime Complaint Center received 1,008,597 total complaints, an increase from 859,532 in 2024. Phishing/spoofing, extortion, and investment schemes were the most frequently reported complaints. Americans over 60 reported approximately $7.7 billion in losses, up 37 percent from 2024.

Industry and consumer advocates have been warning about the same trend. Consumer Reports said in its March 2025 assessment of AI voice-cloning products from Descript, ElevenLabs, Lovo, PlayHT, Resemble AI, and Speechify that it “found a majority of the products assessed did not have meaningful safeguards to stop fraud or misuse of their product.”

Consumer Reports said the platforms should automatically flag and prohibit audio containing phrases commonly used in scams and other fraud, a recommendation that closely tracks the questions Hassan posed in her letters to some of the same companies.
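As a rough illustration of the safeguard Consumer Reports describes, a platform could transcribe submitted audio and scan the transcript against a list of phrases commonly used in scams before allowing synthesis. Everything below is hypothetical: the phrase list, function names, and simple substring matching are assumptions for the sketch, not any vendor's actual implementation:

```python
# Hypothetical scam-phrase screening for a voice-cloning platform.
# A real system would use a curated, regularly updated phrase list and
# fuzzier matching; this sketch uses plain substring checks.
SCAM_PHRASES = [
    "wire the money",
    "gift card",
    "grandma, i'm in trouble",
    "do not tell anyone",
    "irs agent",
]

def flag_scam_phrases(transcript: str) -> list[str]:
    """Return any scam-associated phrases found in the transcript."""
    text = transcript.lower()
    return [phrase for phrase in SCAM_PHRASES if phrase in text]

def is_blocked(transcript: str) -> bool:
    """Block synthesis if the transcript contains any flagged phrase."""
    return bool(flag_scam_phrases(transcript))
```

In practice such a filter would sit alongside, not replace, the identity-focused controls (consent verification, authentic-recording checks) that Hassan's letters also probe, since scammers can rephrase around any fixed list.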

Those letters lay out why lawmakers are alarmed. Hassan cited research finding that people are poorly equipped to identify AI-generated voice clones and noted that these systems can create convincing synthetic voices from only a brief audio sample.

Hassan highlighted how easy it has become to pick from prebuilt voice libraries or generate synthetic voices in many languages.

ElevenLabs, for example, is described as offering thousands of voices in dozens of languages; LOVO more than 500 voices in 100 languages; Speechify more than 1,000 voices in over 60 languages; and VEED more than 35 voices capable of speaking dozens of languages.

Hassan said romance scams, impersonation scams, and so-called grandparent scams have manipulated victims into believing a loved one is in danger.

She noted a 2025 case involving New Hampshire families who were allegedly tricked by an AI-generated imitation of a relative’s voice, as well as 2024 reports from Merrimack County, New Hampshire, where residents received scam calls from voices made to sound like family members or law enforcement.

Voice cloning has also been used against businesses to bypass voice-based authentication or to impersonate executives and authorize transfers of large sums of money.

Another striking element of Hassan’s inquiry is how directly it targets platform design choices. Several of her questions ask whether the companies require real-time audio for verification, whether they demand authentic non-public recordings before allowing a clone, and what mechanisms they use to determine whether submitted audio is genuine.

She also wants to know whether the companies detect when users try to create “no-go” voices, such as politicians and celebrities, and whether they can tell when a user succeeds in bypassing those safeguards anyway.

Her letters also probe whether the companies permit the cloning of minors’ voices or the creation of synthetic child-like voices, and if so, what protections they have in place against exploitative misuse.

Hassan’s line of questioning dovetails with one of the central critiques of the sector: many voice-cloning products historically relied more on user promises than on meaningful technical guardrails.

Consumer Reports said most leading products it examined lacked strong technical mechanisms to stop nonconsensual voice cloning and recommended both identity-focused controls and automatic scam phrase detection.

Hassan is asking the companies whether they have gone beyond self-attestation and basic policy language to adopt the kind of systems critics say are necessary.

Hassan’s oversight push comes as Congress considers a more formal legislative answer. Senate bill S.3982, the AI Fraud Accountability Act of 2026, would establish a federal framework aimed squarely at digital impersonation fraud.

Introduced last month by Republican Sen. Tim Sheehy and Democratic Sen. Lisa Blunt Rochester and referred to the Senate Committee on Commerce, Science and Transportation, the bill would amend the Communications Act of 1934 to create a criminal prohibition on using a “digital impersonation” in interstate or foreign communications with intent to defraud someone of money, documents, or anything of value.

A companion bill was introduced in the House by Republican Rep. Vern Buchanan, vice chairman of the House Committee on Ways and Means and chairman of the House Democracy Partnership, and Democratic Rep. Darren Soto.

Both bills define digital impersonation broadly to cover convincingly fabricated or altered audio or visual depictions of either an identifiable real person or even an imaginary person presented as genuine.

The bill would authorize penalties of up to three years in prison, include forfeiture provisions, and establish extraterritorial federal jurisdiction, a notable provision given that many scam operations originate abroad.

“We are seeing a disturbing rise in AI-generated voice clones and deepfake videos that convincingly impersonate loved ones, business executives, government officials, and trusted institutions to steal money,” Buchanan said.

“Congress must act to stay ahead of these threats by modernizing federal law to keep up with emerging technology. The AI Fraud Accountability Act makes clear that if you use AI to defraud Americans, you will be prosecuted,” Buchanan added.

The AI Fraud Accountability Act would create a civil and regulatory enforcement route through the Federal Trade Commission (FTC). A violation would be treated as an unfair or deceptive act or practice enforceable by the FTC. The bill is structured not only to punish fraudsters after the fact, but also to make digital impersonation fraud a matter of consumer protection enforcement.

The bill also contains a standards and governance component. It would require the Secretary of Commerce, acting through the National Institute of Standards and Technology (NIST), to convene a working group within 30 days of enactment to develop best practices for recognition, detection, prevention, and tracing of digital impersonations used in fraud.

The working group would include representatives from the Department of Justice, FTC, federal, state, and local law enforcement, private sector industries such as financial services, telecommunications, health care, retail, and digital platforms, as well as scientists and engineers with expertise in digital forensics and AI.

NIST would then be required to publish best practices and update them annually.

That structure is revealing. Hassan’s letters seek data from the companies about what safeguards exist, how effective they are, and where the gaps remain. The bill, by contrast, tries to build the enforcement and technical architecture that would follow from such findings.

In that sense, Hassan’s letters and the AI Fraud Accountability Act are complementary. Hassan is gathering the kind of information Congress would need to judge whether voluntary industry practices are working. The AI Fraud Accountability Act, if law, would supply the beginnings of a statutory answer if lawmakers conclude those practices are not working.