Australian regulators come together on privacy, online safety

The relationship between various regulatory bodies across the privacy and online safety spectrum can be difficult to parse. Australia’s two major digital regulators, eSafety and the Office of the Australian Information Commissioner (OAIC), are simplifying things by signing a memorandum of understanding (MoU) on working together to protect privacy and safety online.

The MoU aims to “guide and facilitate the parties’ collaboration, cooperation and mutual assistance in the performance of their respective statutory functions, and provide transparency about the parties’ efforts to coordinate activities and minimize duplication.” Under the terms, the parties will designate liaison contact officers to facilitate communication and exchange of information.

Generally, the document is a promise to work together in harmony on issues pertaining to the Privacy Act, the Online Safety Act, and the topics they address – including biometric data collection and age assurance requirements under the Social Media Minimum Age obligation.

“Both regulators have always recognized that combating certain harms requires privacy and safety to go hand in hand,” says eSafety Commissioner Julie Inman Grant. “For example, at eSafety we knew from the outset our implementation of the Social Media Minimum Age would need to recognize important rights, including the right to privacy. Our commitment to continue working collaboratively with the OAIC gives formal recognition to that principle and sets out how we will balance and promote privacy and safety for everyone.”

Inman Grant says the collaboration is timely, given new risks emerging with large language models (LLMs) and other AI technologies.

Australian Information Commissioner Elizabeth Tydd says that, with the MoU, “we’re not only formalizing cooperation, but building a foundation where privacy protections and online safety initiatives can better address specific harms side by side, ensuring Australians can be protected when interacting online.”

Four gaming platforms get transparency notices from eSafety

High on the list of issues for the newly-paired agencies to address is the problem of grooming, sexual exploitation and radicalization on online gaming platforms. A release from eSafety says it has handed “legally enforceable transparency notices” to Roblox, Minecraft, Fortnite and Steam, “amid concerns online games are being used by sexual predators to groom children and by extremist groups to spread violent propaganda and radicalize young people.”

Most Australian kids use one or more of these platforms. According to research by eSafety, around 9 in 10 children aged 8 to 17 in Australia play or have played online games. As such, the commissioner wants to know what these platforms are doing to identify and prevent harms, and asks how their systems, staffing and design choices are aligned with the Australian Government’s Basic Online Safety Expectations.

“Gaming platforms are amongst the online spaces most heavily used by Australian children, functioning not only as places to play, but also as places to socialize and communicate,” says eSafety head Julie Inman Grant. “Predatory adults know this and target children through grooming or embedding terrorist and violent extremist narratives in gameplay, increasing the risks of contact offending, radicalization and other off-platform harms.”

Because these platforms allow users to craft and share their own games, content can be created to normalize atrocities: for instance, gamifying the operation of a concentration camp, or the January 6 Capitol Building riot in the U.S.

“We’ve seen numerous media reports about grooming taking place on all four of these platforms as well as terrorist and violent extremist-themed gameplay. This includes Islamic State-inspired games and recreations of mass shootings on Roblox, as well as far right groups recreating fascist imagery in Minecraft.”

“These companies must take meaningful steps to prevent their services becoming onramps to abuse, extremist violence, radicalization or lifelong harm.” Per the release, a breach of a direction to comply with a code or standard can result in penalties of up to $49.5 million (roughly US$35.5 million) per breach, and failing to respond to a transparency reporting notice can lead to penalties of up to $825,000 (about US$590,000) a day.

Of the four platforms in eSafety’s sights, Roblox has gotten the worst press and the most legal scrutiny. This week, it agreed to pay a combined US$35.8 million to settle child online safety cases with the attorneys general of Nevada, Alabama and West Virginia.

It also has Australia’s attention. Under the Online Safety Codes and Standards, Roblox “committed to make a number of key changes earlier this year to protect children including more stringent age assurance, making accounts belonging to under 16s private by default, and introducing tools to prevent adult users from contacting under 16s without parental consent.” Testing on the implementation of these commitments will “validate their effectiveness.”

Canadian government worried Roblox is radicalizing youth 

Roblox recently launched new age-tiered accounts, and has regularly pledged to be a leader on online safety. Despite its efforts, concerns continue to rise over how adults are using gaming sites to lure children. The Logic has a report on a Public Safety Canada brief obtained through a freedom of information request, which singles out Roblox for being “of particular relevance as an entry point where vulnerable children and youth are targeted by malicious actors.”

Its unique combination of social interaction, user-generated content and a young user base means “Roblox may impact youth radicalization in unexpected ways.”

Canada is considering a social media age restriction and attendant age verification rules similar to Australia’s. Culture Minister Marc Miller, who is expected to table online safety legislation this year, says “the gaming industry is different than other platforms, and the more that they become sort of social media-ish, the more they expose themselves to responsibility and potentially regulation.”

Switzerland opens Swiyu bug bounty program to public

Switzerland has opened the bug bounty program for its upcoming digital identity wallet Swiyu to the public.

Since July 2025, the bug bounty program has been run by the Federal Office for Cyber Security (BACS) in collaboration with security testing platform Bug Bounty Switzerland. Until now, it operated in private mode, meaning that only selected hackers could participate.

So far, 12 vulnerabilities have been identified, with results published on GitHub each quarter.

By opening the program to other hackers, the Swiss government wants to strengthen trust in the e-ID and the Swiyu trust infrastructure, according to its announcement.

The main focus areas for the testing include the Swiyu Public Beta trust infrastructure with Android and iOS apps, generic components and registries, as well as the implementation of open standards such as Verifiable Credentials, Decentralized Identifiers and OpenID for the issuance and verification of digital credentials.

The Swiyu app is already available to citizens as a beta version, with plans to become a full-scale testing environment later in the year. The digital ID wallet stores users’ national eIDs while allowing them to control their data.

The technical implementation plan was formed in 2024 by the Swiss government, which has allocated 100 million francs (US$113.3 million) for the project.

Voice morphing attack blends identities to bypass voice biometrics: study

A new research paper explores a signal-level approach to voice morphing attacks that exposes vulnerabilities in biometric voice recognition systems.

The abstract describes Time-domain Voice Identity Morphing (TD-VIM) as “a novel approach for voice-based biometric morphing” which “enables the blending of voice characteristics from two distinct identities at the signal level.” TD-VIM operates directly in the time domain, achieving “identity blending without any embeddings from the backbone, or reference text.”

“In biometric systems, it is a common practice to associate each sample or template with a specific individual,” the authors say. Advanced Voice Identity Morphing (VIM) enables the generation of a sample that blends the identities of two or more speakers. “The morphed voice sample can be used to match all identities whose voice samples are employed to generate morphing attacks, thus posing a high risk to application scenarios, such as banking and finance, where single identity verification is essential.”

To explore the problem, the research team “created four distinct morphed signals based on morphing factors and evaluated their effectiveness using a comprehensive vulnerability analysis.” Data was benchmarked against the Generalized Morphing Attack Potential (G-MAP) metric, “measuring attack success across two deep-learning-based Speaker Verification Systems (SVS) and one commercial system, Verispeak.”
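The paper does not reproduce its exact formulation here, but the core idea of a time-domain morph can be sketched as a weighted combination of two time-aligned waveforms, with a morphing factor controlling how much of each identity is present. This is a rough illustrative sketch only; the function name, morphing factors and synthetic signals below are hypothetical stand-ins, not the study's pipeline (which would also handle alignment and perceptual quality):

```python
import numpy as np

def morph_voices(x_a: np.ndarray, x_b: np.ndarray, alpha: float) -> np.ndarray:
    """Blend two time-aligned voice waveforms at the signal level.

    alpha is the morphing factor: 0.0 returns speaker A's signal,
    1.0 returns speaker B's, and values in between mix the two.
    """
    n = min(len(x_a), len(x_b))            # truncate to a common length
    return (1.0 - alpha) * x_a[:n] + alpha * x_b[:n]

# Four morphed signals from four morphing factors, mirroring the study's
# setup of "four distinct morphed signals based on morphing factors".
rng = np.random.default_rng(0)
a = rng.standard_normal(16000)             # stand-in for 1 s of 16 kHz audio
b = rng.standard_normal(16000)
morphs = [morph_voices(a, b, f) for f in (0.2, 0.4, 0.6, 0.8)]
```

A morph near alpha = 0.5 is the interesting case for an attacker, since the blended sample may fall close enough to both enrolled templates to match either identity.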

“Our targeted analysis on Verispeak highlights TD-VIM’s success rate in challenging advanced SVS defenses,” says the conclusion. “The findings underscore TD-VIM’s potential to bypass sophisticated verification measures, emphasizing the importance of enhancing SVS security.”

The research comes out of the Indian Institute of Technology and the Norwegian University of Science and Technology (NTNU).

Many smartphones don’t detect face biometrics spoofs or properly warn consumers

Weak biometric liveness detection remains a significant “flaw” and “vulnerability” of most Android smartphones with facial unlocking. Most are still prone to simplistic, low-cost spoofs available to inexpert attackers, according to an analysis by Which?.

The publication notes that iPhones are generally immune to spoofs with printed 2D photos, due to the depth-sensing capability of Face ID. Some newer Google Pixel devices were also not fooled by flat images in Which? testing.

Native device face biometrics are sometimes identified as a convenience feature rather than a security one, and Which? acknowledges that “some manufacturers have made strides in providing clearer warnings during setup.”

Yet many Android smartphones do not, it says, including models from OnePlus and Motorola. OnePlus did just release a new phone with in-display 3D ultrasonic fingerprint biometrics from Qualcomm.

Which? labs has tested 208 phones since October of 2022, and found 2D printed photos were good enough spoofs to fool the face biometric unlock systems of 133 devices, or 64 percent of them.

Testing during 2025 revealed a 13 percent improvement, year-over-year, after a brutal 2024 in which the share of spoof-prone devices rose dramatically.

Samsung’s Galaxy S26 has adequate biometric presentation attack detection (PAD), Which? says, but previous models including the Galaxy S25 do not. At least the manufacturer properly warns consumers that its facial recognition is a convenience feature, rather than a high-security one.

While banking apps and digital wallets no longer accept 2D Android face biometrics as a secure authentication factor, Which? warns that users who rely on face biometrics to unlock their phone risk a thief armed with their photo reading their text messages, sending emails from their account (which could allow password resets for other services), accessing photos and other sensitive documents, and viewing information like wallet history and partial payment card details.

The publication advises all smartphone users to unlock their phones with a PIN or fingerprint biometrics. A complex PIN or password provides the “highest” security level, it says, while patterns provide the lowest because they are easily shoulder-surfed. Shoulder surfing is not mentioned in its password guidance.

Which? will also avoid giving “Best Buy” or “Great Value” recommendations to phones that do not adequately inform users about the limits of their face biometrics capabilities.

As for those apps that do recognize a difference between on-device convenience authentication factors and higher-security biometrics, hopefully they have strong injection attack detection (IAD).

America’s ‘Laser Dome’ starts here

The U.S. military is paving the way for the regular deployment of high-energy laser weapons on American soil for air defense amid the expanding threat of low-cost weaponized drones.

The Federal Aviation Administration and the U.S. Defense Department have reached a “landmark safety agreement” regarding the use of laser weapons to counter unauthorized drones at the U.S.-Mexico border following a safety assessment that concluded such countermeasures “do not pose undue risk to passenger aircraft,” the FAA announced on April 10.

The assessment and resulting agreement were the direct result of two laser incidents along the southern border of Texas in February, which prompted the FAA to abruptly close nearby airspace amid concerns over the potential impact on civilian air traffic. The incidents involved the U.S. Army’s 20 kilowatt Army Multi-Purpose High Energy Laser (AMP-HEL), a vehicle-mounted version of defense contractor AV’s LOCUST Laser Weapon System.

In the first incident, U.S. Customs and Border Protection personnel used an AMP-HEL on loan from the Pentagon to engage an unidentified target near Fort Bliss, triggering an airspace shutdown above El Paso on February 11. In the second, U.S. military personnel used an AMP-HEL near Fort Hancock to neutralize a “seemingly threatening” drone that turned out to belong to CBP, spurring another shutdown on February 27.

“Following a thorough, data-informed Safety Risk Assessment, we determined that these systems do not present an increased risk to the flying public,” FAA administrator Bryan Bedford said in a statement. “We will continue working with our interagency partners to ensure the National Airspace System remains safe while addressing emerging drone threats.”

The “first of its kind” safety assessment, conducted in early March by the FAA and the Pentagon’s Joint Interagency Task Force 401 (JIATF-401) counter-drone organization at White Sands Missile Range in New Mexico, reportedly yielded two significant conclusions: 1) the LOCUST’s automatic shutoff mechanism will consistently prohibit the system from firing under unsafe circumstances, a point that AV executives have emphasized in recent weeks, and 2) in the event of a system failure, the laser beam itself cannot inflict catastrophic damage even on aircraft flying at its maximum effective range, let alone those at cruising altitudes.

Here’s how Aaron Westman, AV senior director for business development, described the LOCUST’s safety protocols in a company blog post on March 23:

Every time an operator presses the “fire” button, the system runs through a series of automated checks. Some examples include:

  • Is the laser pointing away from protected “keep-out” zones?
  • Are all internal subsystems operating within safe parameters?
  • Is the system properly locked onto a target?
  • Are safety interlock switches engaged?
  • Are all software safety checks satisfied?

Each of these checks acts as a safety “vote.”

If any subsystem registers a “no vote,” the laser simply will not fire. An operator can press the trigger—and nothing happens. The system refuses to engage until all conditions are verified as safe.

These automated safeguards are built into both the hardware and the software of the system.
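The "safety vote" logic described above amounts to an all-checks-must-pass gate: every subsystem must cast a "yes" vote before the weapon can fire. A minimal sketch, with entirely hypothetical check names (this is not AV's actual implementation):

```python
def fire_permitted(votes: dict[str, bool]) -> bool:
    """The laser fires only if every subsystem casts a 'yes' vote."""
    return all(votes.values())

# Example: one failed check is enough to block the engagement.
votes = {
    "clear_of_keep_out_zones": True,       # pointing away from protected zones?
    "subsystems_within_safe_parameters": True,
    "target_lock_confirmed": True,
    "safety_interlocks_engaged": True,
    "software_checks_passed": False,       # a single 'no' vote...
}

assert not fire_permitted(votes)           # ...and the system refuses to fire
```

The operator can press the trigger, but unless `fire_permitted` returns true for every check, nothing happens.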

Here’s how DefenseScoop described the LOCUST’s potential effects on passing airframes based on an account from Army Col. Scott McLellan, JIATF-401 deputy director, of the testing at White Sands:

McLellan said the evaluation involved “localized” firing of the AMP-HEL from various distances at the fuselage of a Boeing 767 airliner that testers lugged onto White Sands to assess the system’s damaging effects, “or lack thereof” on aircraft material. He said it aimed to “disprove some myths” about the capability, noting “that energy clearly dissipates over time and space and doesn’t have the effect everyone thinks it does as far as lasers are concerned.”

A JIATF-401 spokesperson said the laser was fired at its “maximum effective range for up to 8 seconds” at the grounded fuselage, “demonstrating that even at full intensity, the laser caused no structural damage to the aircraft.”

As drone warfare spreads beyond distant conflicts, laser weapons are an increasingly attractive domestic countermeasure. While kinetic interceptors and electronic warfare may be considered suitable for chaotic battlefields, their potential for collateral effects makes them far too risky for consistent domestic applications. And even if collateral damage weren’t a concern, expending expensive missiles on the 1,000 cartel-operated drones that cross the border with Mexico monthly is economically unsustainable, especially for a Pentagon that’s already rapidly burning through munitions as part of Operation Epic Fury against Iran. On paper, the argument seems obvious: why not save those critical interceptors for high-end threats overseas and let domestic laser emplacements, with their deep magazines and minimal cost-per-shot, pull counter-drone duty at home?

Using laser weapons for domestic air defense wouldn’t be unprecedented. France deployed two 2 kW High Energy Laser for Multiple Applications – Power (HELMA-P) systems to secure the airspace over the country’s Île-de-France region during the 2024 Paris Olympics and Paralympics. This past September, China’s People’s Liberation Army deployed several laser weapons across Beijing during a major military parade marking the 80th anniversary of Japan’s defeat at the end of World War II. As of January, the UK Ministry of Defence was reportedly drawing up plans to build a domestic laser screen, albeit composed of lower-power laser dazzlers, to protect military installations and other critical infrastructure. The Pentagon has even already considered laser weapons to reinforce the airspace above Secretary of Defense Pete Hegseth and Secretary of State Marco Rubio’s residences at Fort McNair in Washington, D.C. following a series of unauthorized drone incursions there.

Indeed, there’s a distinct possibility that laser weapons could see increasing domestic applications amid the U.S. military’s growing appetite for novel drone defenses. On April 2, JIATF-401 announced that it had funneled $20 million in counter-drone systems like the Dronebuster EW handset and Smart Shooter computerized riflescope to the U.S.-Mexico border in just four months. Days later, the task force announced $100 million to enhance counter-drone capabilities for the 2026 FIFA World Cup starting in June “to protect stadiums and fan zones in 11 cities across nine states,” part of a larger $600 million surge in counter-drone systems that also allocated $158 million to “defend the nation’s highest-priority defense critical infrastructure.” With the Pentagon asking for $580 million in R&D funding just for JIATF-401 in its fiscal year 2027 budget request (and potentially $800 million in procurement cash), the task force appears poised to explore any and all possible solutions to the drone problem—and operationally, the FAA-Pentagon safety agreement helps establish laser weapons as a viable option.

That said, the safety agreement on its own is unlikely to open the floodgates for a sudden spate of laser weapon deployments along the U.S.-Mexico border, let alone for major events like the World Cup or critical infrastructure just yet. First, the agreement doesn’t appear to clarify who has final say in authorizing a laser engagement when U.S. military, CBP, and FAA jurisdictions overlap—the precise ambiguity that yielded February’s airspace closures and, until resolved, will complicate future engagements during a fast-moving crisis. Second, the U.S. military’s arsenal of operational laser weapons is currently limited despite a stated goal of rapidly fielding new systems at scale within three years. Even with clear plans to surge directed energy research and development for homeland defense under President Donald Trump’s “Golden Dome for America” missile shield, the age of sleek beam directors quietly standing watch along the U.S.-Mexico border remains a long way off.

The FAA agreement may end up laying the foundation for a true domestic laser air defense architecture—a “Laser Dome” in all but name. Whether the U.S. military actually builds it, however, will depend not just on the Pentagon’s promise to deploy laser weapons at scale, but on whether Washington can finally sort out who’s in charge when a beam crosses into civilian airspace.

This article is republished with permission from Laser Wars, a newsletter about military laser weapons and other futuristic defense technology.