Australian regulators come together on privacy, online safety

The relationship between various regulatory bodies across the privacy and online safety spectrum can be difficult to parse. Australia’s two major digital regulators, eSafety and the Office of the Australian Information Commissioner (OAIC), are simplifying things by signing a memorandum of understanding (MoU) on working together to protect privacy and safety online.

The MoU aims to “guide and facilitate the parties’ collaboration, cooperation and mutual assistance in the performance of their respective statutory functions, and provide transparency about the parties’ efforts to coordinate activities and minimize duplication.” Under the terms, the parties will designate liaison contact officers to facilitate communication and exchange of information.

Generally, the document is a promise to work together in harmony on issues pertaining to the Privacy Act, the Online Safety Act, and the topics they address – including biometric data collection and age assurance requirements under the Social Media Minimum Age obligation.

“Both regulators have always recognized that combating certain harms requires privacy and safety to go hand in hand,” says eSafety Commissioner Julie Inman Grant. “For example, at eSafety we knew from the outset our implementation of the Social Media Minimum Age would need to recognize important rights, including the right to privacy. Our commitment to continue working collaboratively with the OAIC gives formal recognition to that principle and sets out how we will balance and promote privacy and safety for everyone.”

Inman Grant says the collaboration is timely, given new risks emerging with large language models (LLMs) and other AI technologies.

Australian Information Commissioner Elizabeth Tydd says that, with the MoU, “we’re not only formalizing cooperation, but building a foundation where privacy protections and online safety initiatives can better address specific harms side by side, ensuring Australians can be protected when interacting online.”

Four gaming platforms get transparency notices from eSafety

High on the list of issues for the newly paired agencies to address is the problem of grooming, sexual exploitation and radicalization on online gaming platforms. A release from eSafety says it has handed “legally enforceable transparency notices” to Roblox, Minecraft, Fortnite and Steam, “amid concerns online games are being used by sexual predators to groom children and by extremist groups to spread violent propaganda and radicalize young people.”

Most Australian kids use one or more of these platforms. According to research by eSafety, around 9 in 10 children aged 8 to 17 in Australia play or have played online games. As such, the commissioner wants to know what these platforms are doing to identify and prevent harms, and asks how their systems, staffing and design choices are aligned with the Australian Government’s Basic Online Safety Expectations.

“Gaming platforms are amongst the online spaces most heavily used by Australian children, functioning not only as places to play, but also as places to socialize and communicate,” says eSafety head Julie Inman Grant. “Predatory adults know this and target children through grooming or embedding terrorist and violent extremist narratives in gameplay, increasing the risks of contact offending, radicalization and other off-platform harms.”

Because these platforms allow users to craft and share their own games, content can be created to normalize atrocities: for instance, gamifying the operation of a concentration camp, or the January 6 Capitol Building riot in the U.S.

“We’ve seen numerous media reports about grooming taking place on all four of these platforms as well as terrorist and violent extremist-themed gameplay. This includes Islamic State-inspired games and recreations of mass shootings on Roblox, as well as far right groups recreating fascist imagery in Minecraft.”

“These companies must take meaningful steps to prevent their services becoming onramps to abuse, extremist violence, radicalization or lifelong harm.” Per the release, a breach of a direction to comply with a code or standard can result in penalties of up to $49.5 million (roughly US$35.5 million) per breach, and failing to respond to a transparency reporting notice can lead to penalties of up to $825,000 (about US$590,000) a day.

Of the four platforms in eSafety’s sights, Roblox has gotten the worst press and the most legal scrutiny. This week, it agreed to pay a combined US$35.8 million to settle child online safety cases with the attorneys general of Nevada, Alabama and West Virginia.

It also has Australia’s attention. Under the Online Safety Codes and Standards, Roblox “committed to make a number of key changes earlier this year to protect children including more stringent age assurance, making accounts belonging to under 16s private by default, and introducing tools to prevent adult users from contacting under 16s without parental consent.” Testing on the implementation of these commitments will “validate their effectiveness.”

Canadian government worried Roblox is radicalizing youth 

Roblox recently launched new age-tiered accounts, and has regularly pledged to be a leader on online safety. Despite its efforts, concerns continue to rise over how adults are using gaming sites to lure children. The Logic has a report on a Public Safety Canada brief obtained through a freedom of information request, which singles out Roblox for being “of particular relevance as an entry point where vulnerable children and youth are targeted by malicious actors.”

Its unique combination of social interaction, user-generated content and young user base means “Roblox may impact youth radicalization in unexpected ways.”

Canada is considering a social media age restriction and attendant age verification rules similar to Australia’s. Culture Minister Marc Miller, who is expected to table online safety legislation this year, says “the gaming industry is different than other platforms, and the more that they become sort of social media-ish, the more they expose themselves to responsibility and potentially regulation.”

Japan moves toward age verification for social media, filters and risk labels

Japan’s policymakers are considering their own version of age assurance for social media, with content filtering taking center stage.

Nikkei Asia reports that Japan is considering requiring social media companies to apply age-based content filtering by default to tackle addiction among minors. The government is also weighing a system to measure the risks of each platform.

While most companies currently leave filtering turned off by default, the Ministry of Internal Affairs and Communications wants social media providers to turn age-based filtering on from the start. The age ranges have yet to be established.

The ministry is considering age verification systems developed jointly with mobile carriers and operating system providers. Mobile carriers in Japan already confirm customer identities when devices are purchased.

Under current Japanese law, social media companies are only required to make an effort to promote appropriate use by minors, and the measures they adopt vary widely. Parents and guardians are also free to turn off filtering tools, prompting doubts about the effectiveness of the existing framework.

In addition, the government is preparing a new evaluation system that would rate social media platforms based on risks such as excessive use or exposure to harmful content. The ratings would highlight features like content filters, ad-display restrictions and time-limit settings. This would enable users to quickly understand the risk profile of each service.

These proposals were presented today at a meeting of an expert panel chosen by the communications ministry, with a final report expected as early as next month. Any resulting guidelines or legal revisions would then be developed by the relevant agencies, led by the Children and Families Agency.

Elsewhere in Asia, Indonesia has banned social media for under 16s, with Communication and Digital Affairs Minister Meutya Hafid saying the regulation applies to around 70 million minors and framing it as a way to “reclaim the sovereignty” of children’s future.

Malaysia is preparing its own “digital seatbelt” for social media regulation in 2026, which could include identity verification technology and integration with MyDigital ID. The combination could result in the world’s most rigorous checks on social media access for under 16s.

Age assurance is dawning for social media, and civil liability could drive it forward even where regulation stalls. U.S. juries have recently awarded plaintiffs millions of dollars in damages in state trials in which the likes of Meta and YouTube faced claims over allegedly addictive features such as infinite scrolling and algorithmic amplification.

Greece exempts Britons from EES biometric registration for summer

Alarm at severe delays caused by the EES rollout in European airports may have dampened British enthusiasm for the traditional summer getaway, but Athens is looking to get Britons back on board.

Greece has just announced a biometric exemption for British visitors, meaning they will not be required to provide fingerprint and face biometrics. Nearly five million Brits headed to the Mediterranean country in 2025, making them a major contributor to Greece’s tourism industry.

Eleni Skarvedi, director of the Greek National Tourism Organization in the UK, told The Independent that the move is intended to ensure a “smoother and more efficient travel experience in Greece.”

“Practically, this means that the entry process in place before the implementation of the EES will remain unchanged,” she said to the newspaper.

Britons will instead have their passports manually checked and stamped, while personal data is “skimmed” and logged. EES kiosks at Athens airport will remain open to other third-country nationals, but not to British travelers.

The Greek embassy in London made it clear that British passport holders are excluded from biometric registration at all Greek border crossing points. However, Brussels has taken a somewhat dim view of Athens’ decision.

A spokesperson for the European Commission said contact has been made with Greek authorities to “receive clarifications” and that the legal framework underpinning EES “does not foresee blanket exemption” for specific third-country nationals and for an extended duration.

Generally, however, the EES does allow flexibility for the registration of biometrics with suspension of collection possible at specific borders and for a limited duration in cases of “exceptional circumstances that lead to excessive waiting times,” the spokesperson said, quoted in The Independent.

The decision by Greece to suspend the EES system across its border checkpoints just for British visitors and for the whole of summer may cause tension with the EU. Brussels’ concern may grow if other countries that are popular destinations for Britons, such as Spain and Portugal, decide to follow suit. While there is no indication of that just yet, France has already bent the rules by allowing visitors in cars to forgo EES at French checkpoints at the Port of Dover in southeast England.

The first week of Europe’s EES was marred by delays even as there was widespread suspension of the biometrics enrollment that forms the foundation of the system. Airports and airlines called for more flexible implementation rules in response, but appeared in some cases to have botched staffing and organization.

Germany proposes law to ban sexualized deepfakes after scandal

A deepfake pornography scandal involving popular German actress and TV presenter Collien Fernandes may lead to new legislation against digital violence that aims to prohibit the creation of non-consensual synthetic nudes.

German Federal Justice Minister Stefanie Hubig presented a draft law last week that tackles a number of actions that can be classified as “digital violence,” including the creation and distribution of intimate images and videos as well as cyberstalking.

“It is not the victims who should be silenced, but the perpetrators – and digital violence must finally be consistently punished,” says Hubig.

The German Ministry of Justice has proposed changes to the German Criminal Code (StGB) punishing the dissemination of computer-generated content that can “significantly damage the reputation of a living or deceased person” with up to two years in prison. In cases involving images of children or adolescents, harsher penalties may apply, Heise reports.

Germany was rocked last month after Fernandes accused her husband, actor and TV host Christian Ulmen, of sending fake, sexualized images to various men. Ulmen was also accused of creating a manipulated version of her voice to have explicit phone conversations, leading around 30 men to believe they were having an online relationship with her, according to AFP.

“I expect harsher penalties in Germany to make it clear to perpetrators that this is unacceptable!” says Fernandes.

Fernandes’s attorney has described the violation as a “digital Pelicot case,” referencing French rape survivor Gisele Pelicot, who was drugged by her husband and assaulted by unknown men while unconscious.

The TV presenter received support from a group of 250 women from politics, business and culture, which published 10 “demands,” including criminalizing non-consensual sexualized deepfakes.

The German government has advocated tougher action against AI-generated nudes even before the Fernandes case. Minister Hubig described the proliferation of sexualized nudes on social platform X as “appalling” and called for greater accountability of tech platforms.

Harvard law professor Rebecca Tushnet, however, says that it would be hard to put the genie back into the bottle. While large companies such as Anthropic and OpenAI have put guardrails in place against sexualized deepfakes, their creation is still possible through numerous “nudify” apps, the co-director of the Berkman Klein Center for Internet and Society told NPR.

“My honest guess is that what we will continue to have is a situation where the well-capitalized, you know, prominent ones – possibly with the exception of Grok – are built with guardrails that prevent this kind of thing, or at least try very hard to prevent this kind of thing,” says Tushnet. “But there’s a sort of little substrate of scammy little apps that offer and sometimes deliver the ability to do this for people who are willing to go looking.”

The U.S. Congress passed the TAKE IT DOWN Act in 2025, requiring platforms to remove non-consensual intimate imagery, including deepfake content. But this doesn’t prevent such images from being reposted elsewhere or slightly tweaked and reposted, adds Tushnet.

In March, EU lawmakers voted on a ban on AI systems that generate sexualized deepfakes, after a backlash from European governments over millions of sexualized deepfakes produced by X’s AI chatbot Grok.

The owner of X, Elon Musk, was summoned by French prosecutors as part of an investigation into Grok and other issues related to the platform, including interference in domestic politics. The tech entrepreneur did not appear at a voluntary interview on Monday.

Musk has also been placed under investigation by the European Commission over Grok’s production of sexualized images.

FBI report reveals cybercrime losses hit $20B high with phishing, spoofing dominant

Cybercrime losses have risen significantly, surpassing $20 billion, with phishing and spoofing the dominant cyber-enabled fraud activity, the FBI says in its annual cybercrime report.

The FBI’s Internet Crime Report 2025, compiled from complaints to the FBI’s Internet Crime Complaint Center (IC3), shows that losses climbed 26 percent from 2024 to reach a total of $20.88 billion. The average loss was $20,699.

People over 60 suffered the worst losses by a wide margin, reporting $7.75 billion across 201,266 complaints. The 50 to 59 age group was second, with $3.68 billion in losses across 124,820 complaints. Combined, these two demographics accounted for more than half of all losses in 2025.
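
That “more than half” figure holds up arithmetically, as the quick check below shows using the loss totals cited above.

```python
# Quick check of the 50-plus share of losses, using the figures cited above.
total_losses = 20.88e9   # total reported losses in 2025
over_60 = 7.75e9         # losses reported by people over 60
age_50_to_59 = 3.68e9    # losses reported by the 50 to 59 group

combined = over_60 + age_50_to_59
share = combined / total_losses
print(f"Combined: ${combined / 1e9:.2f}B, or {share:.0%} of all reported losses")
# -> Combined: $11.43B, or 55% of all reported losses
```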

Phishing and spoofing is the most common complaint category, with 191,561 reports. Extortion followed with 89,129 complaints.

Identity theft and impersonation are among the most financially damaging schemes recorded, with identity theft accounting for $185.8 million in losses and government impersonation scams for $797.9 million. The most damaging crime types overall were investment fraud, business email compromise, tech and customer support scams, personal data breaches, and confidence or romance scams. Investment fraud alone accounted for $8.65 billion in losses.

The FBI notes that “cyber-enabled fraud” now represents nearly 85 percent of all losses reported to IC3 and 45 percent of all complaints, underscoring how damaging it has become. The term covers crimes in which criminals use the internet or other technology to commit fraud, involving the theft of money, data or identities, or the creation of counterfeit goods or services.

As identity-centric attacks grow more sophisticated, the FBI is urging organizations to strengthen authentication and access controls. Recommended practices include eliminating default passwords and credentials when installing software, and requiring all accounts with password logins to comply with NIST standards.
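
As a rough illustration of what NIST-aligned password handling can look like in practice, the sketch below applies a few core recommendations from NIST SP 800-63B; the helper names and blocklist are hypothetical and are not drawn from the FBI report or any particular product.

```python
# Illustrative sketch of NIST SP 800-63B-style password screening.
# Hypothetical helpers only; not from the FBI report or a specific vendor.

MIN_LENGTH = 8  # 800-63B minimum for user-chosen passwords

# Stand-in for a compromised-password blocklist; real deployments would
# screen against a large breach corpus rather than a tiny hard-coded set.
KNOWN_BREACHED = {"password", "123456", "qwerty123", "letmein"}

def password_acceptable(candidate: str, username: str) -> tuple[bool, str]:
    """Length plus blocklist screening, which 800-63B favors over
    composition rules and forced periodic rotation."""
    if len(candidate) < MIN_LENGTH:
        return False, f"must be at least {MIN_LENGTH} characters"
    if candidate.lower() in KNOWN_BREACHED:
        return False, "appears in a known breach corpus"
    if username.lower() in candidate.lower():
        return False, "must not contain the username"
    return True, "ok"  # verifiers should also accept long passphrases (64+ characters)

print(password_acceptable("correct horse battery staple", "jsmith"))  # (True, 'ok')
print(password_acceptable("password", "jsmith"))                      # (False, ...)
```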

Another recommendation to protect against ransomware is to enable multi-factor authentication across systems such as webmail, VPNs and administrative accounts.

Voice impersonation a systemic challenge for healthcare

Jason Barr argues that the FBI’s IC3 report reveals how cybercrime has shifted. The VP of healthcare for Pindrop sees the growth in social engineering tactics, real-time deception, and AI-enabled impersonations as part of a pattern.

“Many of the highest-loss categories appear to involve some form of human interaction — conversations, not just code,” he writes on the Pindrop blog.

“To me, that suggests a meaningful shift in the threat model. Security is no longer defined solely at login. It’s being tested in real time, at the moment of interaction.”

The result is that identity is no longer something enterprises can verify only at login; it must be continuously assessed during the interaction itself, using biometrics such as voice and behavior alongside device intelligence. Continuously assessing authenticity could help counter the threat of generative AI and injection attacks.
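
As a minimal sketch of what “continuously assessed” can mean in practice, the snippet below re-scores a call on every audio chunk rather than only at login; the signal names, weights and thresholds are hypothetical placeholders, not Pindrop’s or any other vendor’s actual API.

```python
# Minimal sketch of continuous, per-chunk identity assessment during a call.
# All signals, weights and thresholds here are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class ChunkSignals:
    voice_match: float   # 0..1 similarity to the enrolled caller's voiceprint
    liveness: float      # 0..1 likelihood the audio is live speech, not synthetic or replayed
    device_risk: float   # 0..1 risk derived from device and carrier metadata

def risk_score(s: ChunkSignals) -> float:
    """Fold per-chunk signals into one risk value (higher is riskier)."""
    return 0.4 * (1 - s.voice_match) + 0.4 * (1 - s.liveness) + 0.2 * s.device_risk

def assess_call(chunks, step_up=0.5, terminate=0.8):
    """Re-evaluate risk on every chunk instead of trusting the initial login."""
    for i, chunk in enumerate(chunks):
        score = risk_score(chunk)
        if score >= terminate:
            return f"chunk {i}: end call and flag for review (risk {score:.2f})"
        if score >= step_up:
            return f"chunk {i}: step up authentication (risk {score:.2f})"
    return "call completed without step-up"

# Example: the third chunk starts to look synthetic mid-conversation.
call = [
    ChunkSignals(0.95, 0.97, 0.10),
    ChunkSignals(0.93, 0.95, 0.10),
    ChunkSignals(0.60, 0.30, 0.40),
]
print(assess_call(call))  # -> chunk 2: step up authentication (risk 0.52)
```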

Barr believes this shift has serious implications for healthcare, which relies heavily on phone-based workflows. These voice channels sit at the hub of sensitive operations, providing a path to Protected Health Information (PHI), benefits and internal systems, yet they remain some of the least protected.

Healthcare identity is also complex, with patients, caregivers, providers and staff often acting on behalf of others. This complexity is exacerbated by fragmented systems, Barr argues, creating ambiguity that traditional IAM tools struggle with.

Authentication methods such as knowledge-based questions, one-time passwords and agent judgement have become increasingly fragile in an AI-driven threat landscape. Synthetic voices, stolen data and automated impersonation tools now make it far easier to bypass these controls.

The pace of growth in AI voice cloning is such that it has drawn congressional scrutiny in the U.S. New Hampshire Senator Maggie Hassan last week pressed four major companies, ElevenLabs, LOVO, Speechify and VEED, for detailed answers about what they are doing to stop scammers from turning synthetic speech tools into engines of fraud.

Meanwhile, Barr notes that attackers are using AI to erode trust in the voice channel itself. Synthetic callers can convincingly mimic real people, probe authentication flows and launch targeted impersonation attempts at scale.

For healthcare, Barr concludes, the inability to verify who or what is on the other end of a call represents systemic exposure, with direct implications for PHI breaches, account takeover, fraudulent claims and downstream attacks such as ransomware.