Category: Crime
Germany proposes law to ban sexualized deepfakes after scandal

A deepfake pornography scandal involving popular German actresses and TV presenter Collien Fernandes may lead to new legislation against digital violence that aims to prohibit the creation of non-consensual synthetic nudes.
German Federal Justice Minister Stefanie Hubig presented a draft law last week that tackles a number of actions that can be classified as “digital violence,” including the creation and distribution of intimate images and videos as well as cyberstalking.
“It is not the victims who should be silenced, but the perpetrators – and digital violence must finally be consistently punished,” says Hubig.
The German Ministry of Justice has proposed changes to the German Criminal Code (StGB) punishing the dissemination of computer-generated content that can “significantly damage the reputation of a living or deceased person” with up to two years in prison. In cases involving images of children or adolescents, harsher penalties may apply, Heise reports.
Germany was rocked last month after Fernandes accused her husband, actor and TV host Christian Ulmen, of sending fake, sexualized images of her to various men. Ulmen was also accused of creating a manipulated version of her voice to have explicit phone conversations, leading around 30 men to believe they were having an online relationship with her, according to AFP.
“I expect harsher penalties in Germany to make it clear to perpetrators that this is unacceptable!” says Fernandes.
Fernandes’s attorney has described the violation as a “digital Pelicot case,” referencing French rape survivor Gisèle Pelicot, who was drugged by her husband and assaulted by unknown men while unconscious.
The TV presenter received support from a group of 250 women from politics, business and culture, which published 10 “demands,” including criminalizing non-consensual sexualized deepfakes.
The German government has advocated tougher action against AI-generated nudes even before the Fernandes case. Minister Hubig described the proliferation of sexualized nudes on social platform X as “appalling” and called for greater accountability of tech platforms.
Harvard law professor Rebecca Tushnet, however, says it would be hard to put the genie back in the bottle. While large companies such as Anthropic and OpenAI have put guardrails in place against sexualized deepfakes, creating them is still possible through numerous “nudify” apps, the co-director of the Berkman Klein Center for Internet and Society told NPR.
“My honest guess is that what we will continue to have is a situation where the well-capitalized, you know, prominent ones – possibly with the exception of Grok – are built with guardrails that prevent this kind of thing, or at least try very hard to prevent this kind of thing,” says Tushnet. “But there’s a sort of little substrate of scammy little apps that offer and sometimes deliver the ability to do this for people who are willing to go looking.”
The U.S. Congress passed the TAKE IT DOWN Act in 2025, requiring platforms to remove non-consensual intimate deepfake content. But this doesn’t prevent such images from being reposted elsewhere, or slightly tweaked and reposted, adds Tushnet.
In March, EU lawmakers voted to ban AI systems that generate sexualized deepfakes, after a backlash from European governments over millions of sexualized deepfakes produced by X’s AI chatbot Grok.
The owner of X, Elon Musk, was summoned by French prosecutors as part of an investigation into Grok and other issues related to the platform, including interference in domestic politics. The tech entrepreneur did not appear at a voluntary interview on Monday.
Musk has also been placed under investigation by the European Commission over Grok’s production of sexualized images.
FBI report reveals cybercrime losses hit $20B high with phishing, spoofing dominant

Cybercrime losses have risen significantly, surpassing $20 billion, while phishing and spoofing remain the dominant cyber-enabled fraud activity, the FBI reports in its annual cybercrime report.
The FBI’s Internet Crime Report 2025, compiled from complaints to the FBI’s Internet Crime Complaint Center (IC3), shows that losses climbed 26 percent from 2024 to reach a total of $20.88 billion. The average loss was $20,699.
The over-60 age demographic suffered the worst by a wide margin, with losses of $7.75 billion across 201,266 complaints. The demographic just below — the 50 to 59 group — suffered the second-most, with $3.68 billion in losses and 124,820 complaints.
Combined, these two demographics (50 and older) accounted for more than half of all losses in 2025. Phishing and spoofing was the most common complaint category, with 191,561 reports. Extortion followed with 89,129 complaints.
Identity theft and impersonation are among the most financially damaging schemes recorded, with the former accounting for $185.8 million in losses, while government impersonation scams resulted in $797.9 million in losses. The most damaging crime types were investment fraud, business email compromise, tech and customer support scams, personal data breaches, and confidence or romance scams. Investment fraud alone accounted for $8.65 billion in losses.
The FBI notes that “cyber-enabled fraud” — where criminals use the Internet or other technology to commit fraud involving the theft of money, data or identities, or the creation of counterfeit goods or services — now represents nearly 85 percent of all losses reported to IC3 and 45 percent of all complaints, underscoring its outsized impact.
As identity‑centric attacks grow more sophisticated, the FBI is urging organizations to strengthen authentication and access controls. Recommended practices include eliminating default passwords and credentials when installing software, and requiring all accounts with password logins to comply with NIST standards.
Another recommendation to protect against ransomware is to enable multi‑factor authentication across systems such as webmail, VPNs and administrative accounts.
Voice impersonation a systemic challenge for healthcare
Jason Barr argues that the FBI’s IC3 report reveals how cybercrime has shifted. The VP of healthcare for Pindrop sees the growth in social engineering tactics, real-time deception, and AI-enabled impersonations as part of a pattern.
“Many of the highest-loss categories appear to involve some form of human interaction — conversations, not just code,” he writes on the Pindrop blog.
“To me, that suggests a meaningful shift in the threat model. Security is no longer defined solely at login. It’s being tested in real time, at the moment of interaction.”
The result is that identity is no longer something enterprises can verify only at login; it must be continuously assessed during the interaction itself, using biometrics such as voice and behavior alongside device intelligence. Continuously assessing authenticity could combat the threat of genAI and injection attacks.
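The continuous-assessment idea Barr describes can be sketched as a per-turn risk score rather than a one-time login gate. The weights, thresholds, and signal names below are invented for illustration; a real system would draw these signals from vendor-specific voice-biometric and device-intelligence APIs:

```python
# Hypothetical sketch of continuous (per-turn) call-risk scoring, as opposed to
# a one-time check at login. All weights and thresholds here are made up.
def turn_risk(voice_match: float, device_trust: float, injection_flag: bool) -> float:
    """Combine per-turn signals (each in [0, 1]) into a risk score in [0, 1]."""
    risk = 0.5 * (1.0 - voice_match) + 0.3 * (1.0 - device_trust)
    if injection_flag:  # e.g. audio artifacts suggesting a replayed or synthetic stream
        risk += 0.2
    return min(risk, 1.0)

def assess_call(turns):
    """Escalate as soon as any turn crosses the review threshold."""
    for i, t in enumerate(turns):
        if turn_risk(*t) > 0.6:
            return f"escalate at turn {i}"
    return "pass"

# A caller who starts out plausible but degrades mid-call is caught mid-interaction,
# which a login-only check would miss:
print(assess_call([(0.9, 0.9, False), (0.4, 0.5, True)]))
```

The point of the sketch is the loop: every conversational turn is re-scored, so a synthetic voice injected after a legitimate-looking start still triggers review.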
Barr believes this shift has serious implications for healthcare, which relies heavily on phone-based workflows. These voice channels sit at the hub of sensitive operations, providing access to Protected Health Information (PHI), benefits and internal systems. Yet they remain among the least protected.
Healthcare identity is also complex with patients, caregivers, providers and staff often acting on behalf of others. This complexity is exacerbated by fragmented systems, Barr argues, creating ambiguity that traditional IAM tools struggle with.
Authentication methods such as knowledge‑based questions, one‑time passwords and agent judgement have become increasingly fragile in an AI‑driven threat landscape. Synthetic voices, stolen data and automated impersonation tools now make it far easier to bypass these controls.
AI voice cloning is growing fast enough that it drew congressional scrutiny in the U.S. New Hampshire Senator Maggie Hassan last week pressed four major companies — ElevenLabs, LOVO, Speechify and VEED — for detailed answers about what they are doing to prevent scammers from turning synthetic speech tools into engines of fraud.
Meanwhile, Barr notes that attackers are using AI to erode trust in the voice channel itself. Synthetic callers can convincingly mimic real people, probe authentication flows and launch targeted impersonation attempts at scale.
For healthcare, Barr concludes, the inability to verify who or what is on the other end of a call represents systemic exposure, with direct implications for PHI breaches, account takeover, fraudulent claims and downstream attacks such as ransomware.
Child Pregnancies Surge in Gaza Amid Reports of Hamas Fighters Demanding Sex From ‘Wives of Martyrs’ for Food
The sexual depravity that Hamas proudly broadcast to the world during its Oct. 7, 2023, rampage across southern Israel has…
Cloud development platform Vercel was hacked
Vercel, a major development platform that hosts and deploys web apps, was compromised, and the hackers are attempting to sell stolen data. A person claiming to be a member of ShinyHunters, which was behind the recent hack of Rockstar Games, posted some data online, including employee names, email addresses, and activity time stamps. Vercel confirmed […]
The attacks on Sam Altman are a warning for the AI world
Before allegedly throwing a Molotov cocktail at OpenAI CEO Sam Altman’s home, the 20-year-old accused attacker wrote about his fear that the AI race would cause humans to go extinct, The San Francisco Chronicle found. Two days later, Altman’s home appeared to be targeted a second time, according to The San Francisco Standard. Only a […]
Newly-Elected Hungarian PM Says Orbán Was Paying CPAC, Calls It a ‘Crime’ That ‘Will Have to Be Investigated’
Péter Magyar, the Hungarian Prime Minister-Elect, announced that outgoing Prime Minister Viktor Orbán’s government had been paying CPAC, but he would put an immediate end to that — and even went so far as to call it a “crime” that needed to be investigated.