Category: Chatbots
Effectiveness of age gates for social media algorithms, chatbots next targets in UK

The algorithms that social media platforms use to determine what is presented to young UK users in their feeds will be subject to auditing under the Online Safety Act.
Ofcom Chief Executive Melanie Dawes told the Financial Times that her agency would pursue enforcement action against social media platforms that cannot prove their algorithms prevent children under 18 years old from exposure to restricted content.
The regulator has also discussed how the OSA applies to chatbots and generative AI tools.
Ofcom will look at content moderation and recommendation systems on sites including YouTube, Roblox and Facebook to make sure they do not algorithmically deliver adult content.
OpenAI has acknowledged the applicability of the OSA to ChatGPT, Dawes says, while her comments on Grok indicate that X has not.
Lawmakers in the EU and Australia are currently grappling with whether and how to restrict social media access to young people.
In total, Ofcom currently has 69 investigations into possible violations of the OSA and its age verification rules, according to the report.
Dawes also notes a significant drop-off in VPN use since it surged in the wake of the OSA’s launch.
Liberal Democrat Lord Timothy Clement-Jones introduced a motion “to regret” in the upper chamber last week, arguing that the regulator’s child safety codes set “a ceiling, not a floor” on online child protection, MLex reports. He was joined by legislators from other parties.
Lawmakers argued that the codes are not specific enough to meet the differing needs of children at different ages, that civil society feedback was not sufficiently acted on, and that live-streaming and algorithms that promote harmful but legal content need more scrutiny.
AI chatbots face age assurance clampdown due to teen suicide, psychosis

What are nightmares made of? Halloween lore would say vampires and werewolves and witches and ghosts. But it turns out the ghoulies lurking in our homes aren’t movie monsters. In fact, they may be large language models (LLMs) deployed as AI chatbots.
As generative AI tech has swept the globe, powered by relentless evangelizing from its developers and investors, so too has a new sickness descended. AI is driving people insane: this week, OpenAI disclosed that more than a million people a week display suicidal intent when conversing with ChatGPT and that hundreds of thousands have signs of psychosis. And the tally is growing of parents who place the blame for their children’s suicides on the shoulders of chatbots.
Regulators are taking note. This month, Canada’s AI minister publicly pondered age checks for AI chatbots, and U.S. states are already flexing legislative muscle. Now, the issue has reached America’s federal government.
Hawley bill aims to stop AI from ‘breaking’ children
U.S. legislators are moving quickly on regulations for large language model (LLM) chatbots, as the tech marketed as the future of the global economy continues to show its capacity to send people to the grave.
Missouri Senator Josh Hawley introduced S3062, “A bill to require artificial intelligence chatbots to implement age verification measures and make certain disclosures, and for other purposes,” leading a group of senators who hope to curb chatbots’ abilities to have sexually explicit conversations with children, or counsel them to kill themselves or others.
According to a report from Roll Call, the bipartisan bill would enshrine criminal penalties for companies allowing chatbots to engage in the prohibited conversations with kids, and would impose age verification requirements on so-called AI chatbots.
Lawmakers appear to have gotten wise to Silicon Valley’s tactics; having seen the damage social media has caused, they are presumably in no rush to explain to the nation’s parents why ChatGPT should be allowed to recommend the right noose for hanging oneself simply because Sam Altman wants to accrue more wealth.
“They are the richest companies in the history of the world,” says Hawley of the tech lords, “and they live by a motto that I think we’re all now all too familiar with, ‘move fast and break things.’ But increasingly, those things are our children.”
In addition to imposing potential punitive measures, the bill would require chatbots to make a series of disclosures at regular intervals during a conversation, reminding users that they are not licensed lawyers, therapists, doctors, priests or financial advisers – all of which, says Hawley, they have claimed to be at some point.
“The chatbot would then be required to disclose that it doesn’t provide medical, legal, financial or psychological services and that users should consult licensed professionals.”
If it becomes law, the bill would mark a further shift in the U.S. administration’s position on AI regulation, which the administration earlier this year sought to preempt with a proposed decade-long moratorium.
U.S. soon to find out if AI is covered by First Amendment
Connecticut Senator Richard Blumenthal, a co-sponsor of the bill, expresses anger “because we could have achieved by now some of the safeguards in this bill if Big Tech were not opposing it with armies of lawyers and lobbyists, millions and millions of dollars.”
He’s likely to get angrier: NetChoice, the largest and most active of Silicon Valley’s legal militias, with backers including Meta, Google and Amazon, has already called the age verification requirement for LLM chatbots a “terrible policy and a violation of fundamental constitutional rights.” It continues to use the free speech argument based on the U.S. First Amendment as its primary weapon.
Hawley calls that argument “absurd,” noting that social media platforms have often claimed they are not publishers and therefore not legally accountable for what goes on their sites, while arguing in other cases that they have First Amendment protection for what they publish.
Character.AI to ban open-ended conversations for users under 18
In the case of LLM chatbots, the content in question is increasingly revealed to be intimate, misleading and, in some cases, fatal. To date, the most high-profile case has been that of Adam Raine, the teenager who, his parents say, was counseled by ChatGPT to hang himself. Their lawsuit against ChatGPT creator OpenAI alleges wrongful death, design defects and failure to warn of risks associated with the LLM tool.
Although ChatGPT is the most widely used chatbot, it’s not the only one counseling kids to end their lives. Character.AI, a generative AI chatbot service where users can customize the characters they interact with, has been served with at least four lawsuits from parents who blame the tool for encouraging their teens’ suicides.
Now, the company has announced a prohibition on open-ended chats for users under 18, set to take effect on November 25. An announcement from the company says it’s making the changes to its under-18 platform “in light of the evolving landscape around AI and teens.”
“We have seen recent news reports raising questions, and have received questions from regulators, about the content teens may encounter when chatting with AI and about how open-ended AI chat in general might affect teens, even when content controls work perfectly.”
According to the Guardian, an “age assurance functionality” will ensure “users receive the right experience for their age”.
66.6% of UK, US parents concerned about kids’ safety online
Parents are not just concerned – they’re calling for stronger age assurance measures. A new study from More in Common, a UK think tank devoted to tackling polarisation and division in Western societies, confirms as much. A key finding of the research, which analyzed data from conversations with almost 10,000 parents across the US, UK, France, Poland and the Netherlands, is that around nine in ten parents across countries are concerned about children’s online safety.
“For many parents that concern is particularly severe. Two thirds of British and American parents are very concerned, and half of French, Polish and Dutch parents are very concerned about children’s online safety. Most parents think that politicians and tech companies aren’t taking children’s online safety seriously enough.”
The data shows “strong support from parents for new laws and measures from tech companies to make being online safer for children – action on algorithms, better age verification on social media and new measures to detect and remove explicit images of children command strong support among parents.”
The issue is a rare bipartisan political win, spanning the full political spectrum across the globe: more than three in five parents in the EU support putting more responsibility on politicians and tech companies, while over half of American parents hold the same view.
A noteworthy finding in light of NetChoice’s endless First Amendment arguments is that most parents prioritize online safety safeguards ahead of privacy or free speech considerations. Most parents, it turns out, draw the line at robots that take away their children.