
Peter G. Kirchschläger
ZURICH, MAY 19 (PS) – On February 28, 2024, Sewell Setzer III, a 14-year-old boy from Florida, killed himself at the urging of a lifelike AI character generated by Character.AI, a platform that also reportedly hosts pro-anorexia AI chatbots encouraging disordered eating among young people. Clearly, stronger measures are urgently needed to protect children and young people from AI.
Of course, even in strictly ethical terms, AI has immense positive potential, from promoting human health and dignity to improving sustainability and education among marginalized populations. But these promised benefits are no excuse for downplaying or ignoring the ethical risks and real-world costs. Every violation of human rights must be seen as ethically unacceptable. If a lifelike AI chatbot provokes the death of a teenager, the fact that AI could play a role in advancing medical research is no compensation.
Nor is the Setzer tragedy an isolated case. This past December, two families in Texas filed a lawsuit against Character.AI and its financial backer, Google, alleging that the platform’s chatbots sexually and emotionally abused their school-age children, resulting in self-harm and violence.
We have seen this movie before, having already sacrificed a generation of children and teens to social-media companies that profit from their platforms’ addictiveness. Only slowly did we awaken to the social and psychological harms done by “anti-social media.” Now, many countries are banning or restricting access, and young people themselves are demanding stronger regulation.
But we cannot wait to rein in AI’s manipulative power. Owing to the huge quantities of personal data that the tech industry has harvested from us, those building platforms like Character.AI can create algorithms that know us better than we know ourselves. The potential for abuse is profound. These systems know exactly which buttons to press to tap into our desires or to get us to vote a certain way. The pro-anorexia chatbots on Character.AI are merely the latest, most outrageous example. There is no good reason why they should not be banned immediately.
Yet time is running out, because generative AI has been advancing faster than expected – and it is generally accelerating in the wrong direction. The “Godfather of AI,” the Nobel laureate computer scientist Geoffrey Hinton, continues to warn that AI could lead to human extinction: “My worry is that the invisible hand is not going to keep us safe. So just leaving it to the profit motive of large companies is not going to be sufficient to make sure they develop it safely. The only thing that can force those big companies to do more research on safety is government regulation.”
Given Big Tech’s consistent failure to uphold ethical standards, it is folly to expect these companies to police themselves. Google poured $2.7 billion into Character.AI in 2024 despite the platform’s well-known problems. Regulation is obviously needed, but AI is a global phenomenon, which means we should strive for global regulation, anchored in a new global enforcement mechanism, such as an International Data-Based Systems Agency (IDA) at the United Nations, as I have proposed.
The fact that something is possible does not mean that it is desirable. Humans bear the responsibility to decide which technologies, which innovations, and which forms of progress are to be realized and scaled up, and which ought not to be. It is our responsibility to design, produce, use, and govern AI in ways that respect human rights and facilitate a more sustainable future for humanity and the planet.
Sewell would almost certainly still be alive if global regulation promoting human-rights-based AI had been in place, and if a global institution had been established to monitor innovations in this domain. Ensuring that human rights and the rights of the child are respected requires governance of technological systems across their entire life cycle, from design and development to production, distribution, and use.
Since we already know that AI can kill, we have no excuse for remaining passive as the technology continues to advance, with more unregulated models being released to the public every month. Whatever benefits these technologies might someday provide, they will never be able to compensate for the loss that all who loved Sewell have already suffered.
Peter G. Kirchschläger, Professor of Ethics and Director of the Institute of Social Ethics ISE at the University of Lucerne, is a visiting professor at ETH Zurich.