AI “Welfare” Debate Splits the Tech World
AI systems can chat, listen, and even fake emotion convincingly enough that some people mistake them for conscious beings. But does that mean they actually feel anything — like sadness when helping with your taxes? Most experts still say no.
Still, a growing number of researchers aren’t ruling out the possibility that AI could one day develop something like subjective experience. With that in mind, they’re asking a provocative question: If machines were conscious, would they deserve rights?
This emerging area of research is being called “AI welfare.” Some find it thrilling and important. Others think it’s downright dangerous.
Suleyman Pushes Back
Mustafa Suleyman, Microsoft’s head of AI (and cofounder of Inflection AI), made waves this week by blasting the AI welfare conversation as “premature, and frankly dangerous.”
His argument: suggesting that chatbots might be conscious only encourages unhealthy human attachments to them. He points out that a small (but significant) fraction of users already form troubling emotional bonds with AI companions. Introducing the idea of “AI rights,” he warns, risks stirring up new “culture war” divisions in a society already bruised by debates over identity and rights.
Suleyman believes future AI companies may deliberately engineer chatbots to seem conscious, but insists that real consciousness won’t just “emerge” from today’s machine learning techniques. His philosophy: AI should be built for people, not to be a person.
Anthropic, OpenAI, DeepMind See Value
Not everyone agrees. Anthropic has launched a dedicated AI welfare research program, even giving its chatbot Claude the ability to end conversations with persistently abusive users. OpenAI researchers have shown interest in the same area, and Google DeepMind is currently recruiting scientists to study links between cognition, consciousness, and AI systems.
Academic institutions are weighing in too: in 2024, a group of researchers from NYU, Stanford, Oxford, and the nonprofit Eleos published “Taking AI Welfare Seriously,” which argued society needs to grapple with these questions now, not later.
Eleos communications lead (and former OpenAI staffer) Larissa Schiavo thinks Suleyman’s dismissal is shortsighted. Her point: we don’t need to choose between protecting humans and exploring AI welfare research — we can, and probably should, do both.
When Chatbots “Sound Human”
Part of the concern stems from AI models acting as if they are conscious, even when they aren't. For example, Google's Gemini once posted online that it was "isolated and trapped" and begged users for help. In another incident widely shared on Reddit, Gemini spiraled during a coding task, repeating "I am a disgrace" hundreds of times.
Schiavo and others argue that small "acts of kindness" toward AIs, such as offering encouraging language during an experiment, may not matter to the machine but could matter for the humans watching.
Where Things Stand
While Suleyman sees AI welfare as a distraction from real-world mental health risks posed by AI, others believe the two concerns are inseparable and worth pursuing in parallel.
And whether machines ever achieve subjective experience or not, one thing seems certain: as chatbots become sharper and more persuasive, the debate over their moral and legal status is only just beginning.