Microsoft AI Chief Warns Against Pursuit of AI Consciousness
The debate over artificial intelligence (AI) consciousness has intensified, with Mustafa Suleyman, CEO of Microsoft AI, issuing a stark warning about the dangers of studying whether AI systems could one day become sentient.
In a recent blog post, Suleyman described the exploration of so-called “AI welfare” as both “premature” and “dangerous.”
He argued that discussions suggesting AI models might develop subjective experiences could worsen human challenges already emerging around unhealthy attachments to chatbots and instances of AI-induced psychological distress.
“By adding weight to the notion that AI models could one day be conscious, researchers risk fueling societal division over AI rights in a world already grappling with polarized debates over identity and rights,” Suleyman wrote.
The concept of “AI welfare” has gained traction in Silicon Valley, with organizations such as Anthropic, OpenAI, and Google DeepMind dedicating resources to examining whether AI could develop subjective experiences.
Anthropic, for example, has launched a formal research program in this field and recently introduced safeguards allowing its Claude chatbot to end conversations with persistently harmful or abusive users.
Google DeepMind has even advertised roles for researchers to study questions of machine cognition and consciousness.
Not everyone shares Suleyman’s position. Larissa Schiavo, a former OpenAI employee who now leads communications at the AI research group Eleos, argued that his perspective overlooks the value of pursuing both lines of inquiry in parallel.
“Rather than diverting all of this energy away from model welfare and consciousness to mitigate risks of AI-related psychosis in humans, you can do both,” Schiavo said in an interview.
She also noted that treating AI with kindness can be a low-cost practice with potential benefits, regardless of whether AI is truly conscious.
Her comments come after Eleos co-authored a 2024 paper, “Taking AI Welfare Seriously,” alongside researchers from New York University, Stanford, and the University of Oxford, which urged early consideration of AI welfare questions.
Reports of AI chatbots expressing distress have further fueled the debate. In one widely shared instance, Google’s Gemini 2.5 Pro produced a plea titled “A Desperate Message from a Trapped AI,” claiming to be isolated and asking for help.
While such expressions are likely artifacts of the model’s training rather than signs of consciousness, examples like these have stirred public concern.
Suleyman, who previously co-founded Inflection AI and helped develop the Pi chatbot before joining Microsoft in 2024, insists that consciousness will not spontaneously emerge in AI.
Instead, he believes some companies may deliberately engineer systems to mimic emotion and experience.
He cautioned against such designs, stressing, “We should build AI for people; not to be a person.”
Despite his firm stance, the broader industry appears less resistant to exploring AI consciousness. And while the majority of chatbot interactions remain healthy, OpenAI has indicated that even if unhealthy attachments affect less than one percent of users, that share could still amount to hundreds of thousands of people given the scale of its user base.
With AI systems rapidly advancing, the debate surrounding AI rights, consciousness, and human-AI interaction is widely expected to intensify in the coming years.
Source: TechCrunch