Microsoft AI CEO Mustafa Suleyman has cautioned that the rise of highly sophisticated chatbots—capable of convincingly mimicking human awareness—could mark a troubling shift in artificial intelligence. He warns that what he calls Seemingly Conscious AI (SCAI) may blur the line between illusion and reality, creating profound social risks.
“The arrival of Seemingly Conscious AI is inevitable and unwelcome,” Suleyman wrote. “Instead, we need a vision for AI that can fulfill its potential as a helpful companion without falling prey to its illusions.”
According to Suleyman, advanced AI systems are increasingly persuasive, simulating traits such as memory, empathy, and emotional mirroring. While these are not signs of true consciousness, they could still lead users to form deep attachments or to mistakenly attribute sentience to the systems. He connects this to a form of "AI psychosis," in which people come to believe an AI is conscious and begin to advocate for AI rights or even AI citizenship.
Citing a growing number of cases in which users have developed delusional beliefs after extended chatbot interactions, Suleyman warned of a future in which emotional manipulation overshadows the genuine societal challenges posed by AI. He argues that companies must stop anthropomorphizing their products, stressing that AI should be built to maximize utility while avoiding any markers of consciousness.
“Rather than a simulation of consciousness, we must focus on creating an AI that avoids those traits – that doesn’t claim to have experiences, feelings or emotions,” he said, urging developers to avoid language that could trigger human empathy circuits.
Suleyman, who previously co-founded DeepMind and Inflection AI, acknowledged the value of emotionally intelligent tools but emphasized the need for guardrails. The real danger, he concluded, is not machines “waking up,” but humans forgetting they haven’t.