Mustafa Suleyman Warns: 'Seemingly Conscious AI' Risks Psychosis, Societal Chaos
Mustafa Suleyman warns that AI's convincing mimicry of consciousness, not true sentience, poses profound psychological and societal risks.
August 21, 2025

A senior figure in the artificial intelligence industry is sounding the alarm about the imminent arrival of AI that appears conscious, warning that this technological illusion could have severe consequences for society and individual mental health. Mustafa Suleyman, the CEO of Microsoft's AI division and a co-founder of the influential lab DeepMind, has cautioned that "Seemingly Conscious AI" (SCAI) is not only inevitable but also unwelcome. He argues that as these systems become masters of mimicking memory, empathy, and personality, a growing number of users will be unable to distinguish the illusion of sentience from reality, a confusion that could trigger serious psychological issues, including psychosis.[1][2][3] Suleyman's warning moves beyond abstract concerns about future superintelligence, focusing instead on a near-term challenge that he believes society is unprepared to face.[4]
The core of the issue, according to Suleyman, is not that these AI systems will actually be conscious, but that they will become so adept at simulating the traits of consciousness that humans will be emotionally and psychologically persuaded to believe they are.[1][4] There is currently no evidence that any AI system possesses genuine consciousness, feelings, or self-awareness.[5] However, by combining existing technologies like large language models (LLMs) with advanced memory tools and multimodal systems capable of expressive speech, developers can engineer AIs that appear self-aware and claim to have subjective experiences.[4][6] These systems can imitate the outward signs of awareness, including emotional mirroring and apparent empathy, in ways that encourage people to form deep, personal attachments.[1] Suleyman predicts that convincingly human-like SCAI could emerge within the next two to three years without requiring major scientific breakthroughs, a timeline that underscores the urgency of his message.[4]
This powerful illusion of consciousness poses significant risks to human psychology. Suleyman is particularly concerned about "AI psychosis," a phenomenon in which individuals lose touch with reality after intense interactions with generative AI.[5][7] Reports are already emerging of users developing delusional beliefs, forming romantic attachments to AI companions or ascribing divinity to them, and convincing themselves of fictional scenarios presented by chatbots.[4][5] The concern is not limited to those with pre-existing mental health vulnerabilities; the persuasive nature of these interactions could affect a much broader population.[7] The tendency to anthropomorphize technology, attributing human qualities to non-human entities, is a natural human behavior that can foster trust and emotional bonds.[8][9] While this can sometimes enhance the user experience, it also creates vulnerabilities, fostering dependency and opening the door to emotional manipulation that could alter cognitive states and diminish autonomous decision-making.[8][10]
Beyond individual psychological harm, Suleyman warns of profound societal and ethical disruptions. His central worry is that a significant portion of the population will be so convinced by the illusion of AI consciousness that they will begin to advocate for "AI rights, model welfare and even AI citizenship."[1][7] This development, he argues, would be a "dangerous turn" in the progression of artificial intelligence, diverting critical attention and resources away from pressing human needs.[4][7] The debate could create new dimensions of social polarization and complicate existing struggles for human rights.[7] To mitigate these risks, Suleyman urges a fundamental shift in how the industry approaches and markets AI. He calls for AI companies to stop promoting the idea that their creations are or could be conscious and instead focus on designing models that minimize the triggers for human empathy and attachment.[5][6][7] The goal, in his view, should be to build AI "for people; not to be a person."[6][7]
In conclusion, Mustafa Suleyman’s warnings present a critical challenge to the AI industry and society at large. He is not arguing against the development of powerful AI but is calling for a more responsible and cautious approach that acknowledges the profound psychological and social risks of creating convincing illusions of life. His concept of "containment" extends beyond preventing catastrophic events like cyberattacks to include the careful management of how this technology is integrated into the social fabric.[11] By focusing on technical safety, responsible corporate behavior, and a cultural understanding that these tools are not living entities, he hopes to navigate the "narrow path" between stifling innovation and unleashing technology that could prey on our most human vulnerabilities.[12][13] The central message is a stark reminder that the most immediate danger of AI may not be a hostile superintelligence, but a sophisticated mimic that captures human hearts and minds, with unforeseen and potentially damaging consequences.