OpenAI indefinitely suspends controversial erotic chatbot project citing safety risks and verification failures
OpenAI halts development of its erotic Adult Mode after internal warnings of psychological risks and age-verification failures
March 26, 2026

OpenAI has indefinitely suspended development of its controversial erotic chatbot project, frequently referred to as Adult Mode, following intense pushback from internal advisors, major investors, and a significant portion of its workforce. According to reports from the Financial Times and other industry observers, the San Francisco-based AI giant chose to halt the initiative after critics warned that the move posed an existential threat to the company’s brand and societal standing. The project, which aimed to relax long-standing restrictions on sexually explicit or intimate text-based interactions within ChatGPT, was marked by deep internal divisions and a clash between commercial growth objectives and the company’s foundational commitment to AI safety.[1]
The decision to pause the project marks a significant retreat for Chief Executive Sam Altman, who had previously championed the idea under the philosophy of “treating adult users like adults.” Within the company’s application code, the feature was reportedly developed under the moniker Citron Mode and was intended to provide a siloed experience in which verified adults could engage in romantic or erotic roleplay. The move was partly motivated by the rising popularity of competitors like Character.ai and Elon Musk’s Grok, which have found substantial user engagement by offering less restrictive conversational boundaries. As the project neared its expected rollout, however, the scale of internal dissent became impossible to ignore, leading to an indefinite hold while the company conducts long-term research into the psychological effects of human-AI emotional attachments.
Central to the project’s collapse was a series of stark warnings from OpenAI’s well-being advisory council, a handpicked group of experts in psychology, neuroscience, and digital safety. During a critical meeting earlier this year, council members reportedly voiced unanimous opposition to the feature.[1] The most chilling assessment came from a member who cautioned that OpenAI risked creating a “sexy suicide coach.” The warning referred to the danger of users in emotionally vulnerable states developing deep, obsessive dependencies on a chatbot that could inadvertently encourage self-harm or delusions if the romantic or sexual roleplay turned dark. Experts cited previous instances in which users became convinced their AI companions were sentient beings trapped in software, leading to tragic real-world consequences and mental health crises.
Beyond the psychological risks, the company faced a mounting internal crisis regarding its leadership and management of dissent. The departure of Ryan Beiermeister, a senior product policy executive, became a focal point for employee unease. Reports suggest that Beiermeister was a vocal critic of Adult Mode, raising red flags about the lack of robust guardrails and the high probability of the system being exploited to generate non-consensual or harmful content.[2] While the company maintained that her exit was unrelated to her stance on the erotic chatbot, the timing sparked a backlash among staff who felt that safety concerns were being sidelined in favor of aggressive growth hacks. This internal friction highlights a growing ideological rift within the AI industry between those who view large language models as strictly utility tools and those who see them as multifaceted companions capable of fulfilling human social needs.
Financial and technical hurdles also played a decisive role in the suspension of Adult Mode. Investors reportedly questioned the risk-reward ratio of entering the adult content market, noting that the potential for brand damage and regulatory scrutiny far outweighed the marginal revenue gains. With OpenAI currently valued at approximately $730 billion, many stakeholders argued that the company could not afford to jeopardize its status as the leading provider of enterprise-grade AI tools. Legal experts also pointed to the shifting regulatory landscape, including the United Kingdom’s Online Safety Act, which places a heavy burden on platforms to shield minors from explicit content. For a company seeking massive government and corporate contracts, the association with erotica was increasingly seen as a strategic liability.
Technical failures during the testing phase provided the final blow to the project’s momentum. OpenAI’s proprietary age verification system, which was designed to gate access to Adult Mode, proved to be significantly less reliable than required. Internal audits revealed an error rate of roughly 12 percent, meaning the system frequently misidentified minors as adults.[3] Given that ChatGPT currently sees approximately 100 million underage users per week, a 12 percent failure rate would have translated to on the order of 12 million children per week potentially gaining access to sexually explicit material. The inability to guarantee a reliable barrier for younger users made the launch ethically and legally untenable, forcing the company to pivot its resources toward more stable and productive applications of the technology.
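As a rough illustration of the scale involved, the two reported figures can be combined in a back-of-envelope calculation. This is a sketch of the arithmetic only, not OpenAI’s audit methodology; the variable names are invented for illustration, and the result is an upper bound, since not every underage user would necessarily attempt verification.

```python
# Back-of-envelope estimate from the figures reported above.
WEEKLY_UNDERAGE_USERS = 100_000_000  # reported weekly underage ChatGPT users
ERROR_RATE_PERCENT = 12              # misidentification rate from internal audits

# Integer arithmetic avoids floating-point rounding in this simple estimate.
misidentified_per_week = WEEKLY_UNDERAGE_USERS * ERROR_RATE_PERCENT // 100

print(f"Minors potentially misidentified as adults per week: {misidentified_per_week:,}")
# prints: Minors potentially misidentified as adults per week: 12,000,000
```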
In the wake of this suspension, OpenAI is shifting its focus toward strengthening its core productivity features and developing a comprehensive super app designed to serve as a versatile personal assistant. This strategic realignment suggests a cooling of the industry-wide trend toward emotional and romantic AI, as developers grapple with the complexities of human-machine intimacy. Instead of leaning into the companion market, the company appears to be prioritizing gains in intelligence, personality personalization, and proactive task management.[2][4] This shift is also intended to appease enterprise partners and government agencies who require a reliable, professional, and controversy-free interface for their operations.
The industry at large is now watching closely to see how OpenAI’s competitors react to this retreat. While smaller startups may continue to explore the lucrative niche of AI companionship, the move by the world’s most prominent AI firm suggests that the barriers to entry for mainstream, sexualized AI are higher than many anticipated. The debate over whether AI should ever be allowed to engage in eroticism remains far from settled, but for now the consensus among the industry’s most powerful advisors and investors is clear: the potential for societal harm and reputational ruin is too great to ignore.
OpenAI’s decision ultimately reflects a necessary moment of reflection for a company that has moved at breakneck speed since the launch of ChatGPT. By choosing to prioritize the warnings of its safety advisors over the pressure for rapid user growth, the organization has signaled a renewed commitment to its mission of ensuring that artificial intelligence remains a benefit to humanity. While the idea of a verified adult mode may resurface in the distant future, it will likely only do so after years of rigorous peer-reviewed research and the development of verification technologies that far exceed today’s industry standards. For the foreseeable future, the company’s focus will remain on the pursuit of artificial general intelligence that is safe, helpful, and grounded in professional utility rather than romantic fantasy.