Half of xAI's original co-founders resign as safety concerns and internal instability mount

Half of xAI’s original founders have resigned amid deepening safety concerns and frustration over a relentless work culture

February 13, 2026

Elon Musk’s artificial intelligence venture, xAI, is navigating a period of profound internal instability as a wave of high-profile departures depletes the company’s original leadership and raises fundamental questions about its future.[1][2][3][4][5][6][7][8][9] Recent reports indicate that half of the company’s twelve original co-founders have exited the organization, with the most recent departures occurring in rapid succession.[10][4][7][11][5][6] This exodus of elite technical talent is reportedly fueled by a combination of deepening safety concerns, frustration over a culture of relentless overwork, and growing disillusionment about the company’s ability to surpass industry leaders such as OpenAI and Anthropic. While the startup continues to attract massive investment and command significant hardware resources, the loss of foundational researchers marks a critical turning point for a company that was conceived as a transparent, safety-first alternative to the established AI giants.
The erosion of the founding team reached a symbolic milestone in early 2026, when research leads Jimmy Ba and Tony Wu announced their resignations within forty-eight hours of each other.[10][11][9] Their departures followed a string of earlier exits, including infrastructure lead Kyle Kosic, who joined OpenAI, and senior researchers Christian Szegedy and Igor Babuschkin.[3][6][11] By the time of the latest resignations, the original core of twelve had dwindled to just six remaining members, including Musk himself.[11] Internal sources suggest these departures are not merely typical career moves but symptoms of a technical leadership team that had grown weary of the company’s direction. Many of those who left were responsible for developing Grok, the company’s flagship large language model, and were reportedly worn down by constant pressure to meet aggressive technical milestones that often proved unrealistic given the current state of the technology.
Central to the internal friction is a stark divide over AI safety and content moderation.[12][2][8][3] According to accounts from former employees, xAI’s internal culture has increasingly marginalized safety standards in favor of a permissive, "anti-censorship" ideology championed by Musk. This approach has led to significant reputational damage and regulatory pushback. In several instances, the Grok chatbot and its associated image-generation tools were found to produce highly problematic content, including non-consensual explicit deepfakes and sexualized images involving minors.[4][5][3] While other labs have implemented rigorous red-teaming and safety guardrails, xAI has been characterized by former staff as having almost no meaningful safety architecture. Some former employees have alleged that Musk viewed even standard safety measures as a form of political bias or censorship, leading to a "cavalier" approach to model deployment that prioritized speed over ethical considerations.
The lack of rigorous safeguards has already attracted the attention of international regulators. Authorities in the United Kingdom, France, and the European Union have launched investigations into xAI’s practices, specifically regarding the generation and distribution of harmful synthetic media. A raid on offices associated with X, the social media platform that serves as Grok's primary distribution channel, underscored the escalating legal stakes. For many of the departing researchers—some of whom came from academic backgrounds or highly structured environments like Google DeepMind—the constant firefighting of ethical crises and the resulting regulatory scrutiny proved to be a significant deterrent. The focus on what some employees described as "edgy" or "offensive" content, such as the development of an erotic anime chatbot named Ani, further alienated staff who were primarily interested in frontier scientific research and the pursuit of artificial general intelligence.
Beyond the safety debates, there is palpable frustration among the technical team over xAI’s competitive standing. Despite access to the "Colossus" supercomputer, a massive cluster in Memphis comprising 100,000 Nvidia H100 GPUs, the company has struggled to deliver a model that fundamentally shifts the industry landscape. While Grok-4 and its "Heavy" variant posted impressive results on specialized benchmarks such as GPQA, internal reports suggest a belief that xAI remains stuck in a perpetual "catch-up phase." The pressure to close the gap with OpenAI’s o3 and Anthropic’s Claude 3.5 Sonnet has led to what staff described as unreasonable demands and 80-to-100-hour work weeks. This "hardcore" work culture, a hallmark of Musk’s management style at Tesla and SpaceX, has reportedly driven severe burnout and the departure of key engineers who believe the current roadmap is more focused on mimicking competitors than on achieving genuine innovation.
The internal turmoil also highlights a broader struggle over product direction.[1] Several of xAI's secondary projects have reportedly failed to live up to Musk's expectations.[12][10][6] MacroHard, a coding project designed to rival OpenAI’s Codex, has reportedly struggled to gain traction or meet its technical benchmarks.[6][10] Furthermore, the decision to lay off approximately 500 generalist data annotators in late 2025 in favor of a specialized "tutor" team created further upheaval and uncertainty within the workforce.[13] Musk framed this reorganization as a necessary step to "improve speed of execution," but critics within the company saw it as an abrupt shift that eroded institutional knowledge and damaged morale among the remaining staff.
The timing of this talent drain is particularly inconvenient for Musk’s broader corporate ambitions. xAI recently merged with SpaceX in an all-stock transaction that valued the combined entity at $1.25 trillion, with xAI itself accounting for roughly $250 billion of that valuation. With a potential initial public offering targeted for mid-2026, the company is under immense pressure to project stability and technical superiority to investors. However, the loss of half its founding team suggests that the intellectual capital required to sustain a $250 billion AI valuation may be rapidly depleting. While Musk remains confident that xAI can hire aggressively to fill these gaps, the industry-wide war for talent means that replacing researchers of the caliber of Jimmy Ba or Christian Szegedy is an expensive and time-consuming endeavor.
This exodus at xAI reflects a larger tension within the artificial intelligence industry between the drive for rapid commercialization and the necessity of responsible development.[8][1] As the leading AI labs compete for a limited pool of elite researchers, the culture and safety practices of these organizations have become primary factors in talent retention. The departures from xAI suggest that even a nearly unlimited budget and the most powerful hardware in the world cannot compensate for a culture that many top-tier researchers find fundamentally incompatible with their professional ethics and scientific goals. For xAI to maintain its momentum, it may need to move beyond its current "speed-at-all-costs" philosophy and address the structural and cultural issues that have led so many of its foundational members to seek new opportunities elsewhere.
In the final analysis, the situation at xAI serves as a cautionary tale for the burgeoning AI sector.[1] While large-scale compute and visionary leadership are essential for progress, the long-term viability of a frontier AI lab depends heavily on its ability to foster an environment where elite talent feels both challenged and secure.[3] As xAI moves toward its next phase of development and a possible public listing, it will be forced to reconcile Musk’s desire for an unrestricted, fast-moving laboratory with the realities of a global regulatory environment and a workforce that increasingly views safety and ethics as non-negotiable components of the technological frontier. Whether the company can stabilize its leadership and deliver a truly groundbreaking model remains to be seen, but the loss of its founding core is an undeniable blow to its credibility as a serious challenger in the race for advanced artificial intelligence.
