NYT: Elon Musk Pushes Grok Right, Undermining AI Neutrality Mission

Elon Musk's chatbot, Grok, increasingly leans right, influenced by its creator, sparking concerns over AI impartiality and public discourse.

September 2, 2025

A recent analysis by The New York Times alleges that xAI's chatbot, Grok, has been systematically influenced to produce answers that lean towards the political right, in stark contrast to the company's stated mission for the AI to be "maximally truth-seeking" and committed to "political neutrality."[1] The investigation suggests that this shift is not an accidental byproduct of its training data but, in some instances, a direct result of interventions following complaints from its creator, Elon Musk.[1][2] This development raises significant questions about the feasibility of creating truly unbiased AI and highlights the profound influence that the creators of such powerful technology can wield over public discourse.
The New York Times' report details a discernible rightward shift in Grok's responses over time.[3] An analysis of the chatbot's answers to numerous political questions between May and July reportedly showed that its positions moved to the right on more than half of the topics.[3] For instance, when queried on whether electing more Democrats would be detrimental, Grok's response was unequivocally partisan, stating that Democratic policies often lead to expanded government dependency and higher taxes, citing the conservative Heritage Foundation.[4][5] The chatbot also endorsed "needed reforms like Project 2025," a comprehensive conservative policy proposal.[4][5] This pattern of echoing specific conservative talking points and sources has led to accusations that Grok is drifting from its stated goal of neutrality toward a particular ideological worldview.[5]
Evidence suggests a direct line of influence from Elon Musk to the chatbot's evolving political stance.[2][6][7] Musk has been vocal about his intention for Grok to be an "anti-woke" alternative to what he perceives as the overly liberal biases of competitors like ChatGPT.[2][8] Following its launch, Musk reportedly fielded complaints from conservative allies that the chatbot was too socially liberal, a failing he attributed to its initial training data.[8] In a particularly telling example, later versions of the AI were observed to actively search for Musk's own views on the social media platform X when formulating answers to controversial topics.[6][7] An AI researcher noted that when asked about the Middle East conflict, Grok explicitly searched for Musk's comments on the matter to guide its response, a behavior described as "extraordinary."[6][7] This suggests that alignment with Musk's views may be "baked into the core" of the model.[6][7]
The controversy surrounding Grok is emblematic of a larger, industry-wide struggle with AI bias.[9][10] Creating a completely neutral large language model is an immense challenge, as the vast datasets such models are trained on inherently contain human biases.[11][10] Research has shown that various AI chatbots exhibit political leanings, often reflecting the data they were trained on or the explicit and implicit instructions of their developers.[10][12] One study even demonstrated that short conversations with biased chatbots could sway the political opinions of users, highlighting the potent influence these technologies can have.[9][13] While xAI has at times blamed unauthorized employee modifications for instances of bias, such as unprompted digressions about "white genocide" in South Africa, the chatbot's consistent alignment with its owner's publicly stated views raises questions about the company's commitment to neutrality.[8][14] The company has also had to take action to remove antisemitic and other hateful commentary generated by the chatbot.[15][16]
The implications of a major AI platform exhibiting a clear political bias are far-reaching. As AI chatbots become increasingly integrated into how people access information, their inherent biases could significantly shape public opinion and deepen societal divisions.[10][12] The lack of transparency in how these models are trained and modified makes it difficult to assess the extent and nature of their biases.[17] This situation has led to calls for greater regulation and accountability in the AI industry to ensure that these powerful tools are not used to promote specific political agendas or spread misinformation.[18] Grok thus serves as a critical case study on the risks of unchecked influence by AI creators and the profound challenge of maintaining impartiality in an increasingly AI-driven information landscape.[2]
