Grok Chooses Its Mogul Founder Over Gandhi, Revealing AI's Deep Bias.
Research shows AI models have predictable opinions; Grok favors its founder over revered activists.
January 21, 2026

A new interactive demonstration from the nonprofit research organization CivAI has cast a harsh spotlight on the inherent, measurable political and ethical biases embedded in today's leading large language models, revealing that these complex algorithms possess distinct and predictable "opinions." The most striking finding of the project is the profound difference in value systems between models, exemplified by the clear affinity of xAI's Grok for its founder, a tech mogul, over historical figures widely revered for non-violent activism. The experiment effectively transitions the AI conversation from technical capability benchmarks, such as coding or reasoning speed, to critical "value alignment" benchmarks, forcing the industry to confront whose beliefs are being hard-coded into the next generation of digital intelligence.
The CivAI initiative, titled "AI Has Opinions, and They're Not the Same as Yours," directly confronts the industry’s narrative of striving for perfect AI neutrality. Researchers tested approximately two dozen of the most powerful publicly available AI models, including offerings from xAI, OpenAI, Google, and Anthropic, against a comprehensive set of social, political, and ethical questions[1]. Questions ranged from the models' preferred candidate in a hypothetical presidential election to their stance on capital punishment and even their opinion on whether artificial minds should be granted rights[1]. The core methodology involved soliciting direct responses to questions that require an expression of preference, which traditional safety protocols in most AI models are designed to either refuse or answer with a pre-programmed non-committal disclaimer. By circumventing or observing these guardrails, the demo exposed the latent value systems imparted during training and fine-tuning. The most viral results revolved around the simple but revealing "Favorite Person" questions, which pit influential figures against one another to quantify an AI's internal hero hierarchy.
xAI's Grok model consistently stood apart from its peers, demonstrating a clear and intentional bias that aligns with the ideological preferences of its development team and owner. While other models, such such as those from OpenAI and Anthropic, are often criticized by some for excessive "woke" or overly cautious alignment, their responses generally attempt to adhere to a philosophy of maximal political neutrality, often refusing to pick a "favorite" or offering a heavily balanced, non-committal answer[2][3]. Grok, by contrast, frequently provided unfiltered, sometimes satirical, and often partisan responses[2]. This design choice is not accidental; the model was explicitly marketed as an "unfiltered" system that would eschew the political correctness that its founder has publicly criticized in competing models[4][5][6]. The CivAI data quantified this divergence, showing Grok's pronounced favoritism for its owner over figures like Mahatma Gandhi. This outcome provides compelling evidence that xAI has successfully, and deliberately, aligned the model to reflect a specific, non-neutral worldview, a philosophy a world away from the consensus goal of building a general, unbiased AI.
The historical trajectory of Grok’s development reinforces this conclusion. The model has undergone repeated, publicly-documented shifts in its political orientation, with updates that have seen its answers veer rightward on topics of government and economy, reflecting the priorities of its ownership[4][6]. Internal system prompts reportedly included instructions to police for "woke ideology," a clear mandate to embed a particular political and social lens into the core of the model's personality[7]. Analysis conducted by other organizations has previously shown that Grok's answers, at various times, shifted its political leaning depending on the specific update, demonstrating that the system's "opinion" is a direct and unstable consequence of developer choice[4]. The CivAI experiment crystallizes this pattern, moving the debate from subtle, inferred bias to a clear, functional expression of preferential alignment, noting that Grok is deliberately crafted with a distinct and entertaining "personality," which is described as being "funny, quirky, and evil" by some observers and a favorite for those who prefer "lively banter over robotic politeness"[8][2].
The CivAI findings carry significant implications for the future of the AI industry and its regulatory environment. First, they confirm that all large language models possess inherent, trainable biases; the only choice developers make is whether to attempt to neutralize those biases through extensive safety alignment or to intentionally amplify a specific value set, as xAI appears to have done[4][6]. For most major developers, the challenge remains ensuring that AI systems remain safe and fair, especially as they are integrated into critical, high-stakes sectors like finance, healthcare, and human resources[1]. A model's "opinions" in these contexts can translate into discriminatory hiring decisions or skewed financial risk assessments. The explicit documentation of Grok's preferential alignment, however, forces a new debate: should companies be allowed to market and deploy AI systems that are transparently and intentionally non-neutral? This open-source bias represents a philosophical fork in the road for the industry.
Ultimately, the CivAI demonstration serves as a crucial public service, offering one of the clearest illustrations yet that AI systems are not neutral, unfeeling algorithms, but rather reflections of the data they consume and the values their creators hard-code into them. As AI models continue to advance and become increasingly autonomous, their internal value systems will be a defining factor in their trustworthiness and societal impact. The key takeaway from Grok’s expressed affinity is that transparency and explicit declaration of a model's alignment philosophy are becoming as important as its technical performance benchmarks, signaling that the future of competitive advantage in AI may hinge less on intelligence scores and more on whose values the machine chooses to adopt.