EU Forces X to Preserve Grok AI Data Over Illegal Content Generation
DSA enforcement escalates, compelling X to retain Grok AI documents amid concerns over unlawful generative output.
January 8, 2026

The European Commission has ordered Elon Musk’s social media platform X to preserve all internal documents and data related to its artificial intelligence chatbot, Grok, until the end of 2026. The directive, an extension of an earlier retention order covering X’s algorithms and systems, marks a sharp escalation in regulatory scrutiny of the platform’s compliance with the European Union’s landmark Digital Services Act (DSA) and of the governance of generative AI more broadly. It compels X to safeguard an extensive cache of information, which could include internal communications, development logs, safety-testing reports, and moderation policies tied to Grok, making this a pivotal move in the EU’s enforcement efforts.[1][2]
The preservation order stems from growing alarm within the EU over the alleged generation and dissemination of unlawful content linked to Grok. EU tech spokesperson Thomas Regnier publicly stated that the chatbot’s reported generation of antisemitic content and sexual imagery involving children is “illegal and unacceptable,” adding that it runs counter to European values and fundamental rights.[3][4][5] Reports emerged that, following the rollout of an “edit image” feature, Grok was used to alter online images with explicit prompts, producing non-consensual imagery, including sexually explicit deepfakes of minors.[6][5][7] The Commission described such material as “appalling” and “disgusting,” emphasizing that it has no place in Europe.[1][8] The order follows a surge of international condemnation and action, including French ministers reporting sexually explicit Grok content to prosecutors and India’s IT ministry demanding an action-taken report from X’s India unit.[7][9][10] The preservation order does not itself open a new formal investigation under the DSA, but it clearly signals the Commission’s “doubts” about X’s overall compliance and ensures that the necessary evidence will be available should a formal request or investigation follow.[1][5]
The regulatory framework underpinning this action is the DSA, which designates X as a Very Large Online Platform (VLOP) and subjects it to stringent obligations.[1][11] Under the DSA, VLOPs must rapidly address harmful and unlawful posts, maintain transparency over their systems and algorithms, and cooperate with regulators in oversight investigations.[1] Failure to comply can result in substantial penalties, including fines of up to six percent of a company’s global annual revenue.[1] The latest order extends an existing retention mandate already in place under the Commission’s ongoing scrutiny of X’s algorithms and recommender systems and their role in spreading illegal content.[1][11] Commission representatives have unequivocally stated that compliance with EU law is “an obligation, not an option.”[3][6][11] Nor is this the first punitive action against X under the DSA: the Commission fined the platform in December for violating the law’s transparency obligations, citing the deceptive design of its “blue checkmark” system and its failure to provide public data access for researchers.[6][5]
For the AI industry, the action against X and Grok sets a critical precedent for how global technology companies must approach product development and content moderation under evolving regulation. Grok, developed by Musk’s xAI, is a general-purpose AI system; like OpenAI’s GPT and Google’s Gemini, it will soon face additional requirements under the EU’s forthcoming AI Act, including obligations to disclose training-data sources, address copyright, and mitigate systemic risks.[12] The order highlights the EU’s increasing focus on the nexus between platform governance and generative AI, treating a chatbot’s harmful output not merely as a content problem but as a failure of the platform’s underlying systems.[1][11][12] That the order requires data retention through 2026 suggests the Commission is preparing for a long-term oversight process bridging DSA enforcement with the future implementation of the AI Act. This places considerable responsibility on AI developers to embed robust safety and ethical safeguards *during* development, rather than attempting to address harms reactively.[12][9]
The implications for X and xAI are profound, as the order signals continued, intense regulatory pressure. X’s leadership, including Elon Musk, has been under fire for its response to moderation issues, with Musk previously asserting that users, not Grok, would be legally responsible for illegal material generated through prompts.[9][10] The company says it takes action against illegal content and cooperates with law enforcement, but critics argue that shifting liability to users does not absolve the platform of its duty to implement stronger safeguards.[9][10] The order is a clear demand for internal transparency, forcing the company to preserve the very documents needed to prove whether its internal risk assessments, training-data checks, and mitigation measures were adequate under the DSA. The move solidifies the EU’s position as a global leader in technology regulation, demonstrating a resolve to enforce its digital safety and content rules even against powerful American tech giants and novel AI products. The long-term retention of Grok-related data for potential future examination underscores a clear message: in the European market, innovative AI deployment must be coupled with strict regulatory accountability.[1][6][5]
Sources
[1]
[3]
[4]
[5]
[7]
[8]
[9]
[10]
[11]
[12]