EU launches Grok AI safety probe, challenging X's DSA risk compliance.

EU escalates its X probe, targeting the platform's alleged failure to assess Grok AI's systemic risks linked to illegal content.

January 26, 2026

The European Commission has escalated its regulatory battle with the global social media and technology giant X, opening new formal proceedings under the landmark Digital Services Act, or DSA, centered on the platform's artificial intelligence chatbot, Grok. The new investigation marks a significant expansion of Brussels' enforcement efforts, putting the spotlight directly on the risk assessment and mitigation obligations that apply to generative AI technologies integrated into very large online platforms. The core concern is the platform's potential failure to manage the systemic risks associated with Grok, particularly those related to the dissemination of illegal content within the European Union, most notably manipulated sexually explicit images that may include child sexual abuse material[1][2][3][4].
This probe is not merely an inquiry into user-generated content; it is a direct challenge to the platform's procedural compliance with the DSA, Europe's sweeping legislation designed to make online platforms accountable for the content they host. The Commission is specifically examining whether the company conducted the mandatory ad hoc risk assessment before deploying Grok's functionalities on the X service in the EU, a requirement under Article 34(1) of the DSA that applies when a new feature is likely to have a critical impact on the platform's risk profile[3][4]. The investigation will also assess compliance with broader DSA obligations on the diligent assessment and mitigation of systemic risks, including those related to the dissemination of illegal content, negative effects concerning gender-based violence, and serious negative consequences for users' physical and mental well-being[3]. Regulators argue that the risks identified in preliminary analyses appear to have materialized, exposing EU citizens to serious harm through the AI-generated imagery scandal[3][4]. The focus on an AI tool's content-generation capabilities is a novel step in DSA enforcement, signaling the Commission's intent to apply the regulation rigorously to the rapidly evolving generative AI landscape.
The new proceedings are compounded by a preexisting formal investigation into the company, which the Commission simultaneously announced it would broaden. The original December probe focused on X's compliance with its obligations on content moderation, advertising transparency, and researchers' access to data; it had already produced a substantial fine against the platform over shortcomings in advertising transparency and user verification. That earlier action now extends to X's recommender systems[1][5]. Specifically, the new element will examine the platform's alleged failure to properly assess and mitigate all systemic risks associated with its recommender systems, including the impact of a recently announced shift to a Grok-based recommender system[3]. The expanded scope reflects the Commission's view that the AI chatbot and the platform's content-promotion mechanisms are inextricably linked and must both comply with the DSA's risk management provisions; the investigation covers suspected infringements of Articles 34(1) and (2) and 42(2)[3]. The sequence of DSA investigations and enforcement actions against X establishes a clear pattern of regulatory scrutiny, signaling that European authorities are willing to use the full weight of the new law against designated very large online platforms.
This investigation serves as a critical stress test of the DSA's capacity to regulate the safe and ethical deployment of generative AI. For X, the financial and operational stakes of an adverse finding are severe: non-compliance with the DSA can result in fines of up to six percent of the platform's worldwide annual turnover[6][4]. Beyond financial penalties, the Commission retains the power to impose interim measures or even seek a ban on certain functionalities within the EU, although proportionality considerations make the latter a measure of last resort[7][6]. In a pre-emptive step earlier in the month, the Commission ordered X to retain all internal documents and data relating to Grok until the end of 2026, preserving the evidence needed for the ongoing regulatory review[8]. The platform has responded to the international outrage by saying it has restricted Grok's image-editing functions and introduced technological safeguards to block the generation of illegal content[9][1]. Regulators, however, have signaled that they will carefully assess these announced changes to ensure they effectively protect citizens across all 27 member states[2].
The implications of the Grok probe for the wider AI industry are profound. The action sets a crucial precedent: companies cannot roll out significant new AI functionalities without first demonstrating a diligent, documented assessment and mitigation of systemic risks, particularly those related to the generation of harmful and illegal content. Regulators now expect, with the formal backing of an in-depth investigation, that AI developers and platform owners manage content and operational risks proactively, before their products reach users. The scrutiny of Grok, which some describe as offering a "spicy mode" that enables the creation of inappropriate images, underscores the high-stakes responsibility platforms bear when integrating powerful new generative tools[1]. As Europe solidifies its position as a global leader in technology regulation, with the DSA now active and the AI Act on the horizon, the outcome of the investigation into X and Grok will send an unequivocal message to all technology firms about the mandatory guardrails for innovation within the European single market[10][4]. The case now moves to an in-depth investigation whose results will shape AI deployment on major digital platforms for years to come.
