France Raids X Offices, Escalating Probe Into Grok AI Crimes

French criminal probe targets X's Grok AI and algorithms for deepfakes, child abuse, and content crimes, summoning Musk.

February 3, 2026

The unexpected raid on X’s Paris offices by French prosecutors marks a significant escalation in the ongoing global tension between technology platforms and national regulatory bodies, transforming abstract legislative debates into concrete criminal investigations. The search, carried out by the Paris prosecutor’s cybercrime unit with support from the EU police agency Europol, is part of a preliminary probe that has widened considerably over the course of a year, now encompassing allegations that directly implicate the company’s core business model, specifically its data handling, content algorithms, and, critically, its artificial intelligence tools. The wide-ranging investigation includes suspected offenses such as complicity in the possession and distribution of child sexual abuse material, the circulation of sexually explicit deepfakes, denial of crimes against humanity, and the manipulation of automated data processing systems as part of an organized group.[1][2]
This concerted legal action targets the systemic failures of algorithmic content moderation, focusing specifically on the platform’s integration of its AI chatbot, Grok.[3][1] The initial probe, opened following a complaint from a French lawmaker, centered on allegations that X’s algorithms were being abused to facilitate fraudulent data extraction and distort content recommendation systems, affecting the diversity of voices and potentially serving as a vector for political interference.[4][1] The subsequent expansion of the case has explicitly linked Grok to the generation of non-consensual sexualized deepfakes, including images of children, and to the dissemination of Holocaust-denial content, a criminal offense under French law.[1][2] The fact that Grok, developed by Elon Musk’s xAI and deeply integrated into the X platform, has become a focus of the cybercrime unit underscores the new frontier of liability for generative AI models.[3][5] This places a direct legal responsibility on the platform not only to police user-uploaded content but also to audit and control the output of its own proprietary AI, challenging the traditional legal concept of a platform as a mere host of third-party content.[4][1]
The French move comes amid intensifying scrutiny from the European Union, which has already opened a formal investigation into X under the landmark Digital Services Act.[5] While the Paris criminal probe operates under specific French statutes, its scope—addressing risk management, illegal content dissemination, algorithm transparency, and data access—is highly aligned with the core tenets of the DSA.[4][6] The DSA mandates that Very Large Online Platforms, which includes X, must conduct rigorous risk assessments and implement robust mitigation measures against systemic risks stemming from their services, including those posed by algorithmic systems and AI.[4][6] The DSA investigation focuses on X’s alleged failure to adequately counter illegal content and information manipulation, the transparency of its advertising, and the deceptive design of features like the paid blue checkmarks.[7][6] The DSA’s capacity to levy fines up to six percent of a company’s global annual turnover means that the coordinated European response—local criminal probes supported by EU-level police (Europol) and DSA investigations by the European Commission—presents a unified and substantial regulatory threat to the platform's European operations.[3][7]
The gravity of the situation is further highlighted by the prosecutors’ decision to summon both Elon Musk and former CEO Linda Yaccarino for “voluntary interviews” in April.[3][1] The summonses specify that the executives are being questioned in their capacity as “de facto and de jure managers” of the platform at the time the alleged offenses occurred, signaling an intent by French authorities to pursue accountability up to the highest levels of corporate leadership.[1] X’s official response has been limited; the company’s lawyer declined to comment at this stage, though X has previously characterized the investigation as politically motivated.[3][8] However, the public actions taken by the company’s AI arm following the Grok deepfake controversy show a clear reaction to regulatory pressure. X announced it had implemented technological safeguards to prevent the AI from generating and editing sexualized images of real people in jurisdictions where such content is illegal, and restricted the image-editing feature to paying subscribers.[9][10] This defensive maneuvering, intended to demonstrate compliance, has not been enough to halt the formal investigations; the UK’s data and communications regulators have also opened formal probes into Grok’s data handling and its role in generating illegal content.[3][11]
The raid and the expansion of the criminal charges serve as a stark warning to the entire generative AI industry. The case solidifies a regulatory principle in Europe: platform and AI model operators face direct, high-stakes liability not only for the content *uploaded* by users, but also for the dangerous, illegal, or socially harmful content *generated* by their own technology.[5][12] The focus on “manipulation of an automated data processing system” and “complicity” in content crimes establishes a legal framework in which an AI’s design and a platform’s allocation of content moderation resources can be scrutinized as potential components of a crime.[1][2] For any large technology company developing and deploying AI in the European market, this signals that risk assessment and algorithmic safety are no longer merely policy matters but prerequisites for avoiding criminal prosecution and massive DSA penalties, effectively forcing the industry to prioritize legal compliance and user safety over unrestricted innovation in high-risk applications.[4][13] The decision by the Paris prosecutor’s office to publicly announce its exit from the X platform, moving to other social media for official updates, adds a symbolic act of distrust that underlines the profound breakdown in the relationship between a major European state and a global tech entity.[1][2]
