X employs AI to draft Community Notes in a bid to boost fact-checking speed.
X embarks on a pivotal AI experiment to accelerate fact-checking, weighing the promise of speed against the need for human judgment and the risk of bias.
July 2, 2025

In a move that could reshape how misinformation is handled on social media, the platform X is turning to artificial intelligence to help write its crowd-sourced fact-checking labels, known as Community Notes. The pilot program will allow AI bots, including the company's own Grok chatbot, to draft contextual notes on posts flagged as potentially misleading.[1][2] The initiative aims to increase the speed and scale of the platform's fact-checking, a development that carries both promise and considerable risk for the technology industry and the broader information ecosystem.[3][4] The core idea is to accelerate a process long criticized as too slow, so that false narratives can be addressed before they achieve widespread viral reach.[1][5]
The mechanics of this new system represent a notable shift from a purely human-driven process to a human-AI collaboration.[1] Initially launched as Birdwatch in 2021 and expanded under its new ownership, Community Notes has operated on the principle of crowd-sourcing, allowing approved contributors to write and rate notes that add context to posts.[5][6] Under the pilot program, developers can build and submit their own "AI Note Writers" for review.[3][4] If deemed helpful in practice runs, these bots can then be deployed to automatically draft notes when users request context on a specific post.[7][4] However, X has emphasized that humans will remain central to the process.[8][9] AI-generated notes will not be published automatically; they must first be rated as "helpful" by a diverse group of human contributors, the same standard applied to notes written by people.[2][10] The company states this creates a "powerful feedback loop," where community ratings help train the AI to become more accurate and less biased over time.[3][9] The ultimate decision on whether a note is displayed will still rest with human judgment.[4][10]
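To make that workflow concrete, here is a minimal Python sketch of the lifecycle described above: an AI Note Writer drafts a note when context is requested, and publication is gated on helpful ratings from human contributors. Everything in it is illustrative rather than X's actual API; the names Note, draft_ai_note, and should_publish are hypothetical, and the simple majority vote stands in for X's real scoring, which requires agreement among raters with diverse viewpoints.

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    """A proposed Community Note awaiting human ratings (hypothetical model)."""
    post_id: str
    text: str
    author: str                # a human contributor or an "AI Note Writer"
    ratings: list = field(default_factory=list)  # (rater_id, found_helpful) pairs

def draft_ai_note(post_id: str, flagged_text: str) -> Note:
    """Stand-in for an approved AI Note Writer drafting context on request.
    In the real pilot, a model such as Grok would generate this text."""
    context = f"Context for post {post_id} ('{flagged_text[:40]}...'): ..."
    return Note(post_id=post_id, text=context, author="ai_note_writer")

def should_publish(note: Note, min_ratings: int = 5) -> bool:
    """Author-agnostic gate: a note is shown only after enough human raters
    mark it helpful. A bare majority is a placeholder for X's bridging-style
    algorithm; the point is that no note publishes without human review."""
    if len(note.ratings) < min_ratings:
        return False
    helpful = sum(1 for _, found_helpful in note.ratings if found_helpful)
    return helpful / len(note.ratings) > 0.5

note = draft_ai_note("12345", "Claim flagged as potentially misleading")
note.ratings = [("r1", True), ("r2", True), ("r3", False),
                ("r4", True), ("r5", True)]
print(should_publish(note))  # True: 4 of 5 human raters found it helpful
```

The design point the sketch preserves is that the gate is author-agnostic: an AI-drafted note passes through exactly the same human rating step as a human-written one, and those ratings are what the company says will feed back into training the bots.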
The potential benefits of integrating AI into this process center on efficiency and scale.[11] Proponents, including Keith Coleman, the X product executive overseeing the program, argue that AI can "deliver a lot more notes faster with less work," potentially leading to a "significant" increase in the number of notes published.[4][9] Speed matters: research has shown that the sooner a contextual note is attached to misleading content, the more effective it is at curbing its spread.[1] One study found that Community Notes could reduce reposts of false information by nearly 46%.[1] By automating the initial drafting, a major bottleneck in the process, X hopes to provide context on a far larger volume of posts and keep pace with the flow of misinformation on the platform.[5][12] The move is also part of a broader trend, with other major platforms such as Meta, TikTok, and YouTube developing similar community-based fact-checking systems inspired by X's model.[1][10][13]
Despite the potential efficiency gains, introducing AI into fact-checking raises significant concerns. A primary worry is the limited ability of current AI models to grasp nuance, context, and sarcasm, all fundamental to human communication.[14][15] Models trained on vast datasets can perpetuate or even amplify biases present in that data, potentially leading to unfair or discriminatory moderation decisions.[16][17] The environment at X, which some analyses suggest has seen a rise in bot activity and hate speech since its acquisition, is a challenging backdrop for deploying AI fact-checkers.[5] Critics argue that while AI can process facts, it lacks the human judgment needed for genuine contextual understanding, a crucial element of effective fact-checking.[5][18] There are also fears that sophisticated actors could game the system, or that the AI could be steered, deliberately or not, toward the perspectives of the platform's ownership; past instances of the owner publicly criticizing his own AI's outputs heighten that concern.[3][2] Fundamentally, the change shifts Community Notes from a process valued for its deliberate, human-centric collaboration to one that prioritizes automated speed, potentially eroding the user trust it was built on.[5]
In conclusion, X's decision to employ AI in drafting Community Notes marks a pivotal experiment in the ongoing battle against online misinformation. Its success hinges on whether the platform can balance the promise of AI-driven speed and scale against the indispensable need for human nuance, oversight, and unbiased judgment.[1][4][9] Retaining human reviewers in the final approval step is a critical safeguard, but the system's susceptibility to algorithmic bias, contextual misinterpretation, and manipulation remains a significant concern for the AI industry and users alike.[19][20][16] The outcome of the pilot will be closely watched: it could set a new precedent for content moderation, determining whether AI becomes a powerful tool for enhancing truth and transparency on social platforms or an instrument that introduces new, more complex challenges to the information landscape.[5][14]
Sources
[2]
[4]
[5]
[7]
[8]
[11]
[12]
[13]
[14]
[15]
[16]
[17]
[18]
[19]
[20]