Scientists Bypass X, Show AI Algorithms Can Directly Fuel Political Hostility
A novel independent experiment on X demonstrates how AI-driven content ranking can fuel political hostility, with direct implications for democratic health.
November 28, 2025

A groundbreaking study has provided some of the most direct causal evidence to date that the way social media companies rank content in user feeds can significantly shape political hostility. In a novel experiment published in the journal *Science*, researchers altered the political attitudes of users on the platform X, formerly known as Twitter, by subtly reordering the posts in their feeds. The experiment's design is as notable as its findings: it employed a technical workaround that bypassed the need for cooperation from the social media company itself, opening a new avenue for independent auditing of algorithmic impacts on society. The findings suggest that even minor adjustments to how platforms display content can rapidly amplify or diminish animosity toward opposing political parties, raising critical questions for an AI industry increasingly responsible for curating the flow of information.
The core of the research hinged on an innovative method developed to circumvent the "black box" nature of proprietary social media algorithms. A team of scientists from institutions including Stanford University, the University of Washington, and Northeastern University created a custom, AI-powered browser extension.[1] This tool was voluntarily installed by 1,256 participants in a 10-day field experiment conducted during a contentious period of the 2024 U.S. presidential campaign.[2][3] The extension worked in real time, intercepting the data stream for each user's "For You" feed on X.[3] It then used a large language model to analyze and classify political posts, specifically identifying content expressing what the researchers termed "antidemocratic attitudes and partisan animosity" (AAPA).[2][4] This allowed the researchers to reorder the feed presented to each user without platform collaboration, either moving hostile content higher up or pushing it further down the timeline.[1][2] Importantly, the study obtained informed consent: participants were compensated and knew their feeds would be modified, though they did not know the specific nature of the alteration.[3] This ethical, independent approach contrasts with the long-standing difficulty outside researchers have faced, since rigorous testing of algorithmic effects has largely been the exclusive domain of the platforms themselves.[2]
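Neither the extension's source code nor X's internal feed schema is described in detail in the coverage, so the sketch below is only a minimal illustration of the intercept-classify-rerank loop under stated assumptions: `Post`, `classifyAAPA`, and `rerankFeed` are hypothetical names, and the keyword check merely stands in for the study's LLM classifier.

```typescript
// A minimal sketch of the reranking step, not the researchers' actual code.
interface Post {
  id: string;
  text: string;
}

type Condition = "increase" | "decrease" | "control";

// Stand-in for the study's LLM classifier, which labeled posts expressing
// antidemocratic attitudes and partisan animosity (AAPA). A real version
// would send post.text to a language model; this keyword check is only a
// placeholder so the sketch runs end to end.
function classifyAAPA(post: Post): boolean {
  const hostileMarkers = ["traitor", "enemy of the people", "threat to democracy"];
  const text = post.text.toLowerCase();
  return hostileMarkers.some((marker) => text.includes(marker));
}

// Reorder an intercepted feed according to the participant's assigned arm:
// flagged posts move toward the top ("increase"), toward the bottom
// ("decrease"), or stay where the platform ranked them ("control").
function rerankFeed(posts: Post[], condition: Condition): Post[] {
  if (condition === "control") return posts;

  const flagged = posts.filter(classifyAAPA);
  const rest = posts.filter((p) => !classifyAAPA(p));

  // Relative order within each group is preserved, so the platform's own
  // ranking still decides ties.
  return condition === "increase" ? [...flagged, ...rest] : [...rest, ...flagged];
}

// Example: under the "decrease" arm, the flagged post sinks to the bottom.
const demo: Post[] = [
  { id: "a", text: "The other party is a threat to democracy." },
  { id: "b", text: "Great turnout at the county fair today." },
];
console.log(rerankFeed(demo, "decrease").map((p) => p.id)); // ["b", "a"]
```

Keeping the platform's relative ordering within each group means the manipulation changes only the placement of flagged content, which mirrors the study's goal of isolating exposure to AAPA posts rather than redesigning the whole feed.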
The results of the experiment were striking and statistically significant. Participants were randomly assigned to one of three groups: one in which exposure to hostile political content was increased, one in which it was reduced, and a control group whose feed was left unaltered.[2][5] The researchers found that reducing a user's exposure to AAPA content made them feel warmer and more favorable toward the opposing political party; increasing it led to colder, more negative feelings.[1][2] The magnitude of the shift was substantial: on the 100-point "feeling thermometer" scale used to gauge attitudes, the intervention moved feelings by more than two points.[2][6] For context, the study's authors note the effect is comparable to the average change in affective polarization observed in the American public over a three-year period.[1][3] That such a shift emerged within the brief 10-day experiment highlights the powerful and immediate influence of content ranking.[6] The effect held across party lines, shifting the attitudes of Democrats and Republicans alike.[1] Beyond political attitudes, the study also measured emotional responses, finding that participants with reduced exposure to hostile content reported feeling less anger and sadness while using the platform.[1][2]
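As a back-of-envelope check on that comparison (illustrative numbers only; the article reports "more than two points," so using exactly 2.0 is an assumption):

```typescript
// Rough arithmetic behind the effect-size comparison above; the exact
// figures are assumptions based on the rounded values reported.
const shiftPoints = 2.0;   // thermometer shift attributed to the intervention
const equivalentYears = 3; // years of typical affective-polarization drift

// The intervention compressed roughly three years' worth of attitude drift
// (~0.67 thermometer points per year) into a 10-day feed manipulation.
const annualDriftPoints = shiftPoints / equivalentYears;
console.log(`≈ ${annualDriftPoints.toFixed(2)} points/year of typical drift`);
```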
These findings enter a complex and at times contradictory scientific discourse on the role of social media in political polarization. For years, the notion that algorithms create "filter bubbles" and exacerbate division has been a widespread concern, but rigorous evidence has been elusive.[3][5][7] In fact, some previous large-scale research efforts yielded different conclusions. A major set of studies published in 2023, which investigated the effects of Facebook and Instagram's algorithms during the 2020 U.S. election, found that switching users from an algorithmic feed to a simple reverse-chronological one did not significantly alter levels of polarization or key political attitudes over a three-month period.[8][9][5] The new study on X suggests that the *type* of content being algorithmically ranked is a more critical factor than simply the presence of an algorithm itself. Rather than just changing the ordering principle (algorithmic vs. chronological), the new research used advanced AI to selectively target and rerank a specific kind of harmful content, demonstrating a more direct mechanism through which hostility is fostered.[10]
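To make that methodological contrast concrete, here is a toy sketch (hypothetical names, not drawn from either study's code) of the two manipulations: swapping to reverse-chronological changes only the ordering rule, while targeted reranking acts on the flagged content itself.

```typescript
// Toy contrast between the two experimental manipulations; names and the
// isHostile predicate are illustrative, not from either study's code.
interface TimedPost {
  id: string;
  text: string;
  createdAt: number; // Unix epoch milliseconds
}

// The 2023 Facebook/Instagram design, roughly: replace engagement-based
// ranking with newest-first. Hostile posts keep whatever slots their
// timestamps earn them.
function chronologicalFeed(posts: TimedPost[]): TimedPost[] {
  return [...posts].sort((a, b) => b.createdAt - a.createdAt);
}

// The new X study's design, roughly: keep the platform's ordering but demote
// every post a classifier flags, directly reducing exposure to that content.
function demoteFlagged(
  posts: TimedPost[],
  isHostile: (p: TimedPost) => boolean
): TimedPost[] {
  return [...posts.filter((p) => !isHostile(p)), ...posts.filter(isHostile)];
}
```

Under the first function, a hostile post published a minute ago still lands at the top of the feed; under the second, it is demoted regardless of recency or engagement.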
The implications of this research for the artificial intelligence industry and platform governance are profound. The study provides strong causal evidence that the design choices embedded in feed-ranking AI are not neutral; they are powerful tools that can actively influence social cohesion and democratic health.[2] The findings suggest that social media companies already have the technical ability to mitigate political animosity by down-ranking divisive and antidemocratic content, shifting the debate from whether it is possible to whether they have the will to do so.[1][6] Perhaps most importantly, the browser extension creates a viable new model for independent, external oversight of platforms whose operations have long been opaque.[1] By enabling researchers to conduct real-world experiments without needing permission, the method paves the way for greater accountability and a deeper public understanding of how AI is shaping our social and political reality. It signals a potential shift in the power dynamic between tech giants and the researchers seeking to understand their societal impact.