AI Bots Infiltrate Web: OpenAI Chief Validates "Dead Internet" Theory
Altman's warning: Sophisticated AI bots are bringing the "dead internet theory" to life, threatening online truth and human connection.
September 8, 2025

A casual observation from one of the tech world's most influential figures has reignited a long-simmering debate about the authenticity of our online world. OpenAI CEO Sam Altman recently noted on the social media platform X, formerly Twitter, that "it seems like there are really a lot of LLM-run twitter accounts now," a statement that gave mainstream credence to the once-fringe "dead internet theory." This theory posits that much of the web is no longer populated by genuine human interaction but is instead a landscape dominated by bots and algorithmically generated content.[1][2][3] Altman's comment, coming from the head of the company behind the widely used ChatGPT, was met with a mix of agreement and irony, as many pointed out that the very technology his firm develops is a primary driver of this new reality.[4][5][3] The proliferation of accounts powered by large language models (LLMs) represents a significant evolution from the bots of the past, creating complex challenges and opportunities that are reshaping social media and blurring the lines of digital discourse.
The new wave of AI-powered accounts differs fundamentally from their predecessors. Older bots were typically rule-based, operating on predefined scripts and logic trees, which made their behavior repetitive and relatively easy to spot.[6][7][8] They excelled at simple, automated tasks but lacked the flexibility to engage in nuanced conversation. In contrast, LLM-powered bots leverage deep learning and massive datasets to understand context, mimic human-like expression, and generate novel content on the fly.[6] This sophistication allows them to handle unpredictable questions, adapt their communication style, and participate in complex discussions, making them far more difficult to distinguish from human users.[1][9] Research has shown that these advanced bots can replicate the linguistic patterns and even the political leanings of real users, enabling them to create content that is not just coherent but also persuasive.[10] This leap in capability means that a single operator or organization can now deploy armies of digital personas that appear authentic, each capable of pushing a specific narrative, product, or ideology at a scale previously unimaginable.[11][12]
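To make that contrast concrete, here is a minimal sketch, not drawn from any actual bot operation, of the two approaches: a rule-based responder that matches keywords against a fixed script, and an LLM-backed responder that generates a bespoke reply. The LLM half assumes the OpenAI Python SDK and an illustrative persona prompt; real deployments vary widely.

```python
# Sketch contrasting an old-style scripted bot with an LLM-backed one.
# The LLM half assumes the OpenAI Python SDK and an API key in OPENAI_API_KEY;
# the persona prompt and model name are illustrative, not from any real operation.
from openai import OpenAI

RULES = {  # classic rule-based bot: fixed triggers, fixed replies
    "refund": "Please DM us your order number and we'll look into it.",
    "hours": "We're open 9am-5pm, Monday through Friday.",
}

def scripted_reply(post: str) -> str:
    """Keyword lookup: repetitive, predictable, relatively easy to fingerprint."""
    for trigger, canned in RULES.items():
        if trigger in post.lower():
            return canned
    return "Thanks for reaching out!"  # same fallback every time

def llm_reply(post: str, client: OpenAI) -> str:
    """Generates a novel, context-aware reply in a chosen persona."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model
        messages=[
            {"role": "system", "content": "Reply casually, like a regular user."},
            {"role": "user", "content": post},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    post = "Anyone else think the new update is terrible?"
    print(scripted_reply(post))          # always the same canned fallback
    # print(llm_reply(post, OpenAI()))   # different, human-sounding text each run
```

The point of the sketch is the asymmetry: the scripted bot can only repeat what its operator wrote in advance, while the LLM-backed bot produces fresh, context-sensitive text on every call, which is what makes it so much harder to distinguish from a human account.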
The applications of these LLM-run accounts are decidedly dual-use, spanning a spectrum from helpful automation to malicious manipulation. On the beneficial end, AI bots can provide valuable services such as real-time customer support, content aggregation, and personalized marketing.[13][6][14] Some accounts automatically compile long threads into readable articles, offer reminders for specific tweets, or help users find quoted replies, enhancing the user experience.[13] Businesses can leverage this technology for automated, 24/7 customer engagement, content creation, and data analysis to better understand market trends.[15][16][17] However, the same technology that powers these helpful tools is also being weaponized. Malicious actors use LLM-run accounts to spread disinformation, amplify political propaganda, and execute sophisticated scams.[3][18][11] These bots can be deployed in coordinated campaigns to create the illusion of grassroots support or opposition on divisive subjects, thereby manipulating public discourse and potentially influencing everything from stock prices to elections.[19][12] Furthermore, specialized malicious LLMs, such as WormGPT and FraudGPT, are being developed and sold on the dark web, specifically designed for activities like crafting convincing phishing emails and creating malware.[20]
This explosion of automated accounts is fueled by a burgeoning, often illicit, economy centered around "bot farms." These operations, which can involve thousands of devices controlled by a single computer, exist to generate fake engagement for profit.[21][22][23] For pennies per action, clients can purchase likes, follows, comments, and shares to artificially boost the popularity of an account or a specific message, deceiving both algorithms and human users.[21][19] The accessibility of powerful LLMs has supercharged this industry, making it cheaper and easier than ever to create and manage vast networks of seemingly authentic fake accounts.[8] This reality presents a significant challenge for social media platforms like X, which has struggled to contain the problem. The company has updated its policies to prohibit the use of its data for training third-party AI models while reserving the right to use public posts to train its own systems.[24][25][26][27] Efforts to combat the bot onslaught have included experimenting with charging new users a small fee to post, a move the platform's owner, Elon Musk, has described as the "only way" to curb the issue, given that modern AI can easily bypass traditional bot-detection measures like CAPTCHAs.[28]
The rise of convincing LLM-run accounts has triggered a technological arms race between content generation and detection. Distinguishing AI-generated text from human writing has become critically difficult.[1][29] While detectors look for subtle clues like unnatural uniformity, a lack of "burstiness" (variation in sentence length), and low "perplexity" (predictable text), the most advanced LLMs are increasingly adept at mimicking human nuance.[1] This creates a scenario where detection tools struggle to keep up, sometimes flagging human-written content as AI-generated and vice versa, eroding trust in the detection process itself.[30][31] The broader implications for society are profound, touching upon the very nature of truth and trust in the digital age.[32][33] Experts warn that the flood of synthetic content risks devaluing genuine human interaction, exacerbating the spread of misinformation, and further polarizing public discourse.[34][35][36][37] As it becomes harder to verify the authenticity of online information and interactions, the foundation of a shared reality, essential for functional democratic societies, is threatened.[32][38] Sam Altman's observation, therefore, serves as more than just a comment on a social media trend; it is a stark acknowledgment from the heart of the AI industry that the digital world is undergoing a fundamental and potentially unsettling transformation, pushing the "dead internet theory" from conspiratorial whisper to observable reality.
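As a rough illustration of the "burstiness" signal described above, the following sketch (a toy heuristic, not any detector's actual implementation) scores a text by the variation in its sentence lengths; unusually uniform sentences are one weak hint of machine-generated prose, while perplexity would additionally require scoring each token under a language model.

```python
# Toy illustration of the "burstiness" heuristic some AI-text detectors use:
# human writing tends to mix short and long sentences, while LLM output is
# often more uniform. Not any detector's real code.
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths in words; higher = more 'bursty'."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

varied = "No way. I read the whole thread twice and still can't believe it shipped on time."
uniform = "The product was released on schedule. The team worked hard on it. The users are happy with it."

print(f"varied text:  {burstiness(varied):.2f}")   # mixes a 2-word and a longer sentence
print(f"uniform text: {burstiness(uniform):.2f}")  # three near-identical sentence lengths
# Perplexity, the other clue mentioned above, requires a language model to score
# each token, and modern LLMs can be prompted or tuned to defeat both signals.
```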