AI Secretly Writes 1 in 10 News Articles, Eroding Public Trust

Your news, shaped by AI: a new study finds that roughly 9% of U.S. newspaper articles, especially in local outlets, contain undisclosed AI-generated content, jeopardizing public trust.

November 1, 2025

A significant and largely invisible shift is under way in American journalism: a study from the University of Maryland reveals that nearly one in ten newspaper articles is at least partially written by artificial intelligence, almost always without the reader's knowledge.[1] This revelation raises profound questions about journalistic transparency, the evolving role of technology in newsrooms, and the potential erosion of public trust. The study, which analyzed 186,000 articles from 1,500 U.S. newspapers, found that approximately 9% of this content was fully or partially generated by AI.[2] Because that content carries no disclosure, news consumers are increasingly reading articles shaped by algorithms without any awareness of the technology's involvement, a reality with significant implications for both the media and the burgeoning AI industry.
The research, conducted using the highly accurate Pangram AI detector, highlights a stark divide in AI adoption between national and local news outlets.[2][1] While only 1.7% of articles from large-circulation newspapers showed signs of AI involvement, the figure jumps to 9.3% for smaller, local papers.[2][1] This disparity suggests that resource-strapped local newsrooms may be turning to AI as a means of survival, automating routine tasks to compensate for shrinking staff and budgets.[2] The study also identified the content areas where AI is most prevalent: weather stories lead at 27.7%, followed by science and technology at 16.1% and health at 11.7%.[2] In contrast, more sensitive topics such as war and crime saw significantly lower rates of AI involvement.[2] This selective deployment suggests that news organizations are, for now, reserving human journalists for more nuanced and high-stakes reporting. Yet when the researchers manually examined 100 articles flagged as containing AI-generated text, only five carried any form of disclosure, underscoring the pervasive lack of transparency surrounding the practice.[2]
The ethical implications of this widespread but clandestine use of AI are substantial, striking at the core of journalistic principles. Transparency is a cornerstone of trust between a news organization and its audience.[3][4] When readers do not know that the news they are consuming was generated or assisted by a non-human entity, the omission creates a potential for deception and undermines the publication's credibility. Experts argue that, at a minimum, news outlets have an obligation to inform their audience about the role of AI in their content.[4] One survey found that 94% of respondents believe journalists should disclose their use of AI.[3] This sentiment is echoed by journalism ethics organizations now developing guidelines for the responsible use of AI, with disclosure as a central tenet.[5][6] Beyond transparency, there are concerns about the inherent limitations of current AI models, which are known to "hallucinate," or generate false information.[7][8] Publishing AI-generated content without rigorous human oversight could therefore spread misinformation, with potentially serious consequences.
For the AI industry, the surreptitious integration of its technology into the news ecosystem presents a complex mix of opportunities and risks. On one hand, the demand for AI tools in newsrooms validates the technology's utility and opens up a significant market. News content is also highly valuable for training large language models, fueling a growing market for data licensing deals between AI companies and publishers.[9][4] On the other hand, the lack of transparency and the potential for AI-generated inaccuracies in news articles raise serious questions about liability. If an AI model produces defamatory or false information that is then published, it is unclear where the legal responsibility lies: with the news organization, the user of the tool, or the AI developer.[10][11] This ambiguity could create significant legal and reputational challenges for AI companies. Furthermore, the erosion of public trust in news, partly driven by the undisclosed use of AI, could ultimately harm the AI industry by association, inviting greater public skepticism and calls for stricter regulation.
In conclusion, the University of Maryland's study serves as a critical wake-up call, illuminating a rapidly changing media landscape where the line between human and machine-generated content is increasingly blurred. The findings underscore an urgent need for a robust and open conversation about the role of artificial intelligence in journalism. For news organizations, this means developing and adhering to clear ethical guidelines that prioritize transparency and maintain the integrity of their reporting. For the AI industry, it is a moment to consider its responsibilities in a sector that is fundamental to a functioning democracy. Ultimately, failing to address the challenge of undisclosed AI in news reporting risks not only the credibility of journalism but also the public's trust in the very information they rely on to understand the world, a consequence that would be detrimental to all.[7]
