EU's Anti-Disinformation AI Chatbot Fails, Spreading Outdated, Incorrect News

A highly anticipated EU-funded AI chatbot, designed to combat disinformation, is ironically delivering outdated and incorrect information.

July 1, 2025

A highly anticipated, EU-funded artificial intelligence chatbot designed to deliver reliable news on European affairs is instead providing outdated and incorrect information, undermining its stated mission to combat disinformation. The platform, named ChatEurope, was launched by a consortium of fifteen European media organizations with the promise of offering citizens verified information "without the influence of disinformation and fake news."[1] However, independent testing has revealed significant flaws in the chatbot's knowledge base and accuracy, raising serious questions about the oversight of publicly funded AI projects and the readiness of such technology for public-facing news applications.
The project, led by Agence France-Presse (AFP), brings together prominent media outlets including Deutsche Welle, France Médias Monde, and El País.[2] Co-funded by the European Commission, ChatEurope was presented as a technologically advanced solution to a critical problem: the spread of false narratives across the continent.[1] At the heart of the platform is a conversational agent developed by the Romanian company DRUID AI, which uses a large language model from the French firm Mistral.[3] By design, the chatbot formulates its answers solely from thousands of articles supplied by the consortium members, a constraint intended to guarantee reliability and give users sourced, verified information.[4] The platform launched in seven languages, and the chatbot itself can answer queries in all official EU languages, with the aim of helping citizens better understand how European decisions affect their lives.[1]
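ChatEurope's exact pipeline has not been published, but the design described above, a language model constrained to answer only from a curated article corpus, follows the familiar retrieval-grounded pattern. The sketch below is purely illustrative: the toy corpus, the keyword-overlap retriever, and every function name are assumptions for the sake of the example, not details of the DRUID AI system. It also shows why the approach only works if the corpus is kept current: a corpus that stops in 2019 can only ground answers in 2019 facts.

```python
# Illustrative sketch of retrieval-grounded answering (not ChatEurope's code):
# the model is only shown passages drawn from a trusted article corpus and is
# instructed to answer from those passages alone.

from dataclasses import dataclass


@dataclass
class Article:
    title: str
    published: str  # ISO date, e.g. "2019-05-27"
    body: str


def retrieve(question: str, corpus: list[Article], k: int = 3) -> list[Article]:
    """Rank articles by naive keyword overlap with the question."""
    q_terms = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda a: len(q_terms & set(a.body.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(question: str, sources: list[Article]) -> str:
    """Assemble a prompt that constrains the model to the retrieved sources."""
    context = "\n\n".join(f"[{a.published}] {a.title}\n{a.body}" for a in sources)
    return (
        "Answer using ONLY the articles below. If they do not contain the "
        "answer, say you do not know.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )


if __name__ == "__main__":
    # A corpus whose newest article is from 2019 can only yield 2019 answers,
    # no matter how good the underlying model is.
    corpus = [
        Article(
            "European Parliament election results",
            "2019-05-27",
            "Results of the 2019 European Parliament elections ...",
        ),
    ]
    question = "Who won the latest European Parliament election?"
    print(build_prompt(question, retrieve(question, corpus)))
```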
Despite these ambitions, the chatbot's performance has proven unreliable. In one notable example, when asked about the president of Germany, the chatbot incorrectly named Angela Merkel, the former chancellor, who left office in 2021; the actual president, Frank-Walter Steinmeier, has held the post since 2017. This error is particularly glaring given that one of the consortium partners is Germany's international broadcaster, Deutsche Welle. The chatbot also demonstrated a significant knowledge gap regarding current events in European politics. When queried about the latest European Parliament elections, it referred to the 2019 results and stated that the next election would be held in 2024, an event that had already passed at the time of the inquiry. Furthermore, the AI was unable to provide information about Ursula von der Leyen's bid for a second term as President of the European Commission, a major recent news story.
The consistent delivery of outdated information points to a fundamental issue with the data being used to train and operate the chatbot. While the platform is supposed to draw from the verified news content of its media partners, it appears there is a significant lag or failure in updating its knowledge base with current information.[3] This problem is not unique to ChatEurope; the AI industry as a whole grapples with the challenge of keeping large language models current and preventing them from "hallucinating" or fabricating information.[5][6] However, for a platform explicitly designed and funded to be a bastion against fake news, these failures are particularly damaging to its credibility. The promise of a personalized experience navigating reliable content is unfulfilled if the foundational information is flawed.[1]
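One way such staleness could be caught, in principle, is a freshness check on the sources behind each answer. The snippet below is a hypothetical illustration, not a feature of ChatEurope: it flags an answer whose newest supporting article is older than a chosen cutoff, so that time-sensitive questions are not silently answered from a years-old corpus.

```python
# Hypothetical freshness guard: warn when every source backing an answer is
# older than a cutoff. All names and thresholds are illustrative assumptions.

from datetime import date, timedelta


def newest_source_age(published_dates: list[str], today: date | None = None) -> timedelta:
    """Return the age of the most recent source, given ISO 'YYYY-MM-DD' dates."""
    today = today or date.today()
    newest = max(date.fromisoformat(d) for d in published_dates)
    return today - newest


def staleness_warning(published_dates: list[str], max_age_days: int = 90) -> str | None:
    """Produce a warning if the freshest supporting source exceeds the cutoff."""
    age = newest_source_age(published_dates)
    if age.days > max_age_days:
        return (
            f"Warning: the most recent source is {age.days} days old; "
            "this answer may not reflect current events."
        )
    return None


# Example: a knowledge base last refreshed years ago triggers the warning.
print(staleness_warning(["2019-05-27", "2021-12-08"]))
```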
The implications of ChatEurope's troubled launch extend beyond this single project. It serves as a cautionary tale for the burgeoning field of AI in journalism. While the potential for AI to help process vast amounts of information and engage audiences is clear, this case highlights the critical importance of rigorous, continuous fact-checking and data updating. For public institutions like the European Union that are investing in technological solutions to societal problems, it underscores the need for stringent quality control and accountability for the projects they fund.[7] The goal of using AI to build trust in media can only be achieved if the technology itself is trustworthy. As it stands, ChatEurope risks becoming an example of the very problem it was created to solve, delivering confident but incorrect statements that could mislead the citizens it aims to inform. The consortium of respected media outlets now faces the challenge of rectifying these fundamental errors to salvage the reputation of a project launched with the laudable goal of strengthening informed, democratic discourse across Europe.
