Public Distrust Becomes AI's Single Biggest Obstacle
A profound public trust deficit, fueled by deep-seated fears, threatens to derail AI's economic promise and global competitiveness.
September 22, 2025

A profound disconnect is emerging between the promise of artificial intelligence and its public perception, creating a significant hurdle to the technology's growth and adoption. While governments and technology leaders herald AI as a driver of economic prosperity and efficiency, a widespread public trust deficit threatens to derail these agendas. An analysis by the Tony Blair Institute for Global Change (TBI) and Ipsos finds that this skepticism is not a vague unease but a concrete barrier: a lack of trust is the single biggest obstacle preventing people from using generative AI.[1][2] This widening gap between the architects of AI and the public they aim to serve risks stalling progress and leaving the technology's potential to solve real-world problems unrealized.[3]
The depth of this skepticism is captured in stark numbers across multiple studies. A recent survey based on Edelman data indicates that trust in companies developing AI has fallen, with the drop particularly pronounced in the United States, where it plunged from 50% to just 35% in five years.[4][3] The TBI/Ipsos poll further quantifies the unease in the UK, where nearly twice as many people see AI as a risk to the economy (39%) as see it as an opportunity (20%).[1][5] The sentiment is not uniform, varying with demographics and familiarity with the technology: younger people and frequent AI users tend to be more optimistic.[2] Among individuals who have never used AI, 56% perceive it as a societal risk, a figure that drops to 26% among weekly users.[4][6] This points to a critical familiarity gap, in which direct experience with the technology appears to ease anxieties and build comfort.
The roots of this distrust are multifaceted and deeply embedded in concerns about AI's societal impact. Fears of widespread job displacement, the proliferation of misinformation and deepfakes, and the potential for AI to reinforce existing biases and discrimination rank high among the public's anxieties.[3][7] The "black box" nature of many AI systems, in which even their creators cannot fully explain the rationale behind a specific decision, fuels a sense of powerlessness and suspicion.[8] Concerns over data privacy and cybersecurity also loom large: a KPMG study found that 86% of respondents who expressed distrust cited cybersecurity fears as a top reason.[9] These apprehensions are compounded by a perceived lack of accountability and oversight, leaving many feeling that the technology is advancing far faster than the ethical frameworks and regulations needed to govern it. That sentiment is reflected in the majority of Americans who favor government regulation of AI and the more than half who believe tech companies are not adequately considering ethics in their pursuit of innovation.[10]
The consequences of this trust deficit extend far beyond public opinion, carrying significant economic and strategic implications for the AI industry and for national competitiveness. A skeptical public is less likely to adopt AI-driven products and services, which can slow market growth and reduce the return on massive investments in research and development.[11] The reluctance can become self-reinforcing: low adoption born of mistrust denies the public the positive firsthand experiences that could build that very trust.[6] More critically, widespread public opposition could translate into political pressure for overly restrictive regulations that stifle innovation, or into outright rejection that undermines government efforts to modernize public services and boost productivity.[12][13] The issue has also been framed as a matter of national security, with warnings that negative public sentiment could erode the financial and congressional support needed to stay competitive in the global AI race.[14]
In response to this growing crisis of confidence, both the AI industry and governments are pursuing strategies to build a more trustworthy ecosystem. Technology companies are increasingly focused on the principle of "explainable AI" (XAI), developing systems that can show the reasoning behind their conclusions rather than operating as opaque "black boxes."[8][14] Some firms are giving customers, particularly in sensitive government and national security sectors, more direct control over data inputs and model behavior.[14] On the governmental side, strategies include investing in public AI literacy and training programs, establishing clear ethical frameworks, and engaging citizens through public consultations and feedback mechanisms.[9][15] The aim is to create a sense of shared ownership and to show the public that AI is a tool that works for them, not something that happens to them.[13] That means shifting the focus from the technical intricacies of AI to its tangible, real-world benefits in areas the public values, such as healthcare.[4][3] Ultimately, bridging the trust gap will require a concerted, sustained effort to embed transparency, accountability, and public collaboration into the fabric of AI development and deployment.
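For readers curious what "explainable AI" can look like in practice, the short Python sketch below illustrates one generic approach: asking a trained model which input features most influenced its predictions. It relies on scikit-learn's permutation importance and a bundled example dataset purely as assumptions for illustration; it is a minimal sketch of the general technique, not the method of any company or product mentioned in this article.

# A minimal sketch of the "explainable AI" idea described above: instead of
# returning only a prediction, the system also reports which inputs drove it.
# Permutation importance is used here as one generic, model-agnostic technique.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Measure how much accuracy drops when each feature is shuffled: a rough,
# human-readable answer to "which inputs does this model actually rely on?"
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda pair: -pair[1])

for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")

The output is a ranked list of the features the model depends on most, which is the kind of plain-language accounting of a decision that the "clear reasoning" approaches described above aim to provide at much larger scale.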