AI Boom Outpaces Governance, Risks Undermining Trust and Value
AI's rapid surge is outpacing governance, creating a dangerous disconnect between executive confidence and consumer trust.
June 4, 2025

Artificial intelligence adoption is surging across industries, with businesses aggressively investing in and deploying AI technologies to drive efficiency, innovation, and competitive advantage. However, this rapid integration is outpacing the development and implementation of robust AI governance frameworks and risk management practices, according to a recent EY Responsible AI Pulse survey. The findings reveal a concerning disconnect: while many C-suite leaders express strong confidence in their organizations' responsible AI efforts, this optimism often doesn't align with consumer sentiment or even the views of their own CEOs, and significant gaps persist in oversight, risk awareness, and preparedness for the complexities of advanced AI systems.[1] This chasm between rapid technological advancement and lagging governance carries substantial implications for the AI industry, potentially undermining trust, exposing organizations to new risks, and hindering the realization of AI's full, sustainable value.[1][2][3]
A key finding from the EY research is a significant misalignment between the confidence of C-suite executives (excluding CEOs) and the concerns of both consumers and CEOs regarding the ethical and societal impacts of AI.[1] Nearly two-thirds (63%) of these C-suite leaders believe their organizations are well aligned with consumer perceptions and use of AI.[1] However, when compared with data from the EY AI Sentiment Index study, which surveyed 15,060 consumers across 15 countries, this confidence appears misplaced.[1] Consumers consistently express greater concern than these executives across a range of responsible AI principles, including accuracy, privacy, explainability, and accountability, and are often twice as likely to worry that companies will fail to uphold them.[1] CEOs, by contrast, tend to show a greater appreciation for these concerns, aligning more closely with consumer sentiment.[1] This suggests that many business leaders may be overestimating the maturity of their responsible AI practices or underestimating the importance of consumer trust, which is paramount for widespread AI adoption, especially in sensitive sectors such as finance, health, and public services.[1] The EY AI Sentiment Index study further reveals significant gaps between consumers' openness to AI and their actual adoption of the technology, underscoring the critical role of trust and transparent risk management.[1]
The rapid pace of AI adoption, particularly of emerging technologies like agentic AI, where systems operate with increasing autonomy, is further exposing deficiencies in existing governance structures.[1][4] While most companies report having responsible AI principles in place, the translation of those principles into comprehensive governance frameworks is lagging.[1][5] The EY survey, which polled 975 C-suite leaders at organizations with over US$1 billion in annual revenue across major sectors and 21 countries, found that a significant share of companies using or planning to use advanced AI technologies within the next year lack even a moderate familiarity with the associated risks.[1] Another EY survey of US senior business leaders found that while the importance of ethical AI is acknowledged, only about a third of organizations are building AI governance frameworks fully and at scale (34%) or fully addressing bias in AI models (32%).[6][5] This lack of preparedness is a critical concern, as ungoverned AI deployment can lead to a host of problems, including regulatory backlash, data breaches, the perpetuation of bias, and adversarial AI attacks.[2] The phenomenon of "shadow AI," in which employees use unapproved AI tools without corporate oversight, is also creating significant security blind spots and compliance risks.[2][7] Organizations that fail to implement robust governance may find themselves "retrofitting compliance at enormous cost" as regulations evolve.[2]
Despite these challenges, there is growing recognition of the need for stronger AI governance and responsible AI practices.[8][9] The EY US research indicates that 61% of senior business leaders whose organizations are investing in AI reported growing interest in responsible AI practices over the past year, up from 53% six months prior.[8] Furthermore, 59% plan to increase the time their organizations spend training employees on the responsible use of AI, up from 49%.[8][10] However, this heightened interest has yet to translate into comprehensive action across the board.[6][5] The implications of this governance gap are far-reaching. Beyond the immediate risks of security breaches and regulatory non-compliance, failure to build trust through responsible AI can hinder consumer adoption and ultimately limit the technology's transformative potential.[1][2] There is also evidence of "AI fatigue" setting in: around half of business leaders report declining company-wide enthusiasm for AI integration, and a similar proportion feel they are failing amid AI's rapid growth.[8][10][9] This underscores the need for clear communication, transparency, and education to foster a more engaged and confident workforce.[10] The financial sector, for example, is seeing rapid AI adoption for operational efficiency and regulatory compliance, but authorities are urging that information gaps be addressed and current policy frameworks assessed, given potential systemic risks tied to third-party dependencies, market correlations, cyber risks, and model integrity.[11]
In conclusion, the accelerated adoption of artificial intelligence presents immense opportunities, but the current lag in robust governance frameworks poses significant risks that could impede long-term success and societal trust. The EY Responsible AI Pulse survey underscores a critical need for organizations to move beyond mere acknowledgement of AI ethics to the concrete implementation of comprehensive governance structures, risk management protocols, and transparent communication with stakeholders.[1] Business leaders, particularly those in the C-suite, must ground their confidence in a deeper understanding of consumer concerns and the evolving risk landscape of advanced AI.[1] Investing in responsible AI practices, including thorough risk assessments, bias mitigation, and continuous employee training, is not merely a compliance exercise but a strategic imperative for building trust, ensuring security, and unlocking AI's sustainable value.[3][5][12] As AI continues its rapid evolution, a proactive and adaptive approach to governance will be paramount in navigating the complexities and ensuring that this transformative technology is developed and deployed in a manner that is both innovative and accountable.[2][13]