Ex-OpenAI Policy Head Builds Independent Watchdog to End AI Self-Regulation

Former OpenAI insider establishes AVERI to force rigorous, third-party safety audits on the most powerful AI systems.

January 19, 2026

Miles Brundage, a former policy chief at OpenAI, has cemented his move from industry insider to external watchdog with the official launch of the AI Verification and Evaluation Research Institute, or AVERI. After seven years heading policy research and AGI readiness at one of the world's foremost artificial intelligence laboratories, Brundage has pivoted to advocating for a rigorous, independent layer of scrutiny over the most powerful AI models, arguing that the industry cannot be trusted to grade its own homework. The new non-profit organization, which launched with significant initial funding, aims to close a critical governance gap: the absence of impartial, external safety and security audits for frontier AI systems.[1][2]
AVERI’s core mission is to make third-party auditing of frontier AI systems effective and universal, a position born of Brundage’s experience inside the AI development ecosystem.[3][4] He contends that the current state of safety assurance is insufficient, often amounting to little more than "last-minute, black-box product testing," which lacks the depth, independence, and rigor demanded of technologies with global societal impact.[5] True auditing, as AVERI envisions it, requires a well-resourced, rigorous assessment of AI systems and their developers against established safety and security standards, grounded, critically, in deep and secure access to non-public information.[3][5][6] Without such independent verification, the institute argues, the rapid development and deployment of increasingly powerful models poses a fundamental threat to public safety, international stability, and the sustainable commercial adoption of the technology.[5]
The stakes are highest for what AVERI terms "frontier AI": the largest and most capable models currently being developed by industry giants. The self-regulatory model prevalent today allows each company to set its own standards, run its own tests, and disclose only the results it chooses, a practice that stands in stark contrast to established safety regimes in other critical sectors such as finance, aerospace, and pharmaceuticals.[2] Brundage's view is that consensus on the need for auditing should be achievable even among those who disagree about the more philosophical aspects of AI risk, because the core principle, that companies should not be the sole arbiters of their products' safety, is already widely accepted in mature industries.[3] The former OpenAI insider, who previously served as the company's Senior Advisor for AGI Readiness, is leveraging his practical knowledge of how the industry operates to build an external oversight mechanism that can actually function.[4][7]
AVERI is structured as a US-based 501(c)(3) non-profit think tank and launched with $7.5 million in initial funding to execute its ambitious mandate.[2][3] Crucially, the institute does not intend to perform audits itself; its role is to "envision, enable, and incentivize" the practice.[3] That work runs along two parallel tracks: research and market-building. On the research front, AVERI has already co-authored a major paper outlining a detailed framework, including a proposal for "AI Assurance Levels," which range from basic third-party testing up to "treaty grade" assurance, the level that would be necessary for international agreements governing the most consequential AI systems.[2] The technical work, described as "Audit research and engineering," focuses on developing the tools and methodologies needed to make rigorous auditing technically and economically feasible and scalable.[3]
Beyond technical feasibility, a significant part of AVERI’s strategy is cultivating the ecosystem and the external pressure needed to make auditing a standard industry practice. The institute seeks to incentivize auditing by mobilizing key market actors whose interests align with risk reduction. Brundage has identified three powerful pressure points: insurance companies, which would require an independent audit before underwriting a policy; investors, who would demand third-party verification before committing billion-dollar checks; and major customers, who could refuse to buy unaudited AI systems.[2] This market-driven approach is intended to precede or supplement formal government regulation. The growing body of global regulation, such as the European Union’s AI Act, which points toward external conformity assessments for "high-risk" systems, is also seen as a strong potential accelerant for AVERI's mission.[2]
One of the most immediate practical challenges for establishing this new auditing layer is the critical shortage of qualified personnel. AI safety auditing requires a rare combination of technical expertise in frontier AI and deep knowledge of governance, security, and compliance. Individuals with this skillset are currently highly sought after and compensated by the very AI companies they would be expected to audit, creating a talent bottleneck. AVERI’s solution is to focus on building interdisciplinary "dream teams," drawing on expertise from traditional audit firms, cybersecurity specialists, AI safety nonprofits, and academia to forge a new professional class capable of carrying out this complex work.[2] The goal is not just to create standards, but to foster an entrepreneurial mindset that can establish an entirely new, highly specialized profession dedicated to verifying the claims and safety of increasingly advanced AI systems.[8]
The launch of AVERI signifies a turning point in the AI governance debate, moving the conversation from purely ethical principles and high-level policy discussions to the concrete, actionable mechanism of mandatory, independent technical scrutiny. By establishing the playbook and developing the foundational infrastructure, the institute aims to transform third-party assurance from a voluntary, ad-hoc exercise into a universal and necessary element of the AI lifecycle. This strategic shift, led by an individual with intimate knowledge of the industry's inner workings, underscores the growing consensus among experts that the path to safe and sustainable AI deployment must include external accountability. AVERI is effectively attempting to construct the missing "watchdog" infrastructure that has long been a foundational component of other sectors vital to modern society.[2][6]

Sources