AI Generates Core Idea for Groundbreaking Theoretical Physics Paper
AI steps into conceptual scientific discovery, proposing the core idea of a physics paper while proving to be a brilliant but unreliable partner.
December 5, 2025

A new era in scientific inquiry may have dawned with the publication of a theoretical physics paper whose central idea was conceived not by a human mind, but by an artificial intelligence. Physicist Steve Hsu of Michigan State University has published a peer-reviewed article in the journal Physics Letters B, built around a core insight generated by OpenAI's latest large language model, GPT-5.[1][2][3][4] This event marks a significant milestone in the integration of AI into the creative heart of scientific research, moving beyond data analysis to the generation of novel theoretical concepts. While heralding the potential for AI to accelerate discovery, Hsu also offers a stark warning, describing the experience as a collaboration with a "brilliant but unreliable genius" whose profound insights are often interspersed with significant errors.[2]
The research paper, titled “Relativistic Covariance and Nonlinear Quantum Mechanics: Tomonaga-Schwinger Analysis,” delves into foundational questions about the nature of quantum mechanics.[1] Specifically, it investigates whether quantum evolution is perfectly linear, a question with deep implications for our understanding of reality, including the possibility of an Everettian multiverse.[1] The breakthrough moment came when Hsu prompted the AI to compare different approaches to nonlinearity in quantum mechanics. GPT-5 proposed using the Tomonaga-Schwinger formulation of quantum field theory to demonstrate why certain modifications to standard quantum mechanics would violate relativistic covariance, a fundamental principle of physics.[1] This specific, novel suggestion from the AI formed the central thesis of Hsu's subsequent paper, which derives new mathematical conditions that any such modification must satisfy to remain consistent with relativity.[1]
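As background for readers unfamiliar with the formalism, the Tomonaga-Schwinger approach recasts quantum evolution on arbitrary spacelike hypersurfaces rather than on equal-time slices, and its internal consistency is tied directly to relativistic causality. A standard textbook form of the evolution equation and its integrability condition is sketched below; this is general background in natural units, not a reproduction of the paper's derivation or of the new conditions Hsu derives.

```latex
% Tomonaga-Schwinger evolution (interaction picture, natural units):
% the state \Psi[\sigma] is attached to a spacelike hypersurface \sigma,
% and a local deformation of \sigma at the point x advances it by the
% interaction Hamiltonian density \mathcal{H}_I(x).
i\,\frac{\delta \Psi[\sigma]}{\delta \sigma(x)} \;=\; \mathcal{H}_I(x)\,\Psi[\sigma]

% Integrability -- the result must not depend on the order in which
% spacelike-separated pieces of \sigma are advanced -- requires
% microcausality of the Hamiltonian density:
\bigl[\,\mathcal{H}_I(x),\ \mathcal{H}_I(x')\,\bigr] \;=\; 0
\qquad \text{for spacelike-separated } x,\, x'.
```

Because the formalism builds relativistic covariance into the evolution law itself, it is a natural arena for asking whether a nonlinear modification of quantum mechanics can satisfy the analogous consistency conditions, which is the question the paper takes up.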
The development signifies a potential shift in the role of AI in science, from a tool for computation and data processing to a genuine partner in conceptualization.[5][1] While AI has previously been instrumental in fields like protein folding and analyzing massive datasets, its capacity to generate a core theoretical insight in fundamental physics represents a new frontier.[1] This advancement is part of a broader trend where advanced AI models are increasingly seen as collaborators that can shorten research workflows and expand the scope of exploration for experts.[6][5][7] OpenAI's own reports on GPT-5 highlight its ability to assist in complex tasks like generating mathematical proofs, conducting powerful conceptual literature searches across disciplines, and proposing novel hypotheses for validation.[6][5][7] These systems are not autonomous researchers, but in the hands of experts, they can significantly accelerate the path to discovery by synthesizing knowledge in novel ways and identifying connections that might have been missed.[6]
However, collaboration with artificial intelligence is fraught with challenges, a reality Hsu has been keen to emphasize. He characterizes the interaction with large language models as working with an entity capable of deep insights but also prone to simple calculation errors and "incorrect conceptual leaps that are superficially plausible."[1] The subtler conceptual errors pose the greater risk, as they can lead even expert researchers down fruitless paths, wasting considerable time and effort.[1][2] To mitigate this unreliability, Hsu developed a systematic "Generator-Verifier" protocol.[1][2] In this structured process, one AI model instance generates an idea or a step in a proof, and a separate, independent instance is tasked with verifying it, a procedure designed to cut the rate of hallucination and error well below that of single-pass generation.[1] For his research, Hsu drew on several top-tier models, including GPT-5, Gemini 2.5-Pro, and Qwen-Max, while underscoring that expert human oversight remains an indispensable final safety net.[2]
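Hsu's protocol is described only at a high level; the sketch below shows one minimal way such a generator-verifier loop might be organized. The query_model() helper, the model names, the prompts, and the PASS/FAIL verdict format are illustrative assumptions, not details taken from Hsu's actual workflow.

```python
# Minimal sketch of a generator-verifier loop (illustrative, not Hsu's code).
# query_model() is a hypothetical stand-in for whatever API call reaches the
# named model; model names, prompts, and the PASS/FAIL format are assumptions.

def query_model(model: str, prompt: str) -> str:
    """Placeholder: send `prompt` to `model` via its provider's SDK and return the reply."""
    raise NotImplementedError("wire this up to a model provider")


def generate_and_verify(task: str,
                        generator: str = "gpt-5",
                        verifier: str = "gemini-2.5-pro",
                        max_rounds: int = 3) -> str | None:
    """Have one model propose a result and an independent model check it,
    feeding any objections back to the generator for another attempt."""
    critique = ""
    for _ in range(max_rounds):
        prompt = task if not critique else (
            f"{task}\n\nA previous attempt was rejected for these reasons:\n{critique}"
        )
        candidate = query_model(generator, prompt)

        verdict = query_model(
            verifier,
            "Independently check the following derivation step for calculational "
            "and conceptual errors. Reply 'PASS' or 'FAIL: <reasons>'.\n\n" + candidate,
        )
        if verdict.strip().upper().startswith("PASS"):
            return candidate   # accepted by the verifier; still needs expert human review
        critique = verdict     # hand the objections back to the generator and retry
    return None                # nothing survived verification within the round limit
```

The essential point, as Hsu describes it, is that generation and verification are performed by separate, independent model instances, with the human expert remaining the final check on anything that passes.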
The implications of this AI-generated insight extend far beyond the realm of theoretical physics, posing fundamental questions for the entire scientific community and the rapidly evolving AI industry.[8][9] As AI's role expands from assistant to ideator, it will necessitate a re-evaluation of research methodologies, academic credit, and the very nature of scientific discovery.[9][10] The event demonstrates a powerful new application for large language models, suggesting a future where they could help tackle long-standing scientific challenges that have been constrained by human cognitive limits.[5][3] Yet, it also brings the limitations and dangers of these systems into sharp focus.[6][11] OpenAI has itself acknowledged that GPT-5 can hallucinate plausible-looking citations and proofs and can follow unproductive lines of reasoning if not expertly guided.[6][11] Hsu's experience suggests that harnessing the creative potential of AI will require not just powerful models, but also rigorous new protocols and a deep-seated skepticism, ensuring that human expertise remains the ultimate arbiter of scientific truth. The collaboration between physicist and AI may have yielded a significant result, but it also serves as a crucial case study in the opportunities and perils of navigating science in the age of artificial intelligence.