OpenAI chief Sam Altman warns AI threatens jobs, security; urges human control.
Sam Altman outlines AI's transformative power, from job upheaval to security risks, stressing humanity's choice to shape its future.
July 23, 2025

Sam Altman, the chief executive of OpenAI, has emerged as a central figure in the global conversation about artificial intelligence, issuing stark warnings about its potential to cause widespread job losses and create unprecedented national security threats. In various forums, from congressional hearings to industry conferences, Altman has painted a picture of a future radically reshaped by AI, where the very nature of work and the foundations of global security are fundamentally altered.[1][2] He positions his company not merely as a creator of this powerful technology, but as a crucial partner in navigating its profound societal and geopolitical implications.[3][2] His pronouncements have spurred a mixture of alarm and debate, forcing policymakers, business leaders, and the public to confront the double-edged nature of a technology poised to redefine human existence.
A primary focus of Altman’s cautionary statements has been the immense impact AI is expected to have on the global workforce. He has been candid in his assessment that some job categories, particularly those involving repetitive, computer-based tasks, will be entirely automated.[4][2] At a Federal Reserve conference, he specifically pointed to customer support roles as being on the verge of disappearing, noting that AI can now handle such tasks flawlessly and efficiently.[2] While he acknowledges that "whole classes of work will disappear," Altman also expresses a degree of optimism, suggesting that this disruption will pave the way for new, currently unimagined jobs.[5][6] He aligns with the view that AI will augment human capabilities, allowing people to achieve more and focus on more creative and strategic endeavors.[5][7] This transition, however, will not be seamless. Altman has stressed the importance of being upfront with the public about these impending changes and has called for a societal debate on how to manage the economic shifts, including a re-evaluation of the social contract between capital and labor.[4] The rapid pace of this transformation necessitates a proactive approach from both industry and government to help the workforce adapt.[4]
Beyond the economic upheaval, Altman has voiced significant concerns about the national security risks posed by increasingly powerful AI systems. He has warned that the technology is rapidly developing, and its potential for misuse by hostile actors is a serious threat.[3][8] During a virtual conversation at Vanderbilt University, he stated that AI will "transform every part of society," with national security being one of the most affected areas.[3] His fears include the possibility of an adversarial nation using AI to launch sophisticated attacks on critical infrastructure, such as power grids or financial systems, or even to develop bioweapons.[8] The proliferation of AI tools that can create convincing deepfakes and voice clones presents another immediate danger, threatening to fuel a "major fraud crisis" and undermine trust in digital communications.[6][8] Altman has highlighted that many current authentication methods, like voiceprints, are already being compromised by AI.[6] He believes U.S. leadership in AI development is critical to ensuring the technology is aligned with democratic values and to staying ahead in the global race, particularly against China.[9][10][3]
In the face of these significant risks, Altman and OpenAI advocate for a collaborative and iterative approach to safety and regulation. While he has testified before the U.S. Congress and called for government oversight, his position has evolved.[11][12] More recently, he has cautioned against heavy-handed or patchwork regulations that could stifle innovation and hinder the United States' ability to compete globally.[9][10] He favors a "light touch" federal framework that allows for speed and flexibility while still establishing important guardrails.[10] OpenAI's own safety strategy involves rigorous testing of its models before release, including months of internal safety work and engagement with external experts.[13][12] The company employs techniques like reinforcement learning from human feedback to align its systems with human values and has stated a commitment to protecting children and preventing the generation of harmful content.[13][12] This approach of "iterative deployment" is central to OpenAI's philosophy, allowing society time to understand and adapt to the technology while enabling developers to identify and mitigate risks in real-world scenarios.[4][3][12]
The gravity of Altman’s warnings, coupled with the explosive growth of AI, has solidified his role as a pivotal, if sometimes controversial, architect of our technological future. His frankness about the potential downsides of AI, from job displacement to existential threats, has been a key factor in galvanizing a global conversation about how to manage this powerful new tool.[11][14] While expressing optimism about AI's potential to solve some of humanity's biggest challenges, like climate change and disease, he consistently underscores that these benefits can only be realized if the risks are proactively managed.[11][14] He emphasizes that the development of AI is inevitable and that society faces a fundamental choice: "Do we shape it or let it shape us?"[3] Through his public engagement and OpenAI's strategic initiatives, Altman is making a determined effort to ensure that humanity actively steers the course of this technological revolution, aiming to secure a future where AI's immense power serves the common good.