AI Impersonates US Secretary of State, Targets Global Leaders

AI-powered imposters infiltrate top-level diplomacy, exposing a critical new threat to national security and digital trust.

July 9, 2025

An audacious attack using artificial intelligence has targeted high-ranking international and domestic officials, with an unknown actor impersonating U.S. Secretary of State Marco Rubio.[1][2][3] The incident, which came to light through a U.S. State Department cable, involved the use of AI to generate voice messages and mimic the Secretary's writing style in an attempt to contact at least five prominent individuals, including foreign ministers and U.S. officials.[4][5][6] This marks a significant escalation in the malicious use of AI, demonstrating the technology's potential to threaten national security and disrupt diplomatic channels. The State Department has confirmed it is investigating the matter and has issued a warning to all its diplomats, underscoring the gravity of the situation.[7][1][2]
The impersonator employed a multi-pronged approach, using the encrypted messaging app Signal to send fraudulent communications.[4][8] According to a State Department cable dated July 3, the attacker created a Signal account in mid-June with a deceptive display name intended to look official.[4][9] Using this account, the individual contacted at least three foreign ministers, a U.S. governor, and a member of the U.S. Congress.[4][6][7] The actor used AI-powered software not only to clone Secretary Rubio's voice in voicemails but also to replicate his writing style in text messages, aiming to manipulate the recipients into divulging sensitive information or granting access to accounts.[10][4][9] While one official described the hoaxes as "not very sophisticated" and ultimately unsuccessful, the very attempt highlights a concerning new frontier in cyber threats.[7][1][2]
This event is not an isolated case but part of a broader, troubling pattern of malicious actors using AI to impersonate senior U.S. government officials.[11][12][13] The FBI issued a warning in the spring about an ongoing campaign involving "vishing" (voice phishing) and "smishing" (SMS phishing) to target officials and their contacts.[10][11][14] In May, a similar incident targeted President Donald Trump's chief of staff, Susie Wiles, where an impersonator contacted senators, governors, and business executives.[4][1][15] These campaigns often aim to establish rapport with targets before sending malicious links or requesting sensitive data.[11][4][12] The increasing availability and sophistication of AI voice-cloning and text-generation tools have made it significantly easier for criminals and state-sponsored actors to create convincing deepfakes with minimal resources, posing a substantial threat.[10][13]
The implications of this AI-driven impersonation campaign extend far beyond the immediate targets, sending ripples through the AI industry and the national security apparatus. The incident serves as a stark warning about the dual-use nature of generative AI: while these tools offer immense benefits, they can also be weaponized to spread disinformation, commit fraud, and, as demonstrated here, potentially interfere with diplomatic relations.[9][16] This "dangerous escalation" raises urgent questions about the need for robust AI governance and the development of reliable deepfake detection technologies.[9] The event underscores the critical need for heightened cybersecurity awareness and verification protocols among government officials and the public alike, as distinguishing authentic from AI-generated communication becomes increasingly difficult.[10][6] The U.S. government now faces the pressing challenge of adapting its security posture to counter these evolving threats and prevent future incidents that could have serious diplomatic and security ramifications.[9][7]
