UN Warns: AI Empowers Terrorists with Autonomous Cars, Killer Drones

The UN warns AI's "dark side" could empower terrorists to weaponize self-driving cars and unleash deadly drones.

June 23, 2025

A recent United Nations report has cast a stark light on what one official calls "the dark side of AI," warning that terrorists could weaponize emerging technologies like self-driving cars and autonomous drones to carry out deadly attacks.[1][2] The report, titled "Algorithms and Terrorism: The Malicious Use of Artificial Intelligence for Terrorist Purposes" and jointly published by the United Nations Interregional Crime and Justice Research Institute (UNICRI) and the United Nations Office of Counter-Terrorism (UNOCT), serves as an "early warning" to the global community about the potential for extremist groups to exploit artificial intelligence.[1][3] While acknowledging the immense benefits of AI in fields like medicine and transportation, the document meticulously outlines how these same advancements could be repurposed for nefarious ends, lowering the technical barriers to staging sophisticated and lethal attacks.[1][4]
The report highlights the acute threat posed by the hijacking of autonomous vehicles, a technology rapidly moving from concept to reality.[3][4] Terrorist groups have a long history of using vehicles in attacks, and the advent of self-driving cars presents a chilling new frontier for such tactics.[3][4] The UN warns that increased vehicle autonomy would be welcomed by terrorist organizations, enabling them to execute attacks remotely without needing a suicide operative or risking the capture of a follower.[3] This capability fundamentally alters the risk calculus for terrorist groups, potentially making vehicle-ramming attacks more frequent and harder to prevent. The FBI has previously identified autonomous vehicles as "game-changing," noting that they could free a vehicle's occupants to carry out other tasks during an attack, such as firing weapons.[5] The concern is not merely theoretical: security firms have reported evidence of groups like ISIS exploring self-driving cars as replacements for suicide bombers.[5]
Beyond the immediate threat of weaponized cars, the UN report delves into the terrifying potential of lethal autonomous weapon systems (LAWS), colloquially known as "slaughterbots."[3][6] These are weapons, including advanced drones, that can independently search for, identify, and eliminate human targets without direct human control.[7][8] The technology, once the realm of science fiction, is now a tangible reality: a 2021 UN report documented the use of an autonomous weapon system in Libya.[8] The proliferation of such weapons is a major concern, as they require no expensive or rare materials and are therefore relatively cheap to mass-produce.[8] Experts fear that once major military powers begin manufacturing these systems, they will inevitably appear on the black market and fall into the hands of terrorists, dictators, and warlords.[8] AI also removes the need for terrorists to possess detailed operational materials such as maps or schematics; they could simply instruct an AI system to source that information online, making it harder for law enforcement to intervene on the basis of incriminating data found in a suspect's possession.[6]
The implications of the UN's warnings for the artificial intelligence industry are profound, placing a heavy burden of responsibility on developers and regulators. The report underscores that terrorists have consistently been early adopters of new and often under-regulated technologies, and AI is no exception.[2] This reality necessitates a proactive and international approach to governance to prevent malicious actors from exploiting regulatory gaps.[2] The UN Secretary-General has repeatedly called for a global ban on lethal autonomous weapons, deeming them "politically unacceptable and morally repugnant."[7][9] The international community is being urged to establish a legally binding framework by 2026 to prohibit LAWS that operate without meaningful human control.[7][9] The challenge lies in creating resilient governance structures that can adapt to the rapid pace of AI development and effectively mitigate the risks of its malicious use.[4] This includes addressing the "black box" problem of some AI systems, whose decision-making processes even their creators cannot fully explain, creating a significant accountability gap.[10]
In conclusion, the United Nations' report serves as a critical and urgent call to action. The dual-use nature of artificial intelligence, offering immense societal benefits while simultaneously presenting grave security threats, demands a concerted global response.[11] The prospect of terrorists remotely turning a self-driving car into a weapon or deploying swarms of autonomous drones for targeted killings is no longer a distant hypothetical.[3][8] As AI becomes more widespread and accessible, the question is shifting from "if" to "when" it will become a standard tool in the terrorist arsenal.[1][4] The international community, from governments and law enforcement to the private-sector companies developing this technology, now faces the formidable challenge of ensuring that the "dark side of AI" does not overshadow its promise for a better future.[2][12] The path forward requires not only robust regulation and international treaties but also a fundamental commitment to embedding safety, ethics, and human control into the very fabric of artificial intelligence development.

Research Queries Used
UN report terrorist threat self-driving cars
UNICRI report artificial intelligence terrorism
the dark side of AI UN report
slaughterbots terrorism
autonomous weapons systems UN warning