NO FAKES Act Ignites Fiery Debate Over AI Deepfakes and Free Speech
The bipartisan NO FAKES Act targets unauthorized AI replicas of a person's voice and likeness, pitting the protection of personal identity against free expression and online innovation.
June 25, 2025

A legislative proposal aimed at curbing the misuse of artificial intelligence to create unauthorized digital replicas of individuals has set off a contentious dispute, pitting the need to protect personal likeness against the principles of free expression and innovation on the internet. The Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, a bipartisan bill, seeks to establish federal protection over an individual's voice and visual likeness.[1][2] Proponents, including many artists and actors, argue it is a necessary safeguard in an era of rampant deepfakes, while digital rights advocates warn its broad language could lead to widespread censorship and stifle technological advancement.[3][4] The core of the controversy lies in how the bill frames this new protection: not as a privacy right, but as a new form of intellectual property.[5]
The NO FAKES Act would grant every individual the exclusive right to authorize the use of their voice or likeness in a "digital replica," defined as a newly created, computer-generated, highly realistic electronic representation.[2][6] The bill would make individuals and companies liable for creating or distributing unauthorized replicas, and would extend that liability to online platforms that host such content with knowledge of its unauthorized nature.[7] The legislation is intended to create a unified federal standard, replacing the current patchwork of state-level right of publicity laws, which have proven ineffective against the borderless nature of online content.[8][9] Supporters, including major entertainment industry unions and large technology companies such as YouTube, point to high-profile examples of misuse, such as an AI-generated song mimicking the voices of Drake and The Weeknd and a fake Tom Hanks promoting a dental plan, as clear evidence that such protections are urgently needed.[7][4] They argue the bill provides a clear path for victims to have unauthorized deepfakes removed through a notice-and-takedown system, similar to the process for copyright infringement under the Digital Millennium Copyright Act (DMCA).[4][6]
However, critics, led by organizations like the Electronic Frontier Foundation (EFF), contend that the bill's approach is fundamentally flawed and dangerous.[5] By establishing a new intellectual property right, they argue, the act creates a system ripe for abuse that prioritizes monetization over protection.[5][10] This framework could incentivize a market for licensing the likenesses of deceased celebrities and lead to a surge in litigation.[5] A primary concern is that the bill's language is overly broad, potentially chilling a wide range of First Amendment-protected speech, including parody, satire, criticism, and commentary.[1][11] While the bill includes exemptions for such uses, critics question how these will be applied in practice and who will bear the burden of proof, fearing that the threat of costly legal battles will lead to self-censorship.[1][12]
The implications for the broader internet ecosystem are significant. Digital rights advocates warn that the mandated takedown system could compel online platforms to err on the side of removal, leading to the over-censorship of legitimate content to avoid liability.[1][3] The revised version of the bill intensifies these concerns, requiring online services to use digital fingerprinting technologies to block re-uploads of flagged content and even targeting the software tools that can be used to create replicas.[8][3][10] This could stifle innovation in AI and give rights-holders a de facto veto over new technologies.[3] Furthermore, provisions that allow for the easy unmasking of anonymous users through subpoenas could have a chilling effect on free speech, and the high costs of compliance could entrench the market power of large tech companies at the expense of startups.[5][10] Critics argue that existing laws, such as the recently passed TAKE IT DOWN Act which targets non-consensual intimate imagery, already address the most harmful forms of deepfakes, making the sweeping scope of the NO FAKES Act unnecessary and disproportionate.[5]
In conclusion, the NO FAKES Act represents a critical juncture in the regulation of artificial intelligence and online content. It attempts to provide a much-needed remedy for individuals whose likenesses are stolen and misused, offering a streamlined process for redress.[9][4] Yet, in doing so, it raises profound questions about the future of online expression, platform liability, and technological innovation. The debate centers on a fundamental disagreement: whether the harms of deepfakes are best addressed by creating a new, licensable property right over one's image, or through a more narrowly tailored approach focused on privacy and preventing specific, malicious uses.[5][11] As lawmakers consider the bill, they face the complex challenge of balancing the legitimate rights of individuals to control their digital selves with the foundational principles of a free and open internet, where creativity, commentary, and parody can flourish.[8] The outcome will have lasting consequences for artists, technology developers, online platforms, and every user of the digital world.