NO FAKES Act: A Shield Against Deepfakes or a Threat to Internet Freedom?

What started as a narrowly focused bill to stop unauthorized AI-generated deepfakes is now raising alarms among digital rights advocates, developers, and online platforms. The NO FAKES Act — short for Nurture Originals, Foster Art, and Keep Entertainment Safe — has evolved into a sweeping proposal that some fear could fundamentally reshape how the internet works.

Introduced to protect individuals, especially public figures, from AI-generated videos and images created without their consent, the revised legislation now includes broad powers that critics say risk overreach, stifling innovation and free expression in the process.

From Deepfake Protections to Digital Censorship?

The original intent behind the bill was clear: to address the rise of deepfakes that convincingly mimic real people — often with disturbing consequences. But as the bill progressed, its scope expanded. According to the Electronic Frontier Foundation (EFF), the NO FAKES Act has morphed into what they describe as a “federalized image-licensing system” complete with a new censorship infrastructure.

Key concerns center on provisions that would require platforms to actively prevent the re-uploading of content flagged as violating the law — a mandate that would effectively force platforms to deploy automated upload filters. Systems of this kind, such as YouTube's Content ID, are notoriously error-prone and routinely flag legitimate, fair-use content.
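The over-blocking dynamic critics point to can be sketched in a few lines of Python. This is a deliberately crude illustration, not Content ID's actual algorithm: exact hashing is trivially evaded by re-encoding a file, so a filter must match approximately, and approximate matching inevitably sweeps in transformative reuse along with the re-uploads it is meant to catch.

```python
import hashlib

# Toy sketch of a re-upload filter. Assumption: byte-level similarity
# here stands in for a real perceptual fingerprint; actual systems are
# far more sophisticated, but face the same threshold trade-off.

def exact_hash(data: bytes) -> str:
    # Cryptographic hash: changing a single byte changes the digest
    # entirely, so a trivially re-encoded copy evades an exact blocklist.
    return hashlib.sha256(data).hexdigest()

def byte_similarity(a: bytes, b: bytes) -> float:
    # Crude stand-in for a perceptual fingerprint: the fraction of
    # byte positions on which the two inputs agree.
    longest = max(len(a), len(b))
    return sum(x == y for x, y in zip(a, b)) / longest if longest else 1.0

def flagged_exact(upload: bytes, blocked: list[bytes]) -> bool:
    # Only blocks byte-identical copies.
    return exact_hash(upload) in {exact_hash(b) for b in blocked}

def flagged_fuzzy(upload: bytes, blocked: list[bytes],
                  threshold: float = 0.5) -> bool:
    # Blocks anything "close enough" to a blocklisted item.
    return any(byte_similarity(upload, b) >= threshold for b in blocked)

original = b"deepfake clip ................................"
tweaked  = original[:-1] + b"!"  # one byte altered on re-upload
critique = original + b" [commentary and criticism appended]"

blocklist = [original]
print(flagged_exact(tweaked, blocklist))   # False: exact match evaded
print(flagged_fuzzy(tweaked, blocklist))   # True: fuzzy match catches it
print(flagged_fuzzy(critique, blocklist))  # True: critique swept up too
```

Tightening the threshold to spare the commentary clip would let the tweaked re-upload through; loosening it to catch the re-upload blocks the commentary. That trade-off, not any particular implementation flaw, is why mandated filters tend to err against lawful speech.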

A Chilling Effect on AI Development

More troubling for the tech sector is that the NO FAKES Act doesn’t just target content — it targets the tools used to create that content. Software platforms and AI development tools could face legal action simply for being capable of generating synthetic media, even if they serve many legitimate functions.

While the bill claims to only apply to tools “primarily designed” for unauthorized replication, that definition leaves plenty of room for interpretation. The EFF and other critics argue this could scare off smaller startups, especially those working on generative AI or creative platforms, before they even get a chance to grow.

The result? A landscape where only well-funded tech giants can afford to navigate legal compliance, while innovation from smaller players is stifled.

The Threat to Anonymous Speech

Buried in the bill is another provision that critics say poses a serious risk to free speech: it would allow private parties to unmask anonymous internet users by obtaining a subpoena — without judicial oversight or hard evidence. This mechanism could be weaponized to expose critics, whistleblowers, or activists based on little more than an allegation.

Digital rights groups warn this could have a chilling effect on online expression, particularly for users in sensitive roles or under threat of retaliation.

Regulation or Overreach?

The irony, critics note, is that Congress already passed the Take It Down Act, which targets non-consensual intimate imagery — one of the most pressing concerns related to deepfakes. Yet instead of assessing how that law performs, legislators are rushing toward more expansive, and potentially more invasive, regulation.

Meanwhile, some major tech companies have remained notably quiet on the NO FAKES Act. Observers suspect that’s no accident. Big Tech often benefits when regulation raises the compliance bar high enough to squeeze out smaller competitors — a pattern seen repeatedly in internet policy.

A Crucial Moment for Digital Rights

The NO FAKES Act raises important questions about balancing the fight against AI misuse with the need to preserve internet freedom. As it moves through Congress, the coming weeks will be pivotal. Whether this legislation protects users or undermines the very openness that made the internet a force for innovation and expression remains to be seen.