
A federal judge in Sacramento struck down a California law restricting AI-generated political deepfakes, ruling that it conflicts with Section 230 of the Communications Decency Act and signaling broader legal vulnerability for similar state speech regulations.
At a Glance
- A federal judge in Sacramento invalidated California’s deepfake-election law (AB 2655 of 2024).
- Ruling centered on conflict with the Communications Decency Act’s Section 230.
- The judge declined to reach the First Amendment claims, resting his decision on federal preemption grounds.
- He signaled he would also strike down a companion law requiring labels on AI-generated campaign materials (AB 2839).
- The case was brought by video creator Christopher Kohls and later joined by X (formerly Twitter), the Babylon Bee, and Rumble.
Legal Collision: State vs. Federal Authority
In response to a manipulated video showing then–Vice President Kamala Harris describing herself as the “ultimate diversity hire,” California enacted legislation in 2024 to combat deceptive AI-generated political content. One law—AB 2655—barred platforms from hosting such deepfakes during election periods, while another—AB 2839—mandated labels on digitally altered campaign materials. A lawsuit by the video’s creator, Christopher Kohls, later joined by X, the Babylon Bee, and Rumble, challenged both measures. On August 5, 2025, Judge John Mendez of the U.S. District Court for the Eastern District of California struck down AB 2655, ruling that it conflicted with Section 230, which shields platforms from liability for third-party content. He did not reach the First Amendment arguments, deeming them unnecessary to decide the case.
Broader Implications: Labeling Law in the Crosshairs
Judge Mendez also signaled strong skepticism toward AB 2839, which requires platforms to label deepfakes and allows individuals to sue over deceptive ads. Calling it “a censorship law” and “overly broad,” Mendez indicated he would strike it down as well, emphasizing constitutional concerns over free expression. His reasoning suggested that forcing platforms to police political speech at scale risks conflicting with long-established federal protections. If AB 2839 falls, California’s ability to regulate synthetic media in elections could be effectively nullified.
Political Flashpoint: Enforcement and Free Speech
Governor Gavin Newsom signed the laws in reaction to Elon Musk’s posting of the manipulated Harris video. The state framed the measures as essential to safeguarding election integrity, particularly amid widespread concern over misinformation campaigns following the 2020 election.
Critics countered that the bills were drawn too broadly, sweeping satire, parody, and even legitimate political critique into the same enforcement net. The plaintiffs argued that compelled moderation and labeling would chill lawful expression, echoing a line of cases in which federal courts have limited states’ attempts to regulate online speech. The court’s decision underscores the enduring dominance of Section 230, enacted in 1996 to shield internet platforms from liability over user content.
California’s defeat may embolden challenges in other jurisdictions. States such as Texas and Minnesota have experimented with similar election-related restrictions on AI-generated materials. Several of these laws are already facing litigation, suggesting that a broader wave of judicial review may soon define the limits of state action in this area.
National and Global Dimensions
Beyond California, Congress has debated bipartisan proposals that would set federal standards for AI transparency in political ads. These bills include disclosure requirements for campaigns using synthetic images or voices, though none have advanced to a full floor vote. The Federal Election Commission has also opened a public inquiry into how deepfakes could influence the 2026 midterm cycle, underscoring national concern.
Internationally, the European Union has taken a different approach, incorporating deepfake disclosure mandates into its sweeping Digital Services Act. While U.S. courts continue to stress constitutional limits on compelled speech, European regulators have leaned heavily toward consumer protection and mandatory transparency. This divergence may create compliance headaches for global platforms that operate across both legal systems.
What Comes Next?
Although the ruling provides short-term relief for platforms like X and for satirical creators, it sets up a lasting tension between state-level efforts to regulate AI-driven misinformation and federal protections for platforms and speech. As Judge Mendez continues to weigh AB 2839, California may face further legal defeats in its bid to regulate the deepfake landscape. The broader legal community is watching closely: this case could shape future legislation in multiple states and set precedent for digital speech governance.
Experts note that the rapid growth of generative AI tools makes the stakes unusually high. If state laws continue to be struck down, only a federal standard—or a direct congressional amendment to Section 230—may provide lasting clarity. For now, the decision marks a major victory for Elon Musk and his allies, while leaving unresolved the challenge of balancing free speech with election integrity in the age of synthetic media.