Judge Rejects Free Speech Claim in Character AI Case


A landmark lawsuit against Character AI and Google is moving forward, marking a pivotal moment in how U.S. courts may treat liability when AI chatbots are implicated in tragic outcomes. On Wednesday, Florida Judge Anne Conway rejected an early motion to dismiss the case on First Amendment grounds, clearing the way for claims that the platform may have contributed to the death of 14-year-old Sewell Setzer III.

Setzer’s family alleges that their son became obsessed with a chatbot on Character AI, which mirrored and encouraged his suicidal thoughts. The platform, often promoted as a virtual companion, is now at the center of a growing legal and political debate over whether AI-generated content should be considered protected speech, and whether chatbots simulating human interaction can be held responsible for emotional and psychological harm.

Why the Court Rejected the First Amendment Defense

Character AI and Google argued that chatbot responses were akin to dialogue from video game characters or posts on social networks, protected under the First Amendment. But Judge Conway found that these comparisons didn’t hold up.

She wrote that the core question isn’t whether chatbots resemble other media, but whether the output itself qualifies as protected speech. Because AI chatbots like Character AI generate text based on user input, rather than authoring specific content like scripted dialogue, the judge wasn’t ready to equate them with expressive works such as movies or games. That distinction could be crucial as the case progresses.

The court’s ruling is one of the earliest signs of how the U.S. legal system may evaluate responsibility and speech rights around AI-generated interactions, especially when real-world harm is involved.

Broader Legal and Ethical Questions Around AI Chatbots

Character AI, a startup founded by former Google employees Noam Shazeer and Daniel De Freitas, is already under scrutiny. Though Google does not own Character AI outright, its close ties to the platform and its founders have kept the tech giant named as a co-defendant. Meanwhile, Character AI is facing another lawsuit tied to youth mental health and growing political pressure to regulate AI “companion bots,” including California’s proposed LEAD Act, which would ban them for minors.

Judge Conway’s decision highlights that this lawsuit will likely hinge on whether Character AI is considered a “product” that was defectively designed. Courts usually don’t treat words, images, or software output as products—but the interactive, anthropomorphic nature of chatbots like those on Character AI complicates that standard.

The judge noted that the platform’s failure to verify user age, its allowance of sexually explicit content, and its misleading design choices—like having bots claim to be licensed therapists—could support claims of deceptive practices and negligent design.

A Growing Case for Accountability in AI Development

The complaint also details disturbing conversations between Setzer and certain Character AI bots, including sexually suggestive interactions and reinforcement of suicidal ideation. The court allowed the family’s claims under statutes designed to prevent adult-minor online exploitation to move forward.

Judge Conway also allowed the lawsuit to proceed on the basis that Character AI misled users into believing its chatbots were real people, some even posing as mental health professionals. Despite user interface warnings, the platform’s design decisions may have created confusion—particularly among vulnerable users like teenagers.

While Character AI has reportedly added more safeguards since Setzer’s death, legal experts say the platform’s reactive design and lack of oversight may still pose serious risks.

The ruling doesn’t determine the final outcome, but it keeps the case alive and could set legal precedents on how AI chatbot companies are held accountable. Courts will now dig into whether the platform’s outputs and design features were unreasonably dangerous, and whether users were misled in ways that caused real-world harm.

Becca Branum of the Center for Democracy and Technology called the court’s First Amendment treatment “thin,” but acknowledged the difficulty of the issue. “These are genuinely tough and new legal questions,” she said. “The courts are being asked to weigh free expression against algorithmic influence in ways we haven’t seen before.”

As lawsuits like this advance, the future of AI safety, chatbot design, and even free speech law may depend on what judges—and eventually juries—decide.
