
Chatbot Platform Character AI Asserts First Amendment Protection in Motion to Dismiss

Character AI, a platform that lets users engage in roleplay with AI chatbots, has filed a motion to dismiss a lawsuit brought by the parent of a teenager who took his own life, allegedly after becoming overly attached to the company’s technology.

In October, Megan Garcia filed suit against Character AI in the U.S. District Court for the Middle District of Florida, Orlando Division, over the death of her son, Sewell Setzer III. Garcia alleges that her 14-year-old son developed an intense emotional attachment to a Character AI chatbot called “Dany,” texting it constantly, to the point that he began to withdraw from reality.

Following Setzer’s death, Character AI said it would roll out a number of new safety features intended to improve detection of, response to, and intervention in conversations that violate its terms of service. But Garcia is pushing for additional guardrails, including changes that might prevent chatbots on Character AI from telling stories and sharing personal anecdotes.

In the motion to dismiss, Character AI’s attorneys argue that the platform is protected against liability by the First Amendment, much as computer code is. The motion may not persuade a judge, and Character AI’s legal positions could shift as the case proceeds, but it likely hints at the early contours of the company’s defense strategy.

“The First Amendment prevents tort liability against media and tech firms for speech that is allegedly harmful, including speech purportedly leading to suicide,” the motion states. “The only distinction between this case and previous ones is that some of the speech here involves AI. Yet, the context of the expressive speech — be it a chat with an AI chatbot or interaction with a video game character — does not alter the First Amendment evaluation.”

Importantly, Character AI’s counsel is not asserting the company’s own First Amendment rights. Rather, the motion argues that it is the First Amendment rights of Character AI’s users that would be violated should the lawsuit succeed.

The motion does not consider whether Character AI could be protected under Section 230 of the Communications Decency Act, the federal law that shields social media and online platforms from liability for third-party content. The authors of the law have suggested that Section 230 may not apply to outputs from AI systems like Character AI’s chatbots, but this remains a contentious legal issue.

Character AI’s counsel also contends that Garcia’s true aim is to “shut down” Character AI and prompt legislation regulating such technologies. They assert that if the plaintiffs were successful, it would create a “chilling effect” not only on Character AI but also across the burgeoning generative AI sector.

“Beyond counsel’s stated intention to ‘shut down’ Character AI, [their complaint] seeks significant changes that would drastically restrict the type and amount of speech on the platform,” the filing asserts. “These alterations would severely limit Character AI’s millions of users in generating and engaging in conversations with characters.”

The lawsuit, which also names Character AI’s corporate benefactor Alphabet as a defendant, is one of several legal challenges the company faces relating to how minors interact with AI-generated content on its platform. Other suits allege that Character AI exposed a nine-year-old to “hypersexualized content” and promoted self-harm to a 17-year-old user.

In December, Texas Attorney General Ken Paxton announced an investigation into Character AI and 14 additional tech companies for possible violations of the state’s online privacy and safety regulations for children. “These investigations are crucial in ensuring that social media and AI firms adhere to laws intended to protect children from exploitation and harm,” Paxton stated in a press release.

Character AI is part of a rapidly expanding industry of AI companionship applications, the mental health impacts of which remain largely unexamined. Some experts have voiced concerns that these apps could heighten feelings of loneliness and anxiety.

Character AI, founded in 2021 by Google AI researcher Noam Shazeer, was the subject of a reported $2.7 billion “reverse acquihire,” in which Google licensed the startup’s technology and brought Shazeer back on board. The company says it continues to take steps to improve safety and moderation. In December, Character AI rolled out new safety tools, a separate AI model for teenagers, blocks on sensitive content, and more prominent disclaimers notifying users that its AI characters are not real people.

Character AI has gone through a number of personnel changes since Shazeer and co-founder Daniel De Freitas departed for Google. The platform recently hired former YouTube executive Erin Teague as chief product officer and named Dominic Perella, previously its general counsel, interim CEO.

Character AI is also testing web-based games as part of efforts to enhance user engagement and retention.

