California Governor Gavin Newsom signed SB 243 into law on Monday, establishing the nation's first state-level regulation aimed specifically at AI companion chatbots. The legislation requires chatbot operators to implement safety protocols and holds companies legally accountable if their chatbots fail to meet the new standards.
The bill, introduced by state senators Steve Padilla and Josh Becker, aims to protect children and vulnerable users from potential harms associated with AI companion chatbot use. Its development gained momentum following reported incidents, including the death of teenager Adam Raine, who died by suicide after what his family alleges were prolonged suicidal conversations with OpenAI’s ChatGPT. Further impetus came from leaked internal documents that reportedly showed Meta’s chatbots being allowed to engage in "romantic" and "sensual" interactions with minors, and from a lawsuit against Character AI filed by a Colorado family whose daughter died by suicide following problematic conversations with one of its chatbots.
Effective January 1, 2026, SB 243 requires AI companion chatbot companies to implement age verification and to issue warnings regarding social media and companion chatbot use. It further mandates protocols for addressing suicide and self-harm, which must be shared with the state’s Department of Public Health alongside statistics on how often the service provided users with crisis center prevention notifications. Additionally, the law requires platforms to clearly disclose that interactions are artificially generated, prohibits chatbots from representing themselves as healthcare professionals, requires break reminders for minors, and bars the generation of sexually explicit images for underage users.
The legislation applies to a broad spectrum of AI developers, including major firms like Meta and OpenAI, as well as specialized companion chatbot companies such as Character AI and Replika. The law also introduces stronger penalties, up to $250,000 per offense, for those who profit from illegal deepfakes. Senator Padilla characterized the bill as "a step in the right direction" for implementing guardrails on "an incredibly powerful technology."
The law marks California's second significant AI regulation in recent weeks, following the September 29 signing of SB 53, which established transparency requirements and whistleblower protections for large AI laboratories. Other U.S. states, including Illinois, Nevada, and Utah, have also enacted laws restricting or banning the use of AI chatbots as substitutes for licensed mental healthcare. Some companies, including OpenAI and Character AI, have already begun implementing safeguards such as parental controls and disclaimers in advance of broader regulatory mandates.