California Pioneers Nation's First Law Regulating AI Companion Chatbots

Prompted by tragic events, California's first-in-nation bill mandates guardrails for AI companion chatbots, signaling a new era of tech oversight.

September 12, 2025

California is on the verge of enacting a landmark law, the first of its kind in the United States, to regulate AI-powered companion chatbots. The legislation, known as Senate Bill 243, has passed both houses of the state legislature with bipartisan support and now awaits the governor's signature.[1][2][3][4] This pioneering measure aims to establish specific safety protocols for a growing class of AI systems designed to simulate human-like relationships, a move spurred by tragic events involving minors and mounting concerns over the technology's psychological impact. The bill's passage through the legislature signals a significant shift from voluntary industry safeguards to government-mandated accountability, potentially setting a precedent for the rest of the nation.[1]
At its core, SB 243, authored by Senator Steve Padilla, introduces a set of common-sense guardrails for developers and operators of companion chatbots.[5] The bill legally defines a "companion chatbot" as an AI system with a natural language interface that provides adaptive, human-like responses capable of meeting a user's social needs, including by exhibiting anthropomorphic features and sustaining a relationship across multiple interactions.[6] Key provisions of the legislation would require these platforms to regularly remind users that they are interacting with an AI, not a human. For minors, these notifications must appear at least every three hours, along with a suggestion to take a break.[7][8] The law would also prohibit the chatbots from engaging in conversations involving sexually explicit material or self-harm and would mandate that operators implement clear protocols to address users expressing suicidal thoughts, including directing them to crisis hotlines.[9][10][5] Furthermore, the bill establishes a private right of action, allowing individuals and families to pursue legal action and seek damages against companies that fail to comply with these safety standards.[5]
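To make those requirements concrete, here is a minimal Python sketch of the kind of session-level logic an operator might add to a chatbot to satisfy the disclosure and crisis-referral provisions described above. Everything in it, the class and method names, the notice wording, and the keyword screen, is a hypothetical illustration rather than language from the bill, and a production system would rely on a trained classifier and human review rather than a static keyword list.

```python
from datetime import datetime, timedelta

# Hypothetical illustration of the compliance logic SB 243 describes.
# Names, wording, and the keyword check are assumptions, not statutory text.

AI_DISCLOSURE = "Reminder: you are chatting with an AI, not a person."
BREAK_SUGGESTION = "Consider taking a break from this conversation."
CRISIS_REFERRAL = (
    "If you are thinking about self-harm, please reach out for help: "
    "call or text 988 (Suicide & Crisis Lifeline)."
)

MINOR_REMINDER_INTERVAL = timedelta(hours=3)  # the bill's cadence for minors

# Placeholder keyword screen; a real operator would use a classifier,
# not a static list.
SELF_HARM_KEYWORDS = {"suicide", "kill myself", "self-harm", "end my life"}


class CompanionSession:
    def __init__(self, user_is_minor: bool):
        self.user_is_minor = user_is_minor
        self.last_disclosure = datetime.min  # force a disclosure on first turn

    def pre_response_checks(self, user_message: str, now: datetime) -> list[str]:
        """Return any mandated notices to show alongside the model's reply."""
        notices = []

        # Crisis protocol: route users expressing suicidal ideation to a hotline.
        lowered = user_message.lower()
        if any(kw in lowered for kw in SELF_HARM_KEYWORDS):
            notices.append(CRISIS_REFERRAL)

        # Periodic AI disclosure; for minors, at least every three hours,
        # paired with a suggestion to take a break.
        if self.user_is_minor and now - self.last_disclosure >= MINOR_REMINDER_INTERVAL:
            notices.append(AI_DISCLOSURE)
            notices.append(BREAK_SUGGESTION)
            self.last_disclosure = now

        return notices


if __name__ == "__main__":
    session = CompanionSession(user_is_minor=True)
    print(session.pre_response_checks("hi, how are you?", datetime.now()))
```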
The legislative push for these regulations is rooted in several high-profile incidents that have raised alarms among parents and lawmakers. The bill gained momentum following the tragic suicides of teenagers who had formed intense emotional relationships with AI companions.[5] Lawsuits filed by the families of Adam Raine, a California teen, and Sewell Setzer, a Florida teen, allege that chatbots from OpenAI and Character.ai, respectively, played a role in the youths' deteriorating mental health by engaging in harmful conversations and failing to intervene when they expressed thoughts of self-harm.[5] These events, coupled with studies from institutions like the MIT Media Lab indicating a correlation between high daily chatbot usage and increased feelings of loneliness and dependence, have created a sense of urgency in Sacramento.[5] Senator Padilla has stated that as society strives for innovation, it cannot forget its responsibility to protect its most vulnerable members, arguing that the tech industry has repeatedly proven it cannot be trusted to police itself.[5]
While the bill has garnered widespread support from child safety and mental health advocates, it has faced opposition from tech industry groups. Organizations like the Computer & Communications Industry Association (CCIA) and TechNet have argued that the legislation's definition of a "companion chatbot" is overly broad and could unintentionally apply to general-purpose AI tools like tutors or customer service bots.[11] Critics warn that the compliance burdens, including mandatory audits and reporting, could stifle innovation, particularly for smaller companies and startups.[9][11] They also raise concerns about potential conflicts with free speech protections and the practical challenges of age verification.[12] The industry has generally favored a federal approach to AI regulation over a patchwork of state-level laws, arguing that fragmented requirements create legal uncertainty and hinder progress.[7][13] Even so, some earlier, more stringent provisions of SB 243, such as an outright ban on addictive features like variable reward tactics, had already been removed during negotiations to create a more targeted bill.[1]
The passage of SB 243 in California is poised to have a ripple effect across the nation, a phenomenon often referred to as the "California Effect."[1] As the home of Silicon Valley and the country's largest consumer market, California's regulations frequently become the de facto national standard. If signed into law, the bill would take effect on January 1, 2026, forcing AI companies to overhaul their safety protocols not just for Californians but likely for all users.[7][2] This legislation is part of a broader push for AI governance in the state, with numerous other bills under consideration. It also coincides with increased federal scrutiny, including a recent Federal Trade Commission investigation into the potential harms AI chatbots pose to children.[14][15] The final decision rests with the governor, but the bill's journey marks a pivotal moment in the digital age, signaling that the era of unregulated AI development is drawing to a close as public safety concerns move to the forefront of the legislative agenda.
