On Oct. 13, 2025, California Governor Gavin Newsom signed into law Senate Bill 243 – Companion Chatbots. SB 243, authored by Senator Steve Padilla, requires operators of companion chatbot platforms to notify users that the chatbot is AI, provide specific disclosures to minors, and restrict harmful content. The law also includes a private right of action.
The law is in response to mounting public concerns about children’s online interactions with companion chatbots. In his press release following the signing of multiple children’s online safety bills, Newsom highlighted this public concern. “Emerging technology like chatbots and social media can inspire, educate, and connect – but without real guardrails, technology can also exploit, mislead, and endanger our kids. We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability. We can continue to lead in AI and technology, but we must do it responsibly — protecting our children every step of the way. Our children’s safety is not for sale.”
The law goes into effect January 1, 2026, with reporting requirements starting on July 1, 2027.
Scope
The law applies to operators, defined as persons who make a companion chatbot platform available to users in California. It defines a companion chatbot as “an artificial intelligence system with a natural language interface that provides adaptive, human-like responses to user inputs and is capable of meeting a user’s social needs, including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions.”
The law excludes the following from the definition of “companion chatbot”:
A bot that is used only for customer service, a business’ operational purposes, productivity and analysis related to source information, internal research, or technical assistance.
A bot that is a feature of a video game and is limited to replies related to the video game that cannot discuss topics related to mental health, self-harm, sexually explicit conduct, or maintain a dialogue on other topics unrelated to the video game.
A stand-alone consumer electronic device that functions as a speaker and voice command interface, acts as a voice-activated virtual assistant, and does not sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses in the user.
Key Provisions
Notice and Disclosure Obligations
The law outlines specific disclosure requirements for both general users and minors.
General Users
The law requires that, if a reasonable person would be misled into believing they are interacting with a human, the operator must issue a clear and conspicuous notification that the companion chatbot is artificially generated and not human.
Minors
For users the operator knows to be minors, operators must not only disclose that the user is interacting with artificial intelligence; they must also provide, by default, a clear and conspicuous notification at least every three hours during continuing companion chatbot interactions reminding the user to take a break and that the chatbot is artificially generated and not human.
Additionally, the law requires operators to disclose, on the application, the browser, or any other format through which users can access the chatbot platform, that the companion chatbot may not be suitable for some minors.
Safety Protocols and Transparency Measures
In addition to its disclosure requirements, the law requires operators to implement, and publish on their websites, safety protocols and transparency measures.
Under the law, companion chatbots may not engage with users unless the operator maintains a protocol that:
prevents the production of content related to suicidal ideation, suicide, or self-harm.
provides users who express suicidal ideation or self-harm with a notice referring them to crisis services, such as a suicide hotline or crisis text line.
Content Restrictions for Minors
The law requires operators to implement reasonable measures to prevent companion chatbots from producing visual material depicting sexually explicit conduct or from directly stating that a minor should engage in such conduct.
Reporting Requirements
Effective July 1, 2027, operators must submit an annual report to California’s Office of Suicide Prevention detailing:
The number of times they have issued a crisis service provider referral notification in the preceding calendar year.
Protocols put in place to detect, remove, and respond to instances of suicidal ideation* by users.
Protocols put in place to prohibit a companion chatbot response about suicidal ideation* or actions with the user.
*The law requires that suicidal ideation be measured using evidence-based methods.
The law specifies that such reports must exclude any user identifiers or personal information. California’s Office of Suicide Prevention will publish data from these reports on its website.
Private Right of Action
The law creates a private right of action for any person who suffers injury in fact as a result of a violation of the law and allows them to pursue:
Injunctive relief.
Damages in an amount equal to the greater of actual damages or one thousand dollars ($1,000) per violation.
Reasonable attorney’s fees and costs.
Key Takeaways
Companion chatbot operators should develop protocols to ensure compliance with the law, including:
providing required user notification and disclosures,
identifying and responding to user expressions of self-harm,
identifying and restricting content in scope, and
compiling and submitting required reports.
This legislation was signed alongside a broader package of child online safety laws, including the Digital Age Assurance Act (AB 1043), which establishes new online age-assurance requirements. Together, these measures contribute to a growing framework of children’s online safety laws in California.
See our blog post on the Digital Age Assurance Act.
Clara De Abreu E Souza is an Associate at Hintze Law PLLC. She has experience with artificial intelligence, data privacy, and the regulation of emerging technologies, including evolving state and federal privacy laws, algorithmic accountability, and health data governance.
Hintze Law PLLC is a Chambers-ranked and Legal 500-recognized, boutique law firm that provides counseling exclusively on global privacy, data security, and AI law. Its attorneys and data consultants support technology, ecommerce, advertising, media, retail, healthcare, and mobile companies, organizations, and industry associations in all aspects of privacy, data security, and AI law.