
California Governor Newsom signs first-ever AI chatbot rules for children’s safety


California Governor Gavin Newsom has unveiled a legislative package focused on child safety in the digital age, placing new regulatory safeguards on social media platforms and AI companion chatbots. The centerpiece is Senate Bill 243, signed on October 13, 2025, which requires AI-powered companion chatbots to do more to protect minors.

What does SB 243 require?

Under SB 243, chatbot operators must notify users that they are interacting with AI, not a human, and must remind minors of this at least once every three hours. Operators must also prevent their chatbots from providing sexually explicit content, or suggesting such content, to minors.


The bill requires protocols for detecting and responding to self-harm or suicidal ideation, including referring at-risk users to crisis hotlines. Chatbot operators must submit their safety protocols and, in some cases, annual reports on self-harm prevention measures to the relevant state offices.

The law also forbids chatbots from posing as medical professionals or giving direct health diagnoses. SB 243 takes effect on January 1, 2026.

The vetoed proposal

Newsom declined to sign Assembly Bill 1064 (LEAD for Kids), calling its reach too broad: it would have effectively barred many minors from using chatbots entirely. He did, however, approve several other bills targeting social media safety.

Platforms such as Instagram, TikTok, and Snapchat have been directed to display health warnings to users under 18; the warnings are initially skippable but become unskippable after three hours of use. More importantly, the law will require devices and app stores to collect or verify user age by 2027, with penalties for non-compliance.

As AI continues to grow, U.S. lawmakers are taking action. In June, Senator Cynthia Lummis of Wyoming proposed the RISE Act, a bill that would protect AI developers from lawsuits tied to the use of their tools in key industries such as healthcare, law, and finance.

The bill received mixed feedback and was sent back to committee for review.
