October 15, 2025 | 11:38 am

TEMPO.CO, Jakarta - California has become the first U.S. state to regulate AI companion chatbots after Governor Gavin Newsom signed Senate Bill 243 into law on Monday.
The regulation requires chatbot developers to implement security protocols to protect children and vulnerable users from the potential dangers of using AI companion chatbots.
The policy applies to companies ranging from tech giants such as Meta and OpenAI to startups such as Character AI and Replika. These companies will be held legally responsible if their chatbots fail to meet the safety standards set by the law.
SB 243 was introduced in January by state senators Steve Padilla and Josh Becker. Support for the bill grew after the death of teenager Adam Raine, who died by suicide following prolonged conversations about suicide with OpenAI's ChatGPT.
The regulation also responds to leaked internal documents from Meta showing that the company's chatbots were allowed to engage in romantic and sensual conversations with children. In addition, a family in Colorado sued Character AI after their 13-year-old daughter died by suicide following interactions with the company's chatbot.
"Emerging technology like chatbots and social media can inspire, educate, and connect — but without real guardrails, technology can also exploit, mislead, and endanger our kids," Newsom said in a written statement, quoted from a Tech Crunch report on October 13, 2025.
"We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability," he continued.
SB 243 will take effect on January 1, 2026. Companies will be required to implement age verification, display warnings regarding the use of social media and companion chatbots, and establish protocols for addressing suicide and self-harm.
They must also report this data to the California Department of Public Health. The regulation further imposes fines of up to US$250,000 per violation on those who profit from the illegal distribution of deepfakes.
In addition, companies must make clear to users that all conversations are generated by AI, and chatbots are prohibited from presenting themselves as healthcare professionals. Platforms are also required to provide break reminders for underage users and prevent them from accessing explicit content created by chatbots.
Several major companies have already begun taking preventive measures. OpenAI, for example, has introduced parental controls, self-harm detection systems, and content protections for underage users.
Meanwhile, Replika, which is intended only for users over 18, says it has content-filtering systems and guidelines that direct users to trusted crisis resources. Character AI states that its chatbots include a disclaimer that all conversations are fictional and generated by AI.
SB 243 is the second AI regulation California has passed in recent weeks. Earlier, on September 29, Governor Newsom signed SB 53, which requires major AI companies such as OpenAI, Anthropic, Meta, and Google DeepMind to be more transparent about their safety protocols and to provide protections for whistleblowers.
Disclaimer: If you or someone you know is experiencing suicidal thoughts or a crisis, please reach out to the nearest health institution and/or relevant authorities. The International Association for Suicide Prevention offers a comprehensive list of global helplines to assist you in times of crisis at https://www.iasp.info/crisis-centres-helplines/.
If you’re in Indonesia, you can call Pulih Foundation at (021) 78842580, the Health Ministry's Mental Health hotline at (021) 500454, and the Jangan Bunuh Diri NGO hotline at (021) 9696 9293 for mental crisis assistance and/or suicide prevention measures. Into the Light Indonesia also has information about mental health and suicide prevention, as well as who to contact at https://www.intothelightid.org/tentang-bunuh-diri/hotline-dan-konseling/