Tech Regulatory Update - California's AB 1043, AB 56, and SB 243 Mandate Safeguards for Underage Social Media Use, Companion AI Safety, and Accountability
Exposure to inappropriate content, including violent, sexual, or disturbing material, can be traumatic for developing minds. Cyberbullying and online harassment can lead to anxiety, depression, and social isolation, while predators may attempt to groom or exploit children through social media, gaming platforms, and messaging apps.
Additionally, excessive screen time and addictive app design can interfere with sleep, physical activity, and real-world social development. Children may also inadvertently share personal information or fall victim to scams. Many approaches that attempt to solve these problems through age verification carry serious trade-offs in privacy, security, or effectiveness. California has passed three bills to help regulate underage use of social media and AI applications such as chatbots.
Under AB 1043, app stores and operating systems must sort users into one of four age brackets:
- 12 years old and under
- 13-15 years old
- 16-17 years old
- 18 years old and over
Signals: App developers will receive a "signal" from app stores and operating systems indicating the user's age bracket. Developers receiving this signal are then "deemed to have actual knowledge" of their users' age range (a conceptual sketch of handling such a signal follows this list).
Data Sharing: App developers are prohibited from sharing age data with any third parties for purposes unrelated to legal compliance.
Penalties: The California Attorney General has the power to enforce the law, with penalties including injunctive relief and fines of up to $2,500 per affected child for negligent violations and $7,500 per affected child for intentional violations.
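AB 1043 defines the brackets and the legal effect of the signal, but not a concrete API. The TypeScript sketch below is purely illustrative of how an app might consume such a signal; the AgeSignal type and every function name here are assumptions, not anything specified by the bill or by any platform.

```typescript
// Hypothetical sketch of consuming an AB 1043-style age signal.
// The bill defines the brackets and their legal effect, not an API;
// all types and function names here are illustrative assumptions.

type AgeBracket = "under13" | "13-15" | "16-17" | "18plus";

interface AgeSignal {
  userId: string;      // app-scoped user identifier (assumed)
  bracket: AgeBracket; // age range supplied by the OS / app store
}

// Once the signal is received, the developer is "deemed to have actual
// knowledge" of the user's age range and must gate features accordingly.
function applyAgeSignal(signal: AgeSignal): void {
  const isMinor = signal.bracket !== "18plus";
  if (isMinor) {
    enableMinorSafeguards(signal.userId);
  }
  // AB 1043 bars sharing age data with third parties except for
  // legal-compliance purposes, so the signal stays first-party.
  storeForComplianceOnly(signal);
}

// Placeholder hooks; real implementations are app-specific.
function enableMinorSafeguards(userId: string): void { /* ... */ }
function storeForComplianceOnly(signal: AgeSignal): void { /* ... */ }
```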
California's action follows legislative efforts in states like Utah and Texas focused on regulating AI and social media interactions with minors. This law carries particular weight given California's status as a global hub for tech companies and its history of setting de facto national standards for technology regulation.
The new tech legislation has received industry support, with companies like OpenAI praising the measure as a "meaningful move forward" for AI safety standards. Governor Newsom also signed a comprehensive package of related bills on the same day, including SB 243 and AB 56, signaling California's broad commitment to youth digital safety.
SB 243 makes California the first state to mandate specific safety safeguards for AI companion chatbots used by minors. The legislation is a direct response to mounting public health concerns and several high-profile incidents involving teen self-harm and suicide allegedly linked to interactions with conversational AI. With an effective date of January 1, 2026, SB 243 establishes a new regulatory baseline for the companion AI industry.
California Assembly Bill 56 (AB 56) requires social media platforms to display warning labels about the mental health risks associated with their use, particularly for minors. This legislation aims to address the youth mental health crisis linked to excessive social media usage by informing users of potential harms.
AB 56 Provisions
Warning Labels
Display Requirements - The warning must appear:
- Upon the user's first login each day.
- After three hours of cumulative use.
- At least once every hour thereafter.
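To make that cadence concrete, here is a minimal TypeScript sketch of the display-timing logic, assuming only the thresholds above; the SessionState shape and field names are illustrative, not anything prescribed by AB 56.

```typescript
// Minimal sketch of AB 56's warning-label cadence. The thresholds come
// from the bill; the session-tracking shape is an assumption.

const THREE_HOURS_MS = 3 * 60 * 60 * 1000;
const ONE_HOUR_MS = 60 * 60 * 1000;

interface SessionState {
  lastLoginDate: string;          // local date of the last login, e.g. "2026-01-01"
  cumulativeUseMs: number;        // total use accumulated today
  lastWarningAtMs: number | null; // cumulative-use mark when the last warning showed
}

function shouldShowWarning(state: SessionState, todayDate: string): boolean {
  // First login of the day always triggers the warning.
  if (state.lastLoginDate !== todayDate) return true;

  // Before three hours of cumulative use, no further warnings are due.
  if (state.cumulativeUseMs < THREE_HOURS_MS) return false;

  // At the three-hour mark, then at least once every hour thereafter.
  if (state.lastWarningAtMs === null) return true;
  return state.cumulativeUseMs - state.lastWarningAtMs >= ONE_HOUR_MS;
}
```

In practice a platform would also need to persist this state across devices and reset the counters at local midnight; those details are omitted here.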
SB 243 Provisions
Safety Protocols
- Suicidal Ideation Monitoring - Chatbots must monitor conversations for signs of suicidal ideation and refer users to mental health resources.
- Content Restrictions - Measures must be in place to prevent minors from accessing sexually explicit content.
- User Notifications - Chatbots are required to remind users that they are interacting with an AI, not a human.
- Break Reminders - Minors must receive reminders to take breaks every three hours during use.
- Annual Reporting - Operators must report to the California Department of Public Health on instances of suicidal ideation detected and actions taken.
- Regular Audits - Chatbot platforms will undergo third-party audits to ensure compliance with the law.
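As a rough sketch of how two of these duties (the AI disclosure and the three-hour break reminder) might look in code, consider the following TypeScript. SB 243 mandates behavior, not an implementation, so every type and name here is hypothetical.

```typescript
// Illustrative sketch of two SB 243 duties: disclosing the AI and
// reminding minors to take breaks every three hours of use.
// All types and names are hypothetical assumptions.

const BREAK_INTERVAL_MS = 3 * 60 * 60 * 1000; // three hours

interface ChatSession {
  isMinor: boolean;              // e.g. derived from an AB 1043 age signal
  lastBreakReminderAtMs: number; // session start, or the last reminder shown
}

// User notification: every session opens by disclosing the AI.
function disclosureMessage(): string {
  return "Reminder: you are chatting with an AI, not a human.";
}

// Break reminders: minors are nudged every three hours of use.
function maybeBreakReminder(session: ChatSession, nowMs: number): string | null {
  if (!session.isMinor) return null;
  if (nowMs - session.lastBreakReminderAtMs >= BREAK_INTERVAL_MS) {
    session.lastBreakReminderAtMs = nowMs;
    return "You've been chatting for three hours. Consider taking a break.";
  }
  return null;
}
```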
