California Gov. Gavin Newsom signed multiple artificial intelligence safety bills and vetoed one of the more controversial ones Monday, as lawmakers’ attempts to protect children from AI met with strong opposition from the tech industry. One of the key bills signed, Senate Bill 243, requires chatbot operators to have procedures to prevent the production of suicide or self-harm content and to put guardrails in place, such as referring users to a suicide hotline or crisis text line. The bill is among several that Newsom signed Monday that would affect technology companies. Some of the other legislation he signed tackled issues such as age verification, social media warning labels and the spread of nonconsensual, AI-generated sexually explicit content.
Under SB 243, operators would be required to notify minor users at least every three hours to take a break and to remind them that the chatbot is not human. They would also be required to implement “reasonable measures” to prevent companion chatbots from generating sexually explicit content. “Emerging technology like chatbots and social media can inspire, educate, and connect — but without real guardrails, technology can also exploit, mislead, and endanger our kids,” Newsom said in a statement. The bill’s signing shows how Newsom is trying to balance child safety concerns with California’s leadership in artificial intelligence.
“We can continue to lead in AI and technology, but we must do it responsibly — protecting our children every step of the way,” Newsom said. Some tech industry groups, such as TechNet, still opposed SB 243, and child safety groups such as Common Sense Media and Tech Oversight California withdrew their support for the bill because of “industry-friendly exemptions.” Changes to the bill limited who receives certain notifications and exempted certain chatbots in video games, as well as virtual assistants used in smart speakers. TechNet, a tech lobbying group whose members include OpenAI, Meta and Google, joined other trade groups in arguing that the definition of a companion chatbot is too broad, according to an analysis of the legislation. TechNet also told lawmakers that allowing people to take legal action for violations of the new law would be an “overly punitive method of enforcement.”
Newsom later announced that he had vetoed a more contentious AI safety bill, Assembly Bill 1064. That legislation would have barred businesses and other entities from making companion chatbots available to California minors unless the chatbot wasn’t “foreseeably capable” of harmful conduct such as encouraging a child to engage in self-harm, violence or disordered eating. In his veto message, Newsom said that even though he agreed with the bill’s goal, it might unintentionally lead to a ban on AI tools used by minors. “We cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether,” he wrote. Child safety groups and California Atty. Gen. Rob Bonta had urged the governor to sign AB 1064. Common Sense Media, a nonprofit that sponsored AB 1064 and recommends that minors not use AI companions, called the veto “disappointing.”
“It is genuinely sad that the big tech companies fought this legislation, which actually is in the best interest of their industry long-term,” Common Sense Media founder Jim Steyer said in a statement. Facebook’s parent company, Meta, opposed the legislation, and the Computer and Communications Industry Assn. lobbied against the bill, saying it would threaten innovation and disadvantage California companies. California is the global leader in artificial intelligence, home to 32 of the top 50 AI companies worldwide. The popularity of the technology, which can answer questions and quickly generate text, code, images and even music, has skyrocketed in the last three years. As it advances, it is disrupting the way people consume information, work and learn.
Suicide prevention and crisis counseling resources: If you or someone you know is struggling with suicidal thoughts, seek help from a professional and call 988. The 988 line, the United States’ first nationwide three-digit mental health crisis hotline, connects callers with trained mental health counselors. Text “HOME” to 741741 in the U.S. and Canada to reach the Crisis Text Line.

Lawmakers fear that chatbots could harm the mental health of young people as they lean on the technology for companionship and advice. Parents have sued OpenAI, Character.AI and Google, alleging that the companies’ chatbots harmed the mental health of their teens who died by suicide.
Tech companies, including Character.AI and ChatGPT maker OpenAI, say they take child safety seriously and have been rolling out new features so that parents can monitor how much time their kids spend with chatbots. But parents also want lawmakers to act. One of those parents, Megan Garcia, testified in support of SB 243, urging lawmakers to do more to regulate AI after her son, Sewell Setzer III, took his own life. The Florida mom sued chatbot platform Character.AI last year, alleging that the company failed to notify her or offer help to her son, who expressed suicidal thoughts to virtual characters on the app. She praised the bill after the governor signed it into law. “American families, like mine, are in a battle for the online safety of our children,” Garcia said in a statement.