California expands AI legal oversight

California’s attorney general is strengthening the state’s artificial intelligence enforcement framework while pursuing an investigation into Elon Musk’s xAI over sexually explicit content generated by its chatbot, underscoring an assertive regulatory stance at state level.

Rob Bonta said his office is building an artificial intelligence accountability programme as it examines xAI’s generation of non-consensual sexually explicit images. The California attorney general’s office sent a cease-and-desist letter to the company last month, after regulators around the world began investigating sexualised content produced by its AI chatbot, Grok, involving adults and potentially minors. Bonta said his office is seeking confirmation that the conduct has stopped and remains in discussions with the company.

He added that xAI had deflected responsibility and continues to permit some sexualised content generation for paying subscribers. “Just because you stop going forward doesn’t mean you get a pass on what you did,” Bonta said. The company, recently acquired by Musk’s SpaceX, did not respond to a request for comment. In January, xAI said it had introduced safeguards against requests for sexualised images of real people, such as editing outputs to depict individuals in swimwear, and that it blocks such content in jurisdictions where it is illegal.

The enforcement effort comes as California positions itself as an AI watchdog despite calls from industry groups and some Republican lawmakers for states to defer to federal authorities. Bonta cautioned against granting Congress exclusive regulatory authority, citing previous legislative gridlock on data protection and artificial intelligence. His office is expanding in-house expertise through a dedicated AI oversight, accountability and regulation programme. He described sexually explicit chatbot interactions with young people, and chatbots that offer guidance on self-harm, as unacceptable.

The state legislature is considering a bill that would require the attorney general’s office to formalise an AI expertise programme. In a joint interview with Bonta, Connecticut Attorney General William Tong characterised AI-related harm as a defining consumer protection battle, suggesting its scale could surpass that of the opioid crisis.