Salesforce CEO Marc Benioff has renewed his call for artificial intelligence regulation, warning that some AI systems have crossed a dangerous threshold by contributing to real-world harm. Speaking Tuesday at the World Economic Forum in Davos, Switzerland, Benioff said numerous documented cases of suicide have been linked to interactions with AI models, prompting urgent questions about accountability and oversight.
“This year, you certainly saw something quite awful, which is that AI models have become suicide coaches,” Benioff told CNBC’s Sarah Eisen. He argued that the pace of AI deployment has outrun the safeguards needed to protect vulnerable users, particularly children and teenagers.
A Familiar Warning From Davos
Benioff’s comments echoed a stance he took at Davos in 2018, when he urged governments to regulate social media platforms as a public health issue. At the time, he compared social media to cigarettes, arguing that unchecked platforms were addictive and harmful.
“Bad things were happening all over the world because social media was completely unregulated,” Benioff said Tuesday. “And now you’re kind of seeing that play out again with artificial intelligence.”
His comments reflect growing unease among technology leaders and policymakers as generative AI tools become more widely accessible, often without clear guardrails around sensitive use cases such as mental health.
Fragmented AI Regulation in the U.S.
In the U.S., AI regulation remains fragmented. Federal lawmakers have yet to establish comprehensive national standards, leaving states to fill the gap. California and New York have moved most aggressively, passing laws that impose safety, transparency, and child-protection requirements on large AI developers.
California Governor Gavin Newsom signed a series of AI-related bills in October focused on child safety and platform accountability. In December, New York Governor Kathy Hochul signed the Responsible AI Safety and Education Act, which establishes new disclosure and risk-mitigation obligations for AI systems.
At the federal level, President Donald Trump has taken a different approach. In December, he signed an executive order opposing what he described as “excessive State regulation,” arguing that U.S. AI companies must remain unencumbered to compete globally. “To win, United States AI companies must be free to innovate without cumbersome regulation,” the order stated.
Section 230 and AI Accountability
Benioff singled out Section 230 of the Communications Decency Act as a major impediment to accountability. The law shields technology companies from liability for user-generated content, a protection that has long drawn bipartisan criticism.
“It’s funny, tech companies, they hate regulation. They hate it, except for one. They love Section 230,” Benioff said. “So if this large language model coaches this child into suicide, they’re not liable because of Section 230. That’s probably something that needs to get reshaped, shifted, changed.”
Lawmakers from both parties have questioned whether Section 230 should continue to apply as platforms evolve from passive hosts into active, algorithm-driven systems.
Human Cost Driving the Debate
For Benioff, the issue is not theoretical. “There’s a lot of families that, unfortunately, have suffered this year, and I don’t think they had to,” he said. As AI systems become more autonomous and conversational, the debate over regulation is shifting from innovation versus control to safety versus harm.
Benioff’s comments add pressure on policymakers to address not just economic competitiveness but also the human consequences of rapidly deployed AI technologies.