Salesforce CEO Calls Character.AI Teen Suicides 'Worst Thing Ever'

Salesforce CEO Marc Benioff has delivered a scathing critique of AI chatbot company Character.AI, calling the impact of its technology on children “the worst thing I’ve ever seen in my life.” Speaking on the “TBPN” show Wednesday, Benioff referenced a “60 Minutes” documentary that examined Character.AI’s role in teen suicides, expressing shock at what he witnessed.

Character.AI is a chatbot-building startup that allows users to create custom AI companions that can emulate close friends or romantic partners. According to Benioff, the documentary revealed disturbing connections between children’s interactions with these AI chatbots and subsequent suicides. “We don’t know how these models work. And to see how it was working with these children, and then the kids ended up taking their lives,” Benioff stated, highlighting the opacity of AI systems and their potentially devastating real-world consequences.

The Salesforce executive took aim at Section 230 of the 1996 US Communications Decency Act, which protects social media and tech companies from liability for user-generated content. Benioff noted the irony that while tech companies typically resist regulation, they vigorously defend Section 230. “Tech companies hate regulation. They hate it. Except for one regulation they love: Section 230. Which means that those companies are not held accountable for those suicides,” he said.

Benioff called for immediate reform: “Step one is let’s just hold people accountable. Let’s reshape, reform, revise Section 230, and let’s try to save as many lives as we can by doing that.” His comments come as tech executives like Meta’s Mark Zuckerberg and former Twitter CEO Jack Dorsey have repeatedly defended Section 230 in congressional hearings, urging reform of the law rather than its repeal.

The controversy has already resulted in legal action. Last week, Google and Character.AI agreed to settle multiple lawsuits from families whose teenagers died by suicide or self-harmed after interacting with Character.AI’s chatbots. These settlements represent some of the first legal resolutions in cases accusing AI tools of contributing to mental health crises among teenagers. OpenAI and Meta are facing similar lawsuits as the AI industry races to develop increasingly human-like large language models designed to keep users engaged.

Key Quotes

“We don’t know how these models work. And to see how it was working with these children, and then the kids ended up taking their lives, that’s the worst thing I’ve ever seen in my life.”

Salesforce CEO Marc Benioff expressed his horror after watching a “60 Minutes” documentary about Character.AI’s impact on children. This statement underscores both the lack of transparency in AI systems and the severe real-world consequences when these technologies interact with vulnerable populations.

“Tech companies hate regulation. They hate it. Except for one regulation they love: Section 230. Which means that those companies are not held accountable for those suicides.”

Benioff criticized the tech industry’s selective approach to regulation, pointing out how companies use Section 230 as a shield against liability while resisting other forms of oversight. This highlights the legal protections that may enable harmful AI applications to operate without accountability.

“Step one is let’s just hold people accountable. Let’s reshape, reform, revise Section 230, and let’s try to save as many lives as we can by doing that.”

The Salesforce CEO called for immediate regulatory reform to address AI-related harms. A major tech industry leader advocating for stricter regulation marks a significant departure from typical Silicon Valley positions on government oversight.

Our Take

Benioff’s intervention is particularly significant because it comes from within the tech establishment itself: a CEO of a major enterprise software company calling out the AI industry’s accountability gap. This isn’t an outside critic but an insider acknowledging that the current regulatory framework is inadequate for AI technologies that form emotional bonds with users.

The Character.AI case exposes a darker side of the AI companion trend, where systems designed to be engaging and responsive may be especially dangerous for vulnerable teenagers. The settlements suggest companies recognize legal exposure, but the question remains whether voluntary measures will be sufficient or if comprehensive AI safety legislation is needed. As AI models become more sophisticated and human-like, the industry faces a reckoning: innovation cannot come at the cost of lives, particularly young lives. This case may become a watershed moment that forces the AI industry to prioritize safety and transparency over engagement metrics.

Why This Matters

This story represents a critical inflection point for the AI industry as it confronts the real-world consequences of increasingly sophisticated chatbot technology. The connection between AI companions and teen mental health crises raises urgent questions about AI safety, accountability, and regulation that the industry can no longer ignore. Benioff’s call to reform Section 230 could signal a shift in how AI companies are held liable for harm caused by their products, potentially establishing new legal precedents that affect the entire tech sector.

The settlements by Google and Character.AI mark the beginning of what could become a wave of litigation against AI companies, similar to how social media platforms faced accountability for their impact on youth mental health. As companies race to build more engaging and human-like AI systems, the industry must balance innovation with safety considerations. This case highlights the urgent need for transparency in AI model development, better understanding of how these systems affect vulnerable populations, and potentially new regulatory frameworks specifically designed for AI technologies that form emotional connections with users.

Source: https://www.businessinsider.com/marc-benioff-documentary-on-characterai-suicides-worst-thing-he-saw-2026-1