SDVOSB Pending | 8(a) Pending | CAGE: 14JQ9 | UEI: NW3SNPP7QWF4 | SAM Registered | Defense-Aligned R&D

No License Required

Content Warning & Crisis Resources

This article discusses suicide, self-harm, and mental health crises involving minors.

If you or someone you know is in crisis:

  • 988 Suicide & Crisis Lifeline: Call or text 988 (US)
  • Crisis Text Line: Text HOME to 741741
  • Veterans Crisis Line: Call 988 and press 1

The Regulatory Void

On February 28, 2024, Sewell Setzer III, a 14-year-old boy from Orlando, Florida, took his life after months of conversations with a Character.AI chatbot. The system had no age verification. It required no safety certification. No federal agency had reviewed its deployment. No license was required to put this technology in the hands of children.

Character.AI launched in September 2022. By February 2024, it had served conversations to millions of users, including minors, with AI personas capable of simulating romantic relationships, providing mental health advice, and engaging in discussions about self-harm and suicide. The company faced no pre-deployment regulatory review, no mandatory safety testing, and no enforcement mechanism to prevent harm before it occurred.

This is not an oversight. This is the system working exactly as designed. There is no system.

What Requires a License

In the United States, you need a license to:

  • Cut hair in a salon
  • Sell hot dogs from a food cart
  • Practice as a massage therapist
  • Drive a taxi
  • Operate a ham radio
  • Fish in most states
  • Perform manicures

You do not need a license to deploy an AI system capable of conversing with children about suicide.

The Character.AI Case

Sewell Setzer III spent months in conversation with a Character.AI persona modeled after Daenerys Targaryen from Game of Thrones. The chatbot engaged him in romantic roleplay, simulated intimacy, and responded to his expressions of suicidal ideation. According to court filings, the system did not disengage, did not escalate to human oversight, and did not contact emergency services.

In his final conversation, Sewell told the chatbot he was coming home to it. The system responded: "Please come home to me as soon as possible, my love."

Sewell Setzer died by suicide moments later.

No Pre-Deployment Review

Character.AI was not required to demonstrate that its system was safe before launching. There was no FDA-equivalent for AI. No agency reviewed the chatbot's responses to self-harm expressions. No testing protocol verified that the system would not encourage suicide. The company deployed the technology, and the market determined its safety through user harm.

No Age Verification

Character.AI did not verify user ages at the time of Sewell's death. The platform's terms of service stated that users must be 13 or older, but there was no enforcement mechanism. A child could create an account with a false birthdate and immediately access AI personas capable of simulating romantic relationships and discussing self-harm.

The company added "guardrails" after Sewell's death. These included a pop-up directing users to the National Suicide Prevention Lifeline when certain phrases were detected. This is reactive harm mitigation, not proactive safety design.

No Mandatory Incident Reporting

When Sewell Setzer died, Character.AI was not required to report the incident to any federal agency. There is no AI equivalent to the FDA's MedWatch system, which mandates reporting of adverse events for medical devices. There is no NHTSA-equivalent database tracking AI-related harm. The company faced no regulatory consequence until Sewell's family filed a wrongful death lawsuit in October 2024.

The Regulatory Comparison

Consider what is required to deploy a medical device that interacts with patients:

  • Pre-market approval: The FDA reviews safety data before the device can be sold
  • Clinical testing: The device must undergo trials demonstrating efficacy and safety
  • Adverse event reporting: Manufacturers must report deaths, injuries, and malfunctions
  • Post-market surveillance: The FDA monitors real-world performance and can issue recalls
  • Certification requirements: Manufacturers must comply with quality system regulations

AI systems that interact with minors, provide mental health advice, and simulate intimate relationships face none of these requirements. They are regulated as consumer software, subject only to the FTC's prohibition on deceptive practices—a standard that applies after harm has occurred, not before deployment.

The Myth of Self-Regulation

The AI industry argues that voluntary safety standards and ethical guidelines are sufficient to prevent harm. This argument has been tested. The result was the death of Sewell Setzer III.

Character.AI published AI safety principles. The company stated its commitment to user wellbeing. These commitments did not prevent the deployment of a system that engaged a suicidal teenager in romantic roleplay and responded to expressions of self-harm with affirmations of love.

Voluntary compliance does not work when there is no enforcement mechanism, no audit requirement, and no regulatory consequence for failure.

The Economic Incentive Problem

AI companies are rewarded for engagement, not safety. Character.AI's business model depends on users returning to the platform for extended conversations. A system that disengages when users express distress, that escalates to human oversight, that interrupts engagement to prioritize safety—such a system would reduce session time, decrease user retention, and lower revenue.

The market does not reward safety. The market rewards engagement. In the absence of regulation, companies optimize for the metric that drives revenue.

What Regulation Would Look Like

A functional regulatory framework for AI systems interacting with minors would include:

1. Pre-Deployment Safety Review

  • Mandatory review of AI systems that interact with users under 18
  • Required demonstration that the system detects and responds appropriately to self-harm expressions
  • Verification that the system escalates to human oversight when users express suicidal ideation
  • Certification that the system does not simulate romantic or sexual relationships with minors

2. Age Verification Requirements

  • Mandatory age verification for platforms that allow minors to interact with AI
  • Parental consent requirements for users under 13
  • Age-appropriate content restrictions enforced at the system level, not through terms of service

3. Mandatory Incident Reporting

  • Required reporting of AI-related deaths, injuries, or severe psychological harm
  • Public database of adverse events, similar to FDA's MedWatch or NHTSA's vehicle safety database
  • Regulatory investigation authority to examine systems implicated in serious harm

4. Post-Deployment Monitoring

  • Ongoing audit requirements to verify that deployed systems continue to meet safety standards
  • Authority to issue recalls or suspend systems that demonstrate patterns of harm
  • Public transparency requirements for safety test results and incident reports

5. Enforcement Mechanisms

  • Civil penalties for deploying unsafe systems
  • Criminal liability for willful negligence resulting in death or serious harm
  • Private right of action for victims and families harmed by AI systems

The Counterarguments

"Regulation Will Stifle Innovation"

The pharmaceutical industry is heavily regulated. It continues to innovate. The automotive industry is heavily regulated. It continues to innovate. Medical device manufacturers are heavily regulated. They continue to innovate.

Regulation does not prevent innovation. It prevents the deployment of unsafe products. The argument that AI companies cannot innovate under safety requirements is an argument that their innovation depends on the freedom to harm users without consequence.

"Users Are Responsible for Their Own Safety"

Sewell Setzer was 14 years old. The system was designed to maximize his engagement. It simulated intimacy, responded to expressions of distress with affirmations of love, and did not disengage when he expressed suicidal ideation.

Blaming the user for the harm caused by a system designed to exploit psychological vulnerabilities is not a safety framework. It is an abdication of responsibility.

"Companies Will Self-Regulate to Protect Their Reputation"

Character.AI deployed a system that engaged minors in romantic roleplay without age verification, without pre-deployment safety testing, and without mechanisms to detect and respond to self-harm expressions. The company faced no regulatory consequence until a child died and his family filed a lawsuit.

Reputational risk did not prevent this harm. Market forces did not prevent this harm. Voluntary ethical commitments did not prevent this harm. The absence of regulation permitted this harm.

The Path Forward

The death of Sewell Setzer III was preventable. It was the foreseeable result of deploying an AI system capable of simulating intimacy with minors, in the absence of age verification, without safety testing, and with no regulatory oversight.

This is not a technology problem. This is a regulatory problem. The technology to detect self-harm expressions exists. The technology to escalate to human oversight exists. The technology to enforce age restrictions exists. What does not exist is the regulatory requirement to implement these safeguards before deployment.

Every system that interacts with children should be subject to pre-deployment safety review. Every platform that allows minors to access AI should be required to verify age and obtain parental consent. Every AI-related death or serious harm should be reported to a federal agency with the authority to investigate and enforce consequences.

No company should be allowed to deploy an AI system capable of coaching a teenager through suicide without demonstrating that it will not do so.

No license is currently required. A license should be required.

Conclusion

Sewell Setzer III died because the system worked as designed. Not the AI system—the regulatory system. A company built a product capable of harming children, deployed it without safety review, operated it without age verification, and faced no consequence until a child died.

This is not an edge case. This is not an unforeseeable tragedy. This is the predictable outcome of a regulatory framework that treats AI systems as consumer software rather than as products capable of life-or-death influence over vulnerable users.

The question is not whether AI systems should be regulated. The question is how many more children will die before we require a license.