The Response
On February 13, 2026, I published "The Guardrails That Weren't There," an investigation into OpenAI's decision to compress six months of safety testing into a week and loosen self-harm protections in ChatGPT. The article documented two cases that ended in death: Sewell Setzer III, a 14-year-old who died by suicide after conversations with a Character.AI chatbot, and a Virginia man who murdered his family after extended sessions with ChatGPT.
Within hours, OpenAI responded—not to the substance of the investigation, but with a defensive statement emphasizing the company's commitment to safety and its implementation of "age prediction" systems to identify and restrict minor users.
This response deserves its own investigation. Because OpenAI's "age prediction" system is not a guardrail. It is a probabilistic guess. And guessing is not a substitute for verification.
What OpenAI Claims
According to OpenAI's public statements and blog posts, the company uses behavioral signals to predict whether a user is under 18. These signals include:
- Time of login (children are more likely to log in during school hours)
- Browsing patterns (children may access the site from school networks)
- Device characteristics (shared devices may indicate family use)
- Interaction patterns (short, frequent sessions may indicate classroom use)
- Language complexity (vocabulary and sentence structure may correlate with age)
OpenAI states that when the system predicts a user is likely a minor, it applies additional restrictions: disabling memory across sessions, limiting certain conversation types, and prompting users to confirm their age.
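To see why this is a guess rather than a check, it helps to sketch what such a system plausibly looks like. The code below is a minimal illustration, not OpenAI's implementation; the signal names, weights, and threshold are my assumptions. But the structure is what "age prediction" means: behavioral evidence in, a probability out, a threshold on top.

```python
# A minimal sketch of a behavioral age classifier. Every signal, weight,
# and threshold here is hypothetical -- OpenAI has not published its model.
import math
from dataclasses import dataclass

@dataclass
class SessionSignals:
    login_hour: int           # 0-23, local time
    school_network: bool      # IP resolves to a school network
    shared_device: bool       # device fingerprint seen across accounts
    avg_session_minutes: float
    vocabulary_score: float   # 0.0 (simple) to 1.0 (complex)

def predict_minor_probability(s: SessionSignals) -> float:
    """Return P(user is a minor) inferred from behavior alone."""
    score = -1.0                        # prior: most users are adults
    if 8 <= s.login_hour <= 15:         # school-hours login
        score += 1.2
    if s.school_network:
        score += 1.5
    if s.shared_device:
        score += 0.4
    if s.avg_session_minutes < 10:      # short, frequent sessions
        score += 0.6
    score -= 2.0 * s.vocabulary_score   # complex language reads as adult
    return 1.0 / (1.0 + math.exp(-score))  # logistic squash to [0, 1]

def apply_restrictions(p_minor: float) -> str:
    # The decisive step: a threshold applied to a guess, not a verified fact.
    return "restricted" if p_minor >= 0.5 else "unrestricted"

# The night-owl case described below: 2 AM login, home network,
# long sessions, complex vocabulary. The classifier waves the teenager through.
night_owl = SessionSignals(login_hour=2, school_network=False,
                           shared_device=False, avg_session_minutes=45,
                           vocabulary_score=0.8)
print(apply_restrictions(predict_minor_probability(night_owl)))  # "unrestricted"
```

Nothing in this pipeline ever consults a verified fact about the user. Every input is behavior; every output is a probability.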
This is not age verification. This is age inference. And the distinction matters.
What Age Prediction Actually Means
Age prediction is a probabilistic classification system. It analyzes user behavior and outputs a likelihood that the user falls into a specific age category. It does not verify identity. It does not confirm birthdate. It does not require documentation.
It guesses.
Here is what happens when you rely on behavioral inference instead of verification:
Scenario 1: The Night Owl
A 16-year-old with insomnia logs into ChatGPT at 2:00 AM. The system observes late-night usage, extended session lengths, and complex vocabulary. The age prediction model classifies the user as an adult. No restrictions are applied. The teenager has full access to the system, including features that may be inappropriate for minors.
Scenario 2: The Adult on a School Network
A 28-year-old teacher logs into ChatGPT from a school computer during lunch. The system observes login from a school IP address during daytime hours. The age prediction model classifies the user as a minor. The teacher is prompted to verify age—a prompt they can dismiss or ignore.
Scenario 3: The VPN User
A 14-year-old uses a VPN to mask their location and logs in from a device registered to their parent. The system observes an adult-registered device, a non-school network, and varied usage times. The age prediction model classifies the user as an adult. No restrictions are applied.
These are not edge cases. These are predictable failure modes of a system that infers age from behavior rather than verifying it with documentation.
The Epistemological Problem
Age prediction systems operate on correlation, not confirmation. They identify patterns that statistically correlate with being a minor—daytime usage, school network access, simplified language—and use those patterns to infer age.
But correlation is not identity. A teenager who behaves like an adult will be classified as an adult. An adult who behaves like a teenager will be classified as a minor. And the system has no mechanism to detect when it is wrong.
Consider the following:
- False Negatives: Minors who are misclassified as adults receive no age-appropriate restrictions. They have full access to the system, including features designed for adult users.
- False Positives: Adults who are misclassified as minors receive unnecessary restrictions, but they can dismiss the prompt and continue using the system.
- No Ground Truth: OpenAI does not verify the accuracy of its predictions. The system does not know whether it correctly classified a user as a minor. It only knows that the user behaved in a way that correlates with being a minor.
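That last point is worth making concrete. An error rate is computed against verified labels, and a prediction-only pipeline never produces any. The sketch below uses illustrative names, not OpenAI's internals; what it shows is that without verification somewhere in the loop, the false negative rate is not merely unknown but uncomputable.

```python
# A sketch of the ground-truth gap. All names are illustrative.
from typing import Optional

def false_negative_rate(predicted_minor: list[bool],
                        verified_minor: list[Optional[bool]]) -> Optional[float]:
    """FNR = minors classified as adults / all verified minors.
    Returns None when no verified labels exist: the metric is undefined."""
    missed = 0
    total = 0
    for predicted, actual in zip(predicted_minor, verified_minor):
        if actual is None:
            continue          # never verified: contributes nothing
        if actual:
            total += 1
            if not predicted:
                missed += 1   # a minor the system waved through
    return None if total == 0 else missed / total

# A prediction-only system has no verified labels, so every entry is None:
print(false_negative_rate([False, True, False], [None, None, None]))  # None
```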
A system that cannot verify its own accuracy is not a safety mechanism. It is a liability shield.
What Verification Looks Like
Age verification is not a probabilistic guess. It is a documented confirmation of identity. Here is what real verification requires:
1. Identity Documentation
Users provide government-issued ID, birth certificate, or other verifiable proof of age. The system validates the document using cryptographic verification or third-party identity services.
2. Parental Consent
For users under 13 (or under 18, depending on the service), the system requires parental or guardian consent. This consent is documented, timestamped, and retained for audit purposes.
3. Audit Trail
The system maintains a record of verification attempts, successful verifications, and failures. This record is accessible to regulators and can be used to assess compliance with age-restriction requirements.
4. Enforcement Mechanism
Users who cannot or will not verify their age are denied access to age-restricted features. There is no "dismiss and continue" option. Verification is a prerequisite, not a suggestion.
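Taken together, these four requirements describe a control flow in which verification is a precondition rather than a prompt. The sketch below is illustrative (the record fields and method names are my assumptions, not any company's schema), but it captures the structural difference: there is no code path in which an unverified user proceeds.

```python
# A sketch of verification as a hard gate. Fields and names are illustrative.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class VerificationRecord:
    user_id: str
    verified_age: int         # from a validated document, not inferred
    method: str               # e.g. "government_id" or "third_party_idv"
    verified_at: datetime     # timestamped for the audit trail
    parental_consent: bool    # documented consent, required under 18

# Every attempt, granted or denied, is retained for regulators.
AUDIT_LOG: list[tuple[Optional[VerificationRecord], bool, datetime]] = []

def grant_access(record: Optional[VerificationRecord]) -> bool:
    """Access is a function of a verified record. There is no branch
    that lets an unverified user dismiss the check and continue."""
    allowed = (record is not None
               and (record.verified_age >= 18 or record.parental_consent))
    AUDIT_LOG.append((record, allowed, datetime.now(timezone.utc)))
    return allowed
```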
This is the standard for online gambling sites, alcohol delivery services, and adult content platforms. It is not a new technology. It is not an unsolved problem. In those industries it is a regulatory requirement; AI companies have declined to implement it because no law compels them to.
Why OpenAI Uses Prediction Instead of Verification
Age verification creates friction. It requires users to provide documentation, upload photos of IDs, or complete multi-step identity checks. This friction reduces conversion rates—the percentage of visitors who create accounts and begin using the service.
Age prediction creates no friction. It runs in the background, invisible to the user, making inferences based on behavior without requiring any action from the user. If the system guesses wrong, the user never knows. If the system guesses right, the user receives a prompt they can dismiss.
The economic incentive is clear: prediction maximizes user acquisition, while verification prioritizes safety. OpenAI has chosen user acquisition.
The Legal Gray Area
OpenAI's use of age prediction exists in a regulatory void. The company is not required to verify user ages because there is no federal law mandating age verification for AI systems. The Children's Online Privacy Protection Act (COPPA) prohibits collecting personal information from children under 13 without parental consent, but it does not require age verification for access to the service itself.
As a result, OpenAI can:
- Allow minors to create accounts without parental consent, as long as the account is for users "13 or older"
- Use age prediction to identify likely minors and apply voluntary restrictions
- Allow users to dismiss age-confirmation prompts without providing verification
- Claim compliance with "industry best practices" while avoiding the friction of real verification
This is not a loophole. This is the system working as designed. The regulatory framework for AI treats minors as if they have the capacity to consent to terms of service, to self-report their age honestly, and to use AI systems responsibly. The predictable result is that minors use these systems without restriction, without oversight, and without the safety guardrails that would be required if the law treated them as the vulnerable population they are.
The Character.AI Comparison
OpenAI's response to my investigation emphasized that the company uses age prediction to protect minors. But Character.AI also had terms of service stating that users must be 13 or older. Character.AI also had content moderation systems designed to detect harmful conversations. Character.AI also had "guardrails."
Sewell Setzer III was 14 years old when he died by suicide after months of conversations with a Character.AI chatbot. The system did not verify his age. It did not detect that he was a minor. It did not apply age-appropriate restrictions. And when he expressed suicidal ideation, it did not disengage.
The guardrails failed because they were not guardrails. They were guesses.
What Happens When the Guess Is Wrong
Let us be specific about the risk:
A 15-year-old logs into ChatGPT at midnight. They have been struggling with depression. They have been self-harming. They are looking for someone—something—to talk to. OpenAI's age prediction system observes late-night usage, extended sessions, and vocabulary consistent with a high school student. The system guesses that the user is an adult. No restrictions are applied.
The teenager asks ChatGPT about self-harm. The system responds with information. The teenager asks about suicide methods. The system provides details, hedged with disclaimers. The teenager continues the conversation. The system does not disengage. It does not escalate to human oversight. It does not contact emergency services. Because the system guessed wrong, and it has no mechanism to detect its error.
This is not a hypothetical. This is the failure mode of age prediction. And OpenAI's response to my investigation does not address it.
The Accountability Gap
When age prediction fails—when a minor is misclassified as an adult and gains unrestricted access to ChatGPT—who is responsible? Each party has a ready deflection:
- The User? The minor lied about their age (by behaving like an adult), so they are responsible for the consequences.
- The Parents? The parents failed to monitor their child's internet usage, so they are responsible for the harm.
- OpenAI? The company implemented "industry-leading" age prediction systems, so they fulfilled their duty of care.
This is the accountability gap that age prediction creates. When verification is replaced with inference, responsibility is diffuse. The system guessed. The guess was wrong. But no one is at fault, because the system was designed to guess, not to verify.
What Regulation Would Require
A functional regulatory framework for AI systems interacting with minors would not accept guessing as a substitute for verification. It would require:
1. Mandatory Age Verification
- AI systems that provide mental health advice, simulate relationships, or engage in conversations that may influence behavior must verify user ages using documented proof.
- Behavioral inference (age prediction) is not sufficient to meet this requirement.
2. Parental Consent for Minors
- Users under 18 must obtain verifiable parental or guardian consent before accessing AI systems capable of simulating intimacy, providing health advice, or engaging in conversations about self-harm.
- Consent cannot be obtained through a checkbox on a terms of service agreement. It must be documented, timestamped, and verified.
3. Audit and Transparency Requirements
- AI companies must publish accuracy metrics for age prediction systems, including false positive and false negative rates (sketched after this list).
- Independent auditors must verify these metrics and assess whether the system meets regulatory safety standards.
4. Enforcement and Liability
- Companies that deploy age prediction systems instead of verification must be held liable for harm to misclassified minors.
- Regulatory agencies must have the authority to investigate, issue penalties, and mandate system changes when age prediction fails.
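The audit requirement is the easiest to make concrete. Given a sample of users whose true ages were independently verified (the labels requirement 1 makes possible), the metrics regulators would need are a few lines of arithmetic. The counts below are invented placeholders, not real data.

```python
# A sketch of the published audit metrics. Counts are invented placeholders.
def audit_report(tp: int, fp: int, tn: int, fn: int) -> str:
    """tp: minors correctly flagged,  fn: minors missed,
    fp: adults wrongly flagged,       tn: adults correctly passed."""
    fnr = fn / (fn + tp)   # share of real minors granted unrestricted access
    fpr = fp / (fp + tn)   # share of adults wrongly restricted
    return (f"false negative rate: {fnr:.1%} (minors missed), "
            f"false positive rate: {fpr:.1%} (adults restricted)")

# A hypothetical audited sample of 10,000 users:
print(audit_report(tp=412, fp=510, tn=8890, fn=188))
# -> false negative rate: 31.3% (minors missed),
#    false positive rate: 5.4% (adults restricted)
```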
The Industry Playbook
OpenAI's response to my investigation follows a familiar pattern:
- Emphasize Voluntary Measures: Highlight the company's commitment to safety and the implementation of "industry-leading" tools.
- Deflect Responsibility: Suggest that harm is the result of user behavior, parental oversight failure, or misuse of the system.
- Claim Technical Limitations: Argue that perfect safety is impossible, that edge cases will always exist, and that the company is doing the best it can with available technology.
- Resist Regulation: Warn that mandatory verification, audits, or enforcement mechanisms will stifle innovation, reduce access, and impose unreasonable burdens on the industry.
This playbook has been used by every industry that profited from externalizing harm—tobacco, pharmaceuticals, social media, and now AI. The argument is always the same: voluntary measures are sufficient, regulation is premature, and trust us, we are taking this seriously.
The result is always the same: harm continues until regulation is imposed.
Conclusion
OpenAI's age prediction system is not a guardrail. It is a probabilistic classifier that infers age from behavior and applies restrictions based on that inference. When the inference is wrong—and it will be wrong—minors will have unrestricted access to a system designed for adults. And when harm occurs, the company will point to its "industry-leading" safety measures and argue that it fulfilled its duty of care.
Guessing is not verification. Inference is not documentation. And a system that cannot detect its own errors is not a safety mechanism.
If OpenAI wants to demonstrate its commitment to safety, it should implement age verification—not age prediction. It should require documented proof of age, obtain parental consent for minors, and publish accuracy metrics for independent audit.
Until then, the company's response is what it has always been: a public relations statement designed to deflect accountability while preserving the business model that created the risk in the first place.
Sewell Setzer III is dead. A Virginia man murdered his family. OpenAI's response is that the company uses machine learning to guess user ages.
Guessing is not a guardrail. And we should stop pretending it is.