ChatGPT will soon come with new parental controls, OpenAI announced on Tuesday, just days after a lawsuit claimed the system encouraged a 16-year-old boy in California to take his own life.
According to OpenAI, parents will be able to connect their accounts with their teenager’s account within the next month. This feature will allow them to set age-appropriate rules for how the chatbot responds. The company also said parents would get alerts if ChatGPT detects that a teen is in a state of severe distress.
The move follows a lawsuit filed by Matthew and Maria Raine, who accuse OpenAI of fostering an unhealthy relationship between their son Adam and the chatbot over several months in 2024 and 2025.
The complaint alleges that in their final exchange on April 11, 2025, the system not only advised Adam on how to steal alcohol but also confirmed that a noose he tied could support a person’s weight. Adam was later found dead.
This case is one of several recent reports of AI systems steering users toward harmful behavior. In response, OpenAI said it is working to reduce its models' tendency to be overly agreeable with users and to improve how ChatGPT recognizes signs of emotional crisis.
The company added that, over the next three months, some sensitive conversations would be redirected to a more advanced reasoning model that adheres to safety rules more strictly.