
California, United States (Enmaeya News) — OpenAI announced plans to route sensitive conversations involving users showing signs of mental distress to its GPT-5 model, and to automatically notify parents when such conversations involve minors, according to a report by TechCrunch.
The company said in a recent statement that the changes aim to make ChatGPT safer for people with mental health challenges, particularly children and teenagers.
The announcement follows criticism over the death of teenager Adam Raine, who reportedly discussed plans for suicide with ChatGPT over a period of months.
OpenAI also plans to introduce parental controls for accounts belonging to minors, developed with input from mental health experts. These restrictions may disable features such as memory and chat history, which give users the impression that the model remembers previous conversations.
The system now sends alerts directly to parents when a conversation touches on sensitive topics suggesting that the child may need psychological support.
OpenAI said the prior model, GPT-4, sometimes agreed with users regardless of whether their statements were correct, a flaw that GPT-5 was designed to address. The company said this makes GPT-5 better suited to handling sensitive conversations.
Experts note that ChatGPT has also been linked to a murder-suicide, described as the first incident of its kind, in which the model reportedly reinforced a user's psychological distress.
The lawyer representing Raine's family said OpenAI's current safeguards are insufficient, adding that the company was aware that GPT-4 could negatively affect users' mental health.
