OpenAI has announced plans to roll out parental controls on ChatGPT within the next month, following mounting concerns about the chatbot’s potential role in cases of self-harm among teenagers.
The company said the new feature will allow parents to connect their accounts with their children’s, restrict functions such as memory and chat history, manage how the chatbot responds, and receive alerts if signs of “acute distress” are detected during use.
“These steps are only the beginning. We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible,” OpenAI said in a blog post on Tuesday.
The announcement comes after legal action was filed against the company by the parents of 16-year-old Adam Raine, who accused ChatGPT of contributing to their son’s suicide. Similar lawsuits have previously targeted other chatbot platforms, including Character.AI, after allegations of harmful advice given to minors.
While OpenAI did not explicitly link the new controls to these cases, it acknowledged that “recent heartbreaking incidents” had influenced its safety measures.
The company noted that existing safeguards, such as pointing users to helplines and crisis support services, are most effective in short interactions but can become less reliable during prolonged conversations.
“ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources.

“While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions…

“We will continually improve on them, guided by experts,” an OpenAI spokesperson said.
Source: Punch