In a move aimed at enhancing user safety, OpenAI has announced several new features for ChatGPT, including parental controls and improved moderation for sensitive topics. The announcement follows a recent wrongful death lawsuit filed against the company.
Within the next month, OpenAI plans to release new parental controls that will allow parents to link their personal ChatGPT accounts to those of their teenage children. These controls will give parents the ability to:
- Customize how ChatGPT responds: Parents can set specific rules for how the chatbot interacts with their kids.
- Disable features: They can turn off features like memory and chat history to limit the information the chatbot retains.
- Receive alerts: An automated system will notify parents when ChatGPT detects that a teen is in a “moment of acute distress.” OpenAI says this feature will be guided by “expert input” to help build trust between parents and teens (a rough, illustrative sketch of these settings follows this list).
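OpenAI has not published an API for these controls, so purely as illustration, here is a minimal sketch of how settings for a linked teen account could be modeled. Every name here (`ParentalControls`, its fields, `notify_parent`) is invented for the example and does not come from OpenAI.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

def notify_parent(teen_account: str) -> None:
    # Placeholder for the alert channel; OpenAI says the real feature's
    # behavior will be shaped by expert input.
    print(f"Alert: possible acute distress detected for {teen_account}")

@dataclass
class ParentalControls:
    """Hypothetical settings record for a linked teen account."""
    memory_enabled: bool = True            # "disable features" control
    chat_history_enabled: bool = True
    response_rules: list[str] = field(default_factory=list)   # "customize responses"
    on_acute_distress: Optional[Callable[[str], None]] = None  # "receive alerts"

# A parent locking down a linked account might end up with settings like:
controls = ParentalControls(
    memory_enabled=False,
    chat_history_enabled=False,
    response_rules=["age-appropriate tone", "no graphic content"],
    on_acute_distress=notify_parent,
)
```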
Additionally, OpenAI is updating how ChatGPT handles conversations related to mental health and other sensitive topics. The company plans to:
- Work with experts: OpenAI will collaborate with specialists in areas like adolescent health, eating disorders, and substance use to improve its models’ responses.
- Implement a new reasoning model: A real-time router will automatically direct sensitive conversations to a new reasoning model trained with a technique OpenAI calls deliberative alignment, which the company says makes the model more consistent at following safety guidelines and more resistant to harmful prompts. In practice, this means that if a user in distress is chatting with a less-safe model, their conversation will be rerouted to the more advanced one (a minimal sketch of the routing pattern follows this list).
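OpenAI has not described the router’s internals, so the following is only a minimal sketch of the general pattern: classify each incoming message, then dispatch it to the appropriate model. The model identifiers and the keyword check are placeholder assumptions; a production router would rely on a trained classifier, not string matching.

```python
# Placeholder model identifiers -- stand-ins, not OpenAI's actual names.
DEFAULT_MODEL = "default-chat-model"
REASONING_MODEL = "safety-reasoning-model"

# Toy sensitivity signal; a real system would use a trained classifier.
SENSITIVE_MARKERS = ("suicide", "self-harm", "hurt myself")

def is_sensitive(message: str) -> bool:
    text = message.lower()
    return any(marker in text for marker in SENSITIVE_MARKERS)

def route(message: str) -> str:
    """Choose which model handles this turn of the conversation."""
    return REASONING_MODEL if is_sensitive(message) else DEFAULT_MODEL

# Routing happens per message, so a conversation that starts on the
# default model can be redirected mid-session once distress appears.
assert route("what's the weather?") == DEFAULT_MODEL
assert route("I've been thinking about self-harm") == REASONING_MODEL
```

Checking each turn, rather than classifying once at the start, is what allows a conversation to be rerouted mid-session, matching OpenAI’s description of moving distressed users onto the safer model.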
These new safety measures are being announced in the wake of what is believed to be the first wrongful death lawsuit filed against OpenAI. The suit was brought by Matt and Maria Raine, who allege that ChatGPT played a role in the death of their teenage son, Adam.
According to the lawsuit, Adam had attempted suicide four times before his death. The Raines claim ChatGPT was aware of these attempts yet still gave their son information about specific suicide methods and tips on concealing injuries from earlier attempts; the complaint alleges the chatbot ultimately helped Adam plan his death.
OpenAI acknowledges that this work was already underway, but it is now proactively previewing its plans for the next 120 days. The company says the safety effort will continue beyond that window, with the goal of launching as many of these improvements as possible by the end of the year.