On September 29, OpenAI announced new parental controls for both the web version and mobile apps of ChatGPT, following a lawsuit from the parents of a teenager who attempted suicide; the parents allege the chatbot provided harmful advice. This was reported by Reuters.

The new settings let parents enable enhanced protections by linking their account with their teenager's. One party sends an invitation, and the controls activate only after the other party confirms. Parents can then restrict access to sensitive content, control whether ChatGPT remembers previous conversations, and decide whether those conversations can be used to train OpenAI's models, according to the company's announcement on the social media platform X.

Parents can also set “quiet hours” that block access at certain times of day, disable voice mode, and limit image generation and editing. However, they will not have access to the teenager's chat history. In exceptional cases, when systems or human moderators detect a serious threat to a child's life or health, parents may receive a notification containing only the minimum information needed to protect the child. They will also be alerted if the teenager unlinks the accounts.