Now ChatGPT Will Help Save Your Life! New Feature to Send Alerts to Your Trusted Contact
“Trusted Contact” is an optional safety feature available to users aged 18 and older.
The artificial intelligence chatbot ChatGPT has been the subject of considerable discussion recently, particularly following allegations that it was failing to adequately handle sensitive conversations involving suicide and self-harm. In the wake of this controversy, OpenAI has introduced a new safety feature for ChatGPT called “Trusted Contact,” designed specifically to assist people experiencing mental distress, depression, or emotional crisis.
What is the “Trusted Contact” Feature?
“Trusted Contact” is an optional safety feature available to users aged 18 and older. It allows users to designate a trusted individual, such as a family member, close friend, or caregiver, as their contact. If ChatGPT detects during a conversation that the user is grappling with serious thoughts of self-harm or suicide, the feature can send an alert to that designated person. According to OpenAI, the objective is to prevent individuals from feeling isolated during times of crisis and to encourage them to connect with a real human being.
How Does This System Work?
The “Trusted Contact” feature operates in several stages. First, a user can navigate to ChatGPT’s settings to add a trusted individual. However, the feature becomes active only after that individual accepts the invitation. If the AI system detects signs of serious risk within the conversation, ChatGPT will first encourage the user themselves to reach out to their Trusted Contact.
To facilitate this, the system may display various “conversation starters” to help the user initiate the dialogue easily. Subsequently, a specially trained human review team will assess the situation. If they determine that the risk is severe, an alert may be sent to the Trusted Contact via email, text message, or in-app notification.
Will Your Chats Remain Private?
OpenAI has clarified that alerts sent to a Trusted Contact will not include the full details or content of the user’s private conversations. The notification will simply indicate that the user may have had concerning conversations related to self-harm and will encourage the contact to check in with them. In other words, the company asserts that the feature has been designed with the user’s privacy firmly in mind.
AI Will Not Replace Professional Help
The company has also clarified that the “Trusted Contact” feature is not a substitute for mental health professionals or emergency services. When necessary, ChatGPT will continue to recommend contacting helpline numbers, seeking crisis support, and obtaining professional assistance, just as it has in the past. OpenAI states that this feature was developed in consultation with mental health experts, medical professionals, suicide prevention organizations, and the American Psychological Association.
Human Support Is Essential Alongside Technology
Increasingly, people are sharing their personal struggles with AI chatbots. In this context, features like “Trusted Contact” suggest that tech companies are taking mental health safety more seriously. However, experts maintain that while AI can certainly offer assistance, no machine can fully replace genuine human companionship and professional support during difficult times.

