Instagram Update: Parents will now receive an alert when their teen searches for dangerous terms. Here is how the safety feature works.
Amid growing pressure over teen safety on social media, Instagram has announced a new safety feature: if a teen repeatedly searches for terms related to suicide or self-harm, an alert will be sent to the teen's parents. Here is what we know about the feature.
Instagram has announced that if a teen repeatedly searches for terms related to suicide or self-harm within a short period, the system will flag the activity. Parents will then be notified via email, text message, WhatsApp, or Instagram. The feature will only be available to families enrolled in Instagram's parental supervision tool, and only where the teen has given consent. According to the company, the alerts aim to give parents timely information and connect them with appropriate resources so they can support their child.
In which countries will the rollout begin?
The feature will roll out next week in the US, UK, Australia, and Canada, and may be extended to other countries, including India, later.
A good start or a decision made under pressure?
Meta says this is a preliminary step, and the company is still determining the appropriate threshold for triggering alerts. In some cases an alert may not indicate a real threat, so the company plans to refine the system based on user feedback. In the future, Meta is also considering similar alerts in its AI experiences, such as AI chatbots, so parents can be notified if a teen initiates a dangerous conversation.
How will Instagram's new alert system work?
To prioritize children's safety, Meta has built a trigger system that tracks teen behavior:
Search alerts: If a teen repeatedly searches for dangerous terms such as suicide or self-harm within a short period, the system is activated.
Notification methods: Parents receive the alert via email, text message, WhatsApp, or an Instagram notification.
Parental supervision: The feature is available only to parents enrolled in Instagram's parental supervision tools, and it requires the teen's consent.
AI Chatbots Will Also Be Under Close Monitoring
Beyond search, Meta plans to issue similar alerts for its AI chatbots in the future. If a teen attempts to discuss a dangerous topic with the AI, parents will be notified immediately.
Why Was This Step Taken?
This new feature comes at a time when Meta and its CEO, Mark Zuckerberg, are facing serious legal challenges:
Mental health trial: A case is ongoing in Los Angeles, USA, in which Meta is accused of harming young people's mental health and making the app addictive.
"Big Tobacco moment": Experts are calling this period a "Big Tobacco moment" for social media companies, which stand accused of misleading the public and ignoring teen safety.

