
Instagram will soon begin notifying parents if their teen repeatedly attempts to search for terms related to suicide or self-harm within a short period of time. The move marks the latest update to the platform’s Teen Accounts and parental supervision tools, as the company tightens safeguards around sensitive mental health content.
Instagram said the alerts are designed to ensure parents are aware if their teen’s search activity suggests they may need support, while avoiding unnecessary notifications that could dilute their impact.
The company emphasised that the vast majority of teens do not attempt to search for suicide or self-harm content. In cases where they do, Instagram blocks such searches and instead directs users to support resources and helplines.
How the alerts will work
Beginning next week, parents and teens enrolled in Instagram's supervision tools will receive advance notice that these alerts are being introduced. If a teen repeatedly searches for phrases promoting suicide or self-harm, terms suggesting they want to harm themselves, or keywords such as “suicide” or “self-harm”, parents will be notified.
Notifications will be sent via email, text message or WhatsApp, depending on the contact information linked to the account, as well as through an in-app alert. When opened, the notification will display a full-screen message explaining that the teen has repeatedly attempted to search for terms associated with suicide or self-harm within a short period. Parents will also be directed to expert-backed resources to help them approach potentially sensitive conversations.
Instagram says it analysed search behaviour and consulted members of its Suicide and Self-Harm Advisory Group to determine the appropriate threshold. The alert will trigger only after multiple searches in a condensed timeframe. The company acknowledged that this approach may occasionally notify parents when there is no serious cause for concern, but said experts agreed it was the right starting point.
Part of broader teen safety push
The new alerts build on Instagram’s existing policies against content that promotes or glorifies suicide or self-harm. While users can share personal experiences related to mental health struggles, such content is hidden from teens, even if posted by accounts they follow.
Searches clearly linked to suicide or self-harm are blocked entirely, with users redirected to local organisations and helplines. Even broader mental health-related searches prompt signposting to support services. In cases of imminent risk of physical harm, the company says it alerts emergency services.
AI conversations next
Instagram also signalled that similar parental alerts are being developed for certain AI-driven experiences on the platform. As more teens turn to AI tools for support, the company plans to notify parents if a teen attempts to engage in certain types of suicide- or self-harm-related conversations with its AI systems.
Further details on these AI-focused safeguards are expected in the coming months.