Instagram will begin notifying parents if their teenagers repeatedly search for suicide or self-harm-related terms on the platform, according to a new update published on its official blog.
The feature, set to roll out in the coming weeks, will apply to parents enrolled in Instagram’s parental supervision tools. The company said alerts will be triggered when a teen conducts multiple searches within a short period for terms associated with suicide or self-harm, including general phrases such as “suicide” or “self-harm,” as well as other language that may indicate risk.
Notifications will be sent to parents via email, text message, WhatsApp, and in-app alerts, depending on the available contact details. When opened, the notification will display a full-screen message explaining that the teen has repeatedly searched for related terms. Parents will also be provided with expert-backed resources to help guide sensitive conversations.
In a blog post, Instagram said it developed the feature after analysing search behaviour and consulting with experts from its Suicide and Self-Harm Advisory Group. The company added that the alert threshold requires multiple searches in a short timeframe to avoid unnecessary notifications that could reduce the tool’s effectiveness.
The move comes amid growing legal and public scrutiny over teen well-being online. Meta Platforms, Instagram’s parent company, has faced several lawsuits in the United States alleging that it failed to protect children adequately and designed features that contribute to addiction and psychological harm.
Company executives, including Instagram head Adam Mosseri, have been questioned about safety measures and the balance between privacy and child protection.
In a separate case before the Los Angeles County Superior Court, internal Meta research presented in court suggested that parental supervision tools had a limited impact on compulsive social media use among children. The research also indicated that children experiencing stressful life events were more likely to struggle with regulating their usage.
Instagram said the alerts will begin rolling out next week in the United States, the United Kingdom, Australia, and Canada, with additional regions expected later in the year. The company also plans to expand the feature to notify parents if a teen attempts to engage the app’s artificial intelligence tools in conversations related to suicide or self-harm.
The latest update builds on earlier safety reforms introduced by Meta. In September 2024, the company rolled out Teen Accounts on Instagram, automatically placing users under 18 into private accounts with stricter controls. The changes limited who could message or tag teens, reduced exposure to sensitive content across Explore and Reels, introduced stronger anti-bullying filters, added daily time reminders, and activated overnight “sleep mode” to mute notifications.
Other social media companies are facing similar scrutiny. TikTok, owned by ByteDance, is confronting lawsuits from multiple U.S. states alleging that its algorithm is designed to maximise engagement among children for advertising revenue. YouTube has also faced criticism over how its recommendation systems affect young users.
Instagram said it will continue monitoring feedback and refining the new alert system to balance parental awareness with user privacy.