An important step towards better protecting our community in Europe

By Adam Mosseri, Head of Instagram

November 10, 2020

Update 4/14/22: Suicide and self-harm are complex issues and can have devastating consequences. We want to do all we can to support our community, which is why I’m pleased to share another important step forward in keeping people safe on our apps. Thanks to our ongoing discussions with the Department of Health and Social Care and the UK’s data protection regulator, our proactive detection technology is now sending potential suicide and self-harm content it finds in the UK to our review team – as it does everywhere else in the world, outside of the EU. These moderators can then review the content and take the appropriate action – whether that’s to remove it, direct the person posting to local support organizations or, if necessary, contact the emergency services.

Until now, we’ve been able to use this technology in the UK and the EU to identify potential suicide and self-harm content and automatically make it less visible, for example by preventing it from being recommended. Where the technology was extremely confident the content broke our rules, it could automatically remove it altogether. However, due to ongoing discussions with governments and regulators in the UK and EU about how the technology works and the nuanced nature of suicide and self-harm content, we still couldn’t send this content to our review team to take action.

Now, combining this technology with our review team will have a meaningful impact in the UK. Not only will it help us remove more potentially harmful content from Facebook and Instagram, but it will also mean we can connect people sharing it with organizations like Samaritans, or the emergency services, when they may need it most.

We look forward to continuing our discussions with our lead EU regulator, and hope to be able to make similar progress for our community there soon.

Update 9/30/21: Information in this article may be outdated. For current information about our suicide and self-injury content detection technology, please visit our Safety Center. As described in the Safety Center, our algorithms are intended to help identify potential suicide and self-injury content and are not intended to diagnose or treat any mental health or other condition.

Update 9/2/21: We’re always looking for ways to better support our community, and we regularly meet with the members of our Suicide and Self Injury Advisory Group. For a while now, we’ve been consulting these experts on the right way to approach a specific type of content that doesn’t break our rules but may depict or trivialize themes around suicide, death or depression. Experts agree it’s important we allow these kinds of posts, to make sure people can talk about how they’re feeling and friends and family have the chance to reach out, but that we need to balance this with protecting others from discovering potentially upsetting content. That’s why, rather than removing it completely, we’ll aim not to recommend this content in places like Explore, making it harder to discover. We hope this helps strike that delicate balance, and we’ll continue to consult with experts as research in this area develops.

We want to do everything we can to keep people safe on Instagram. We’ve worked with experts to better understand the deeply complex issues of mental health, suicide, and self-harm, and how best to support those who are vulnerable. No one at Instagram takes these issues lightly, including me. We’ve made progress over the past few years, and today we’re rolling out more technology in Europe to help with our efforts. But our work here is never done and we need to constantly look for ways to do more.

We recognize that these are deeply personal issues for the people who are affected. They are also complicated and always evolving, which is why we continue to update our policies and products so we can best support our community. We’ve never allowed anyone to promote or encourage suicide or self-harm on Instagram, and last year we updated our policies to remove all graphic suicide and self-harm content. We also extended our policies to disallow fictional depictions like drawings, memes, or other imagery that shows materials or methods associated with suicide or self-harm.

It’s not enough to address these difficult issues through policies and products alone. We also believe it’s important to provide help and support to the people who are struggling. We offer support to people who search for accounts or hashtags related to suicide and self-harm and direct them to local organizations that can help. We’ve also collaborated with Samaritans, the suicide prevention charity, on their industry guidelines, which are designed to help platforms like ours strike the important balance between tackling harmful content and providing sources of support to those who need it.

We use technology to help us proactively find and remove more harmful suicide and self-harm content. Our technology finds posts that may contain suicide or self-harm content and sends them to human reviewers to make the final decision and take the right action. Those actions include removing the content; connecting the poster to local organizations that can help; or, in the most severe cases, calling emergency services. Between April and June this year, over 90% of the suicide and self-harm content we took action on was found by our own technology before anyone reported it to us. But our goal is to get that number as close as we possibly can to 100%.
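
To make that workflow easier to picture, here is a minimal, hypothetical sketch of the routing step: a classifier scores a post, and anything flagged is queued for a human reviewer who chooses the final action. Every name, type, and threshold below is an assumption invented for illustration; it does not describe Instagram’s actual systems.

```python
# Hypothetical sketch only: all names and thresholds here are invented for
# illustration and do not reflect Instagram's real systems or values.
from dataclasses import dataclass
from enum import Enum, auto
from queue import Queue


class ReviewerAction(Enum):
    """Possible outcomes a human reviewer can choose."""
    REMOVE_CONTENT = auto()      # take the post down
    CONNECT_TO_SUPPORT = auto()  # point the person to local organizations
    CONTACT_EMERGENCY = auto()   # most severe cases only
    NO_ACTION = auto()


@dataclass
class FlaggedPost:
    post_id: str
    score: float  # classifier confidence that the post contains such content


def route_for_review(post_id: str, score: float, review_queue: Queue,
                     flag_threshold: float = 0.5) -> None:
    """Queue posts the classifier flags so a human reviewer makes the
    final decision (one of the ReviewerAction values above)."""
    if score >= flag_threshold:  # illustrative threshold, not a real value
        review_queue.put(FlaggedPost(post_id, score))
```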

Until now, we’ve only been able to use this technology to find suicide and self-harm content outside the European Union, which made it harder for us to proactively find content and send people help. I’m pleased to share that, today in the EU, we’re rolling out some of this technology, which will work across both Facebook and Instagram. We can now look for posts that likely break our rules around suicide and self-harm and make them less visible by automatically removing them from places like Explore. And when our technology is really confident that a post breaks our rules, we can now automatically remove it altogether.
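
As a rough illustration of that tiered approach, and nothing more, the sketch below uses two made-up confidence thresholds: a lower one for demoting likely-violating posts from recommendation surfaces like Explore, and a much higher one for automatic removal. The thresholds and function name are assumptions, not Instagram’s actual values.

```python
# Hypothetical sketch only: thresholds and names are assumptions for
# illustration, not Instagram's real values or APIs.

DEMOTE_THRESHOLD = 0.5        # "likely breaks our rules": stop recommending
AUTO_REMOVE_THRESHOLD = 0.95  # "really confident": remove automatically


def automated_action(score: float) -> str:
    """Tiered automated handling without human review: demote likely
    violations, remove only at very high confidence, otherwise do nothing."""
    if score >= AUTO_REMOVE_THRESHOLD:
        return "remove"      # content comes down automatically
    if score >= DEMOTE_THRESHOLD:
        return "demote"      # e.g. not recommended in places like Explore
    return "no_action"


if __name__ == "__main__":
    for s in (0.30, 0.70, 0.97):
        print(f"score={s:.2f} -> {automated_action(s)}")
```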

This is an important step that will protect more people in the EU. But we want to do a lot more. The next step is using our technology not just to find the content and make it less visible, but to send it to our human reviewers and get people help, like we do everywhere else in the world. Not having this piece in place in the EU makes it harder for us to connect people to local organizations and emergency services. For instance, in the US between August and October, more than 80% of the accounts we escalated to local organizations and emergency services were detected by our proactive technology. We’re currently in discussions with regulators and governments about how best to bring this technology to the EU, while recognizing their privacy considerations. We think and hope we can find the right balance so that we can do more. These issues are too important not to push for more.

A timeline: steps we’ve taken to address self-harm and suicide content on Instagram

  • December 2016: Launched anonymous reporting for self-harm posts, and started connecting people to organizations that can provide help.
  • March 2017: Integrated suicide prevention tools into Facebook Live, making it easier for friends and family to report and reach out to people in real time.
  • November 2017: Rolled out technology outside the US (except in Europe) to help identify when someone might be expressing thoughts of suicide, including on Facebook Live. Started using AI to prioritize reports, to help us send people help and alert emergency services as quickly as possible.
  • September 2018: Created a Parent’s Guide for parents of teens who use Instagram.
  • February 2019: Began hosting regular consultations with safety and suicide prevention experts around the world to discuss the evolving complexity of suicide and self-harm, and to hear regular feedback on our approach.
  • February 2019: Expanded our policies to ban all graphic suicide and self-harm content, even if it would previously have been allowed as an admission. We also made this content harder to find in search, blocked related hashtags and applied sensitivity screens to all admission content, sending resources to more people posting or searching for this type of content.
  • October 2019: Expanded our policies to ban fictional self-harm or suicide content including memes and illustrations, and content containing methods or materials.
  • September 2020: Collaborated with the Samaritans on the launch of their new guidelines on how to safely manage self-harm and suicide content online.
  • October 2020: Added a message at the top of search results when people search for terms related to suicide or self-injury. The message offers support and directs them to local organizations that can help.
  • November 2020: Rolled out technology in the EU to proactively find more harmful suicide and self-harm content and make it less visible.
  • August 2021: Following consultation with experts, rolled out technology to identify content that depicts or trivializes themes around suicide, death or depression, and make it harder to discover. We won’t remove this content, but we’ll try not to recommend it in places like Explore.

The numbers

We believe our community should be able to hold us accountable for how well we enforce our policies and take action on harmful content. That’s why we publish regular Community Standards Enforcement Reports to share global data on how much violating content we’re taking action on, and what percentage of that content we find ourselves before it’s reported. This timeline outlines the progress we’ve made on tackling suicide and self-harm content on Instagram, as shown through these reports.