August 10, 2021
Today, we’re announcing a set of new features to help protect people from abuse on Instagram.
We have a responsibility to make sure everyone feels safe when they come to Instagram. We don’t allow hate speech or bullying on Instagram, and we remove it whenever we find it. We also want to protect people from experiencing this abuse in the first place. That’s why we constantly listen to feedback from experts and our community, and develop new features that give people more control over their experience on Instagram and help protect them from abuse.
To help protect people when they experience or anticipate a rush of abusive comments and DMs, we’re introducing Limits: a feature that’s easy to turn on, and will automatically hide comments and DM requests from people who don’t follow you, or who only recently followed you.
We developed this feature because we heard that creators and public figures sometimes experience sudden spikes of comments and DM requests from people they don’t know. In many cases this is an outpouring of support — like if they go viral after winning an Olympic medal. But sometimes it can also mean an influx of unwanted comments or messages. Now, if you’re going through that — or think you may be about to — you can turn on Limits and avoid it.
Our research shows that a lot of negativity towards public figures comes from people who don’t actually follow them, or who have only recently followed them, and who simply pile on in the moment. We saw this after the recent Euro 2020 final, which resulted in a significant and unacceptable spike in racist abuse towards players. Creators also tell us they don’t want to switch off comments and messages completely; they still want to hear from their community and build those relationships. Limits allows you to hear from your long-standing followers, while limiting contact from people who might only be coming to your account to target you.
Limits will be available to everyone on Instagram globally from today. Go to your privacy settings to turn it on, or off, whenever you want. We’re also exploring ways to detect when you may be experiencing a spike in comments and DMs, so we can prompt you to turn on Limits.
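The Limits rule described above can be sketched in code. This is an illustrative approximation only: the function name, the follower-age threshold of 7 days, and the inputs are all assumptions for the sake of the example, not Instagram’s actual implementation.

```python
from datetime import datetime, timedelta
from typing import Optional

# Assumed threshold for what counts as a "recent" follower (illustrative).
RECENT_FOLLOW_WINDOW = timedelta(days=7)

def should_limit(sender_follows_you: bool,
                 followed_at: Optional[datetime],
                 now: datetime) -> bool:
    """Return True if a comment or DM request should be hidden under Limits.

    Hides content from senders who don't follow you, or who only
    started following you recently.
    """
    if not sender_follows_you:
        return True  # not a follower: limit
    if followed_at is not None and now - followed_at < RECENT_FOLLOW_WINDOW:
        return True  # only recently followed: limit
    return False  # long-standing follower: allow through
```

Under this sketch, a long-standing follower’s comments pass through unchanged, while a brand-new account that piles on during a spike is hidden until the feature is turned off.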
We already show a warning when someone tries to post a potentially offensive comment. And if they try to post potentially offensive comments multiple times, we show an even stronger warning that reminds them of our Community Guidelines and warns that we may remove or hide their comment if they proceed. Now, rather than waiting for the second or third comment, we’ll show this stronger message the first time.
We’ve found these warnings really discourage people from posting something hurtful. In the last week, for example, we showed these warnings about a million times per day on average to people making potentially offensive comments. In about 50% of those cases, the person edited or deleted their comment in response.
To help protect people from abuse in their DM requests, we recently announced Hidden Words, which allows you to automatically filter DM requests containing offensive words, phrases and emojis into a Hidden Folder that you never have to open if you don’t want to. It also filters DM requests that are likely to be spammy or low-quality. We launched this feature in a handful of countries earlier this year, and it will be available for everyone globally by the end of this month. We’ll continue to encourage accounts with large followings to use it, with messages both in their DM inbox and at the front of their Stories tray.
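A Hidden Words-style filter amounts to routing any DM request that matches a user-configured term list into the Hidden Folder rather than the main requests inbox. The sketch below is a minimal illustration under that assumption; the matching rules and function name are hypothetical, not Instagram’s actual system.

```python
import re
from typing import List

def route_dm_request(text: str, hidden_terms: List[str]) -> str:
    """Return 'hidden' if the message matches any hidden term, else 'inbox'.

    Matching is case-insensitive and on whole words, so a hidden term
    doesn't trigger on an innocent substring of a longer word.
    """
    lowered = text.lower()
    for term in hidden_terms:
        if re.search(r"\b" + re.escape(term.lower()) + r"\b", lowered):
            return "hidden"
    return "inbox"
```

In practice a production filter would also need to handle emoji, misspellings, and evasions like inserted punctuation, which simple word-boundary matching does not cover.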
We’ve expanded the list of potentially offensive words, hashtags and emojis that we automatically filter out of comments, and we’ll continue to update it frequently. We also recently added a new opt-in option, “Hide More Comments”, which hides comments that may be potentially harmful even if they don’t break our rules.
We hope these new features will better protect people from seeing abusive content, whether it’s racist, sexist, homophobic or any other type of abuse. We know there’s more to do, including improving our systems to find and remove abusive content more quickly, and holding those who post it accountable. We also know that, while we’re committed to doing everything we can to fight hate on our platform, these problems are bigger than us. We will continue to invest in organisations focused on racial justice and equity, and look forward to further partnership with industry, governments and NGOs to educate and help root out hate. This work remains unfinished, and we’ll continue to share updates on our progress.