February 11, 2021
We want Instagram to be a place for people to connect with the people and things they love. But we also know that, just like in the offline world, there will always be those who abuse others. We’ve seen it most recently with racist online abuse targeted at footballers in the UK. We don’t want this behavior on Instagram.
Much of the abuse we’re seeing happens in people’s Direct Messages (DMs), where it is harder to address than in comments on Instagram. Because DMs are private conversations, we don’t use technology to proactively detect content like hate speech or bullying there the same way we do in other places. But there are still more steps we can take to help prevent this type of behavior. So today we’re announcing some new measures, including removing the accounts of people who send abusive messages, and developing new controls to help reduce the abuse people see in their DMs.
Our rules against hate speech prohibit attacks on people based on protected characteristics, including race and religion. We strengthened these rules last year, banning more implicit forms of hate speech, like content depicting blackface and common antisemitic tropes. We take action whenever we become aware of hate speech, and we’re continuously improving our detection tools so we can find it faster. Between July and September of last year, we took action on 6.5 million pieces of hate speech on Instagram, including in DMs, 95% of which we found before anyone reported it.
Today, we’re announcing that we’ll take tougher action when we become aware of people breaking our rules in DMs. Currently, when someone sends DMs that break our rules, we prohibit that person from sending any more messages for a set period of time. Now, if someone continues to send violating messages, we’ll disable their account. We’ll also disable new accounts created to get around our messaging restrictions, and will continue to disable accounts we find that are created purely to send abusive messages.
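To make the escalation concrete, the sketch below models the enforcement flow just described: a temporary messaging block on a first violation, then a full account disable for repeat violations or ban evasion. The thresholds, field names, and "strike" bookkeeping are illustrative assumptions, not Instagram's actual systems or policy values.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Illustrative values only; not real policy thresholds.
MESSAGING_BLOCK_DURATION = timedelta(days=7)
STRIKES_BEFORE_DISABLE = 2

@dataclass
class Account:
    username: str
    strikes: int = 0
    messaging_blocked_until: Optional[datetime] = None
    disabled: bool = False
    created_to_evade_block: bool = False  # e.g. flagged by ban-evasion detection

def handle_violating_dm(account: Account, now: datetime) -> str:
    """Escalate as described above: a temporary messaging block first,
    then a full account disable for repeat or evasion violations."""
    if account.created_to_evade_block:
        account.disabled = True
        return "account disabled (ban evasion)"
    account.strikes += 1
    if account.strikes >= STRIKES_BEFORE_DISABLE:
        account.disabled = True
        return "account disabled (repeat violations)"
    account.messaging_blocked_until = now + MESSAGING_BLOCK_DURATION
    return "messaging blocked temporarily"

offender = Account("example_user")
print(handle_violating_dm(offender, datetime.utcnow()))  # messaging blocked temporarily
print(handle_violating_dm(offender, datetime.utcnow()))  # account disabled (repeat violations)
```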
We’re also committed to cooperating with UK law enforcement authorities on hate speech, and will respond to valid legal requests for information in these cases. As we do with all law enforcement requests, we’ll push back if they’re overly broad, inconsistent with human rights, or not legally valid.
When it comes to comments on Instagram, we also have a number of tools that help people protect themselves. People can use comment filters to prevent others from leaving offensive comments containing words, phrases, or emojis they don’t want to see. Last year we announced a feature for managing multiple unwanted comments in one go, whether that’s deleting them in bulk or blocking the accounts that posted them. We also saw a meaningful decrease in offensive comments after we started using AI to warn people when they’re about to post something that might be hurtful.
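As an illustration of how a filter like this can work in principle, the sketch below checks each comment against a user-defined blocklist of words, phrases, and emojis. The matching rules and function names are assumptions for illustration, not the production filter.

```python
import re

def build_filter(blocked_terms) -> re.Pattern:
    """Compile one case-insensitive pattern from the user's blocklist.
    Word boundaries are added around purely alphanumeric terms so that
    blocking 'rat' doesn't hide 'grateful'; phrases and emojis match
    anywhere they appear."""
    parts = []
    for term in blocked_terms:
        escaped = re.escape(term)
        if term.isalnum():
            escaped = rf"\b{escaped}\b"
        parts.append(escaped)
    return re.compile("|".join(parts), re.IGNORECASE)

def should_hide(comment: str, pattern: re.Pattern) -> bool:
    """True if the comment contains any blocked word, phrase, or emoji."""
    return pattern.search(comment) is not None

blocklist = ["loser", "go home", "🤮"]
pattern = build_filter(blocklist)
print(should_hide("what a loser", pattern))      # True
print(should_hide("closer than ever", pattern))  # False: 'loser' inside 'closer' is bounded out
print(should_hide("nice game 🤮", pattern))      # True
```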
Making sure people don’t see hateful or harassing content in DMs is more challenging, given that these are private conversations. Business and creator accounts, which tend to have large followings and receive the most abusive messages from people they don’t know, have the option to switch off DMs from people they don’t follow. We’ve also started rolling these controls out to personal accounts in many countries, and we hope to make them available to everyone soon. People can also choose to turn off tags or mentions from anyone they don’t know, or block anyone who sends them unwanted messages.
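The sketch below illustrates the kind of control described here: a check that routes an incoming DM based on whether the recipient follows the sender and whether they’ve switched off messages from unknown accounts. All field and function names are hypothetical.

```python
from dataclasses import dataclass
from typing import Set

@dataclass
class DMSettings:
    # Illustrative per-user control, loosely mirroring the option above.
    allow_requests_from_unknown: bool = True

def route_incoming_dm(sender: str, followed: Set[str], settings: DMSettings) -> str:
    """Decide what happens to a new DM: deliver it to the inbox,
    hold it as a message request, or block it entirely when the
    recipient has switched off DMs from people they don't follow."""
    if sender in followed:
        return "deliver"          # recipient follows the sender: straight to inbox
    if settings.allow_requests_from_unknown:
        return "message_request"  # unknown sender: held for the recipient to review
    return "blocked"              # control switched off: message is never shown

settings = DMSettings(allow_requests_from_unknown=False)
print(route_incoming_dm("stranger_42", {"friend_1", "friend_2"}, settings))  # blocked
```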
As recent conversations with our community have made all too clear, seeing abusive DMs in the first place takes a real toll. We’re currently working on a new feature to help with this, shaped by feedback from our community, and we hope to launch it in the coming months.
We’re committed to doing everything we can to fight hate and racism on our platform, but we also know these problems are bigger than us. We look forward to working with other companies, football associations, NGOs, governments, parents, and educators, both online and offline.