By Adam Mosseri, Head of Instagram
Posted on September 9, 2020
In the midst of a global pandemic, upcoming elections, and increasing racial tensions, we’re seeing a shift in the way people are using Instagram. More than ever, people are turning to the platform to raise awareness for the racial, civic, and social causes they care about. It’s a big part of why we committed in June to review the ways Instagram could be underserving certain groups of people. We have a responsibility to look at what we build and how we build it, so that people’s experiences with our product better mirror the actions and aspirations of our community.
Below is an update on areas where we’ve made progress this summer. This is by no means comprehensive, and we have a lot more to do, but I’m going to share regular updates so our community knows that this work is important and ongoing.
New Equity Team
To ensure this work is fully supported, we’ve created a dedicated product group – the Instagram Equity team – that will focus on better understanding and addressing bias in our product development and in people’s experiences on Instagram. The Equity team will focus on creating fair and equitable products. This includes working with Facebook's Responsible AI team to ensure algorithmic fairness. In addition, they’ll create new features that respond to the needs of underserved communities. Separate from this new product group, we’re also hiring a new Director for Diversity and Inclusion for Instagram, who will help to advance Instagram’s goal of finding, keeping, and growing more diverse talent.
Harassment and Hate
We’ve developed and updated a number of company-wide policies to support communities worldwide. We updated our policies to more specifically account for certain kinds of implicit hate speech, such as content depicting blackface or stereotypes about Jewish people. We also strengthened enforcement against people who make serious rape threats, and we’ll now disable any account that makes these threats as soon as we become aware of them, rather than removing just the content. In addition, we’ll ensure involuntary public figures – people who may not have sought attention and who we’ve seen are often members of marginalized communities – are protected from harassment and bullying just as they were before finding themselves in the public eye.
We’ve continued to prioritize the removal of content that violates our policy against hate groups. This includes removing 23 banned organizations, over half of which supported white supremacy. In addition, we recently announced updates to take action on organizations tied to violence, such as QAnon.
We’ve also made some changes for creators and businesses. For example, people with Business and Creator accounts can now manage who can send them direct messages. And we’ve begun expanding comment warnings to include comments in Live, so people will be asked to reconsider comments that might be offensive before they’re posted.
Verification
We spent the past two months reviewing Instagram’s verification practices and have started making changes to ensure a fairer process. An account must meet certain criteria before we verify it, including a degree of notability. We measure notability through press articles about the person applying for verification. We’ve now expanded the list of press sources we consider in the process to include more Black, LGBTQ+, and Latinx media.
While follower count was never a requirement to get verified through the in-app form (which anyone can apply for), we did have certain systems in place that prioritized accounts with high followings to help get through the tens of thousands of requests received every day. We’ve since removed this prioritization from the automated part of the process.
Content Distribution
In response to ongoing concerns around perceived censorship on Instagram, we recently published the guidelines we use to determine the types of content that can appear in places like Explore. Our hope is that people will better understand why some types of content aren’t included in recommendations across Instagram and Facebook, and therefore may not be distributed as widely. In developing these guidelines, we consulted over 50 leading experts specializing in recommendation systems, social computing, freedom of expression, safety, and civil and digital rights.
As I said in my first post, this work will take time, but it’s important to do, and it’s important to take the time to get it right. If you’re interested in helping us do this work, check out some of the open roles we’re currently hiring for on the Policy team and the Product team.