December 16, 2019
Starting today, we are rolling out a new feature that notifies people when their captions on a photo or video may be considered offensive, and gives them a chance to pause and reconsider their words before posting.
As part of our long-term commitment to lead the fight against online bullying, we’ve developed and tested AI that can recognize different forms of bullying on Instagram. Earlier this year, we launched a feature that notifies people when their comments may be considered offensive before they’re posted. Results have been promising, and we’ve found that these types of nudges can encourage people to reconsider their words when given a chance.
Today, when someone writes a caption for a feed post and our AI flags the caption as potentially offensive, they will receive a prompt informing them that their caption is similar to those reported for bullying. They will then have the opportunity to edit the caption before it's posted.
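At a high level, the flow works like the sketch below. The classifier, threshold, and function names here (score_caption, prompt_user_to_edit, publish_post, WARNING_THRESHOLD) are simplified placeholders for illustration, not our production system.

```python
# Conceptual sketch of the check-before-post nudge flow.
# Every name and value in this file is a hypothetical placeholder.

WARNING_THRESHOLD = 0.8  # assumed score above which the warning is shown


def score_caption(caption: str) -> float:
    """Stand-in for a learned classifier that scores captions for bullying language."""
    flagged_terms = {"loser", "ugly", "stupid"}  # toy list for demonstration only
    words = (w.strip(".,!?'\"") for w in caption.lower().split())
    return 1.0 if any(w in flagged_terms for w in words) else 0.0


def prompt_user_to_edit(caption: str) -> str:
    """Stand-in for the in-app prompt that lets the author revise before posting."""
    print("This caption looks similar to others that have been reported for bullying.")
    revised = input("Edit your caption (or press Enter to keep it): ").strip()
    return revised or caption


def publish_post(caption: str) -> None:
    print(f"Posted: {caption}")


def submit_post(caption: str) -> None:
    # If the caption looks potentially offensive, pause and offer a chance to edit.
    if score_caption(caption) >= WARNING_THRESHOLD:
        caption = prompt_user_to_edit(caption)
    publish_post(caption)


if __name__ == "__main__":
    submit_post("You're such a loser")
```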
In addition to limiting the reach of bullying, this warning helps educate people on what we don’t allow on Instagram, and when an account may be at risk of breaking our rules. To start, this feature will be rolling out in select countries, and we’ll begin expanding globally in the coming months.