Instagram has announced that it will today begin issuing warnings to users when they are about to post a “potentially offensive” caption on a photo or video being uploaded to the platform’s main feed.
When a user writes a caption that Instagram’s AI-powered tools flag as potentially offensive, the app will display a notification saying the caption looks similar to others that have been reported. The user will be given the option to edit the caption, but can also choose to post it unchanged.
The company says that nudging people to reconsider posting potentially hurtful comments has had “promising” results in the company’s fight against online bullying.
This is the latest in a series of moves made by Instagram to tackle the scourge of bullying on its platform.
In October, the company launched a new “Restrict” feature that lets users shadow-ban their bullies, and last year it started using AI to filter offensive comments and proactively detect bullying in photos and captions.
Unlike Instagram’s other moderation tools, which act automatically, this feature relies on users themselves to recognize when one of their captions crosses the line. It’s unlikely to stop the platform’s more determined bullies, but it has a shot at protecting people from thoughtless insults.
Instagram says the new feature is rolling out in “select countries” for now, but it will expand globally in the coming months.