Twitter test asks you to ‘rethink’ sending mean replies

Karissa Bell
Senior Editor
[Image: illustration of the Twitter bird with thought bubbles containing "@" and "#" symbols.]

Twitter is testing a new feature that could reduce bullying and harassment on its platform. Under the experiment, the company will ask users if they want to “rethink” replies before they hit send on potentially offensive tweets.

“To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful,” the company wrote.

The feature is similar to an anti-bullying feature Instagram introduced last year. The photo-sharing app uses AI to detect language that might be offensive and asks users to rethink the comment before they hit send. Twitter's version will look for words "similar to those in posts that have been reported," according to Reuters.

Twitter has long struggled with harassment, and for years has experimented with features intended to improve "conversational health." The latest experiment is launching now to a subset of Twitter's iOS users in the United States.