Twitter has announced an experimental feature that would let users revise a tweet if it contains 'language that could be harmful'. The microblogging website is currently testing the moderation tool on iOS.
When a user hits 'send' on a reply that includes harmful or offensive words, Twitter will show a popup asking whether they want to revise the language before the reply is published. The user can then either edit the reply or post it anyway.
"When things get heated, you may say things you don't mean. To let you rethink a reply, we're running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it's published if it uses language that could be harmful." — Twitter Support (@TwitterSupport), May 5, 2020
This feature is similar to what Facebook offers on Instagram, where the app warns users when a caption on a photo or video may be considered offensive and gives them a chance to pause and reconsider their words before posting. That feature, called Caption Warning, extends an earlier one that notified users before they posted comments that could be considered offensive.
It is not clear how Twitter will determine which tweets contain harmful language, but the company does lay out guidelines in its hateful conduct policy and its rules on public safety on the platform.
The moderation feature is currently limited to a small number of iOS users on an experimental basis. It is not yet known whether the feature will be expanded to Android users for further testing.