Twitter reports tweets with manipulated media in India: this is how it works

Twitter has begun labeling tweets in India that it says contain manipulated media, as part of its effort to combat misinformation. As it happens, the head of the IT cell of India's ruling party, the BJP, earned the dubious distinction of being the first Indian to be flagged under the policy. Twitter attached the warning to a video that the BJP's Amit Malviya shared on his Twitter timeline about farmers' protests against the recent farm laws. Responding to opposition leader Rahul Gandhi's tweet alleging police brutality against protesting farmers, Malviya posted a clip showing a policeman swinging his stick while the farmer ran away from him, and captioned it "propaganda versus reality."

Self-described fact-checking Twitter users then reportedly contacted the farmer, Sukhdev Singh of Kapurthala district in Punjab, who said he had suffered injuries to his arms and legs. After these accounts surfaced, Twitter labeled Malviya's video "manipulated media." In Twitter's terms, "manipulated media" refers to media content that has been "significantly and misleadingly altered or fabricated."

"Rahul Gandhi must be the most discredited opposition leader India has seen in a long time." https://t.co/9wQeNE5xAP pic.twitter.com/b4HjXTHPSx (November 28, 2020)

The high-profile case is considered the first application of Twitter's labeling policy in India. The platform had already followed the practice in the United States, where it added fact-check notices to President Donald Trump's tweets. In February 2020, Twitter announced its policy of labeling tweets that contain synthetic or manipulated media, including video, audio, and images. It said such content would be removed if it was "shared deceptively" and caused "serious harm." To determine the degree of manipulation, Twitter said it would use its own technology or rely on reports received through third-party partnerships.

Twitter's rules on manipulated media

Twitter's guidelines on manipulated videos

(Image credit: Twitter) Any photo or clip that Twitter identifies as manipulated carries a "manipulated media" label at the bottom; clicking the label brings up an explanation of why the content was flagged. In February, Twitter spelled out how it decides whether a photo or video has been doctored. First, it checks whether the content has been substantially edited in a way that fundamentally alters its composition, sequence, timing, or framing. Second, it determines whether visual or auditory information (such as new video frames, overdubbed audio, or modified subtitles) has been added or removed. Finally, it assesses whether media depicting a real person has been fabricated or simulated.

When making these decisions, Twitter also weighs the context in which the media is shared: whether it could lead to confusion or misunderstanding, or suggests a deliberate intent to mislead people about the nature or origin of the content. Many such tweets are ultimately removed from the platform. Twitter has stated that "material, whether synthetic or otherwise, shared in a deceptive manner and likely to cause harm, may not be shared on Twitter and is subject to removal." Twitter has also introduced other tools intended to help users discern which information on its platform is inaccurate.