According to a blog post on Twitter’s site, its algorithm favours right-leaning content over left-leaning content, although the reasons for this remain unclear. The data came from internal Twitter research into how the service’s algorithm amplifies political posts.
As part of the study, Twitter analysed tens of millions of tweets posted between April 1 and August 15, 2020 by users in Canada, France, Germany, Japan, Spain, the United Kingdom and the United States. In every country except Germany, Twitter found that right-leaning accounts “get more algorithmic amplification than the political left.” The researchers also found that content from right-leaning news outlets benefits from the same bias.
Twitter says it doesn’t know why its data suggests the algorithm favours right-leaning content over left-leaning content, since the amplification is “a product of people’s interactions with the platform.” The problem may not lie with Twitter’s algorithm in particular: Steve Rathje, a Ph.D. candidate who studies social media, has published research showing that inflammatory content about political outgroups is more likely to go viral.
In that research, Rathje and his colleagues looked at what kind of content gets the most attention on social media and found a clear pattern: negative posts about political outgroups receive considerably more engagement on Facebook and Twitter. “In other words, if a Democrat is critical of a Republican (or vice versa), this type of content is more likely to be shared.”
Taken together with Rathje’s findings, this suggests that right-leaning tweets may simply be more effective at stoking outrage on Twitter than left-leaning ones. If so, Twitter’s algorithmic problem may have more to do with rewarding divisive tweeting than with any particular political viewpoint. And, as noted above, Germany was the only country in Twitter’s study where the right-leaning bias did not appear. That may be related to Germany’s requirement that platforms such as Facebook, Twitter and Google remove hate speech within 24 hours. Some Twitter users have reportedly even changed their country setting to Germany to keep Nazi imagery out of their feeds.
Twitter has been trying for some time to change how people tweet on its platform. Last year, it began testing a feature that warns users when they are about to post a harsh reply. These moves show that Twitter is already well aware of the bullying and hate speech problems on its service.
In her whistleblower disclosures, Frances Haugen alleges that Facebook’s algorithm rewards hate speech and divisive content, backing her claims with a large trove of internal company documents she brought to light. Twitter finds itself in a similar position, but it is disclosing some of its own internal research before it leaks.
Rathje pointed to another study which found that expressions of moral outrage boosted the virality of posts from both liberal and conservative accounts, but were more effective for conservatives. “Further research should be done to examine if these features help explain the amplification of right-wing content on Twitter,” he argues. With more research and input from outside experts, the platform may be able to better understand what is driving the issue.