TikTok has become one of the most attractive platforms for content creators and many other kinds of users. However, the platform has plenty of problems, and one of the biggest is violent videos.
The short-video-sharing platform is already facing several investigations on this front, including temporary bans in some regions over its content. Last year, it was investigated in Italy after a 10-year-old girl died while trying to recreate a viral challenge from the app.
Hence, TikTok is looking to use automation to detect and remove videos that violate its guidelines. Over the past year, the service has been testing and refining systems that find and remove such content, and it will roll them out in the US and Canada over the next few weeks.
“Over the next few weeks, we’ll begin using technology to automatically remove some types of violative content identified upon upload, in addition to removals confirmed by our Safety team,” said Eric Han, TikTok’s Head of US Safety, in a news update.
First, the algorithm scans uploads for posts that violate policies on youth safety, violence, graphic content, nudity, sex, illegal activities, and regulated goods. If the system detects a violation, it removes the video immediately, and the user who posted it can appeal the decision. Users can still flag videos for manual review.
Automated reviews will be “reserved for content categories where our technology has the highest degree of accuracy,” TikTok noted. According to the company, only one in 20 automatically removed videos is a false positive, a video that should have remained on the platform. TikTok hopes to improve the algorithm’s accuracy over time, and it notes that “requests to appeal a video’s removal have remained consistent”.
According to TikTok, automation should free up its safety staff to focus on content that requires a more nuanced approach, such as videos involving bullying, harassment, misinformation and hate speech.
Just as importantly, the system can reduce the number of potentially distressing videos that safety teams have to watch, such as those containing extreme violence or child exploitation. Facebook, for instance, has been accused of not doing enough to protect the welfare and mental health of the content moderators tasked with reviewing often-disturbing material.
TikTok is also changing how users are notified when they violate a rule. The platform now tracks the number, severity and frequency of violations, and users can see the details in the “Inbox Updates” section of their inbox. They can also see the consequences of their actions, such as how long they are banned from posting or interacting with other people’s content.