Chinese-owned video-sharing platform TikTok has updated its community guidelines, after a series of questions about how it moderates content.
The firm has faced criticism for removing some posts, particularly those related to politics.
TikTok said the new guidelines provide more detail and clarity.
The changes come after it was forced to fix serious security issues which could have allowed hackers to alter videos and other content.
In a blogpost, TikTok said it was “an inclusive platform built upon the foundation of creative expression”.
Authors Lavanya Mahendran and Nasser Alsherif, from the firm’s global trust and safety team, wrote, “Our community is diverse and global, and we aim to cultivate an environment for authentic interactions.”
Changes include:
- the section on terrorist groups and other dangerous organisations now clarifies that educational, historical, satirical or artistic content that can be clearly identified as such, or that raises awareness, will be allowed
- the section on misinformation has been expanded to include manipulated content or fake news around elections
- violations are grouped into 10 distinct categories, each with an explanation of the rationale, to clarify the kind of misbehaviour that would lead to posts being removed
Recruiting enough moderators to deal with the volume of uploaded content has been a major problem for most social media platforms, particularly when it comes to political content, hate speech and fake news. Some are turning to artificial intelligence for solutions.
TikTok has faced a stream of criticism about what content it allows and what it blocks.
In November, the app was engulfed in a fierce row about a US teenager who was blocked from the service after she posted a video criticising China’s treatment of the Uighur Muslims.
After a wave of critical headlines, the ban was lifted, with TikTok blaming human moderation error for the video being taken down. It said the 17-year-old had been blocked because of her prior conduct on the app, and that the decision had nothing to do with Chinese politics.
In the US, the firm also faces growing scepticism from Congress over developer ByteDance's relationship with the Chinese government, and the app has been banned from government-issued phones in the US Army over security fears.
The company has sought to distance itself from the Chinese government, but a Washington Post report based on interviews with six former TikTok employees said that moderators in China had the final say on whether flagged videos were approved.
It was also reported that the firm had previously censored material politically sensitive to the Chinese government, after a newspaper was given access to the site's internal moderation guidelines.
The company had banned or restricted content relating to the Tiananmen Square protests, Tibetan independence and the religious group Falun Gong.
At the time, TikTok said the community standards the paper had seen were old ones that the firm no longer used.
A TikTok spokesman said that under the new guidelines there would be “no censorship” of such posts.