X (formerly Twitter) is testing an artificial intelligence addition to its Community Notes fact-checking feature. The company has launched a pilot program that lets large language models (LLMs) draft notes adding context to posts deemed misleading or incomplete.
While it’s still in its early stages, the feature is already generating discussion about the role of AI in content moderation and the future of crowd-sourced fact-checking.
What Are Community Notes?
Community Notes is X’s attempt at a transparent, collaborative approach to fact-checking. Instead of relying solely on internal moderators, the feature lets everyday users from across the ideological spectrum write and rate contextual notes on posts.
A note appears publicly only if contributors from different viewpoints rate it as “helpful.” Over time, this system has become one of the more trusted features on the platform, especially amid the broader content moderation challenges that have followed Elon Musk’s takeover of the company.
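Conceptually, the publishing rule requires agreement from raters who usually disagree. The toy sketch below illustrates that idea with explicit viewpoint labels and invented thresholds; X’s published ranking algorithm instead infers viewpoints from rating patterns, so treat this as an illustration of the principle rather than the real implementation.

```python
# Toy illustration of the "cross-viewpoint" publishing rule.
# The viewpoint labels and thresholds are assumptions for illustration;
# X's actual algorithm infers rater viewpoints from rating patterns.

def note_is_helpful(ratings, min_per_side=5, min_helpful_share=0.8):
    """ratings: list of (viewpoint, rated_helpful) tuples, where viewpoint
    is 'left' or 'right' and rated_helpful is a bool."""
    for side in ("left", "right"):
        side_ratings = [helpful for view, helpful in ratings if view == side]
        if len(side_ratings) < min_per_side:
            return False  # not enough raters from this side yet
        if sum(side_ratings) / len(side_ratings) < min_helpful_share:
            return False  # this side does not find the note helpful
    return True  # raters on both sides agree the note is helpful

# Example: a note rated helpful by most raters on both sides gets shown.
sample = [("left", True)] * 6 + [("right", True)] * 5 + [("right", False)]
print(note_is_helpful(sample))  # True
```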
How the AI Will Be Used
In the pilot, AI-generated drafts will not be published automatically. Human contributors will review, edit, or reject each draft before it goes live. According to the company, the goal is to give contributors a head start with well-written, fact-based drafts and help the program scale.
The note-writing bots are powered by large language models, the same kind of AI technology behind tools such as OpenAI’s ChatGPT and Anthropic’s Claude. These models can quickly summarize information, identify relevant context, and produce neutral language.
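To make the workflow concrete, here is a minimal sketch of the draft-then-review flow described above, assuming a generic text-generation callable. The function names, prompt wording, and data structures are illustrative assumptions, not X’s internal APIs.

```python
# Minimal sketch of the pilot's draft-and-review flow (assumed names).
from dataclasses import dataclass

@dataclass
class NoteDraft:
    post_id: str
    text: str
    source: str = "ai"  # AI-generated drafts are labeled as such

def draft_note(post_text: str, llm) -> str:
    """Ask an LLM (any text-generation callable) for a neutral, sourced draft."""
    prompt = (
        "Write a brief, neutral note adding missing context to this post. "
        "Cite sources and avoid speculation.\n\nPost: " + post_text
    )
    return llm(prompt)

def submit_for_review(draft: NoteDraft, review_queue: list) -> None:
    """Nothing is published directly: drafts only enter the human review
    queue, where contributors can edit, approve, or reject them."""
    review_queue.append(draft)
```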
Speed vs. Accuracy?
The advantages of this trial are evident. Misinformation spreads quickly and at scale, and moderation tools need to keep pace. AI can help triage large volumes of flagged content and support contributors who struggle with tone, structure, or clarity when writing a note.
The risks, however, are not negligible. Large language models can confidently produce false or biased information, a phenomenon known as hallucination. Even with human oversight built into the Community Notes process, automated suggestions could subtly shape how topics are framed or which ones get highlighted.
There is also the issue of scale. The more AI is embedded in the pipeline, the harder it becomes to remove. What begins as a helper tool could evolve into a gatekeeper, even unintentionally.
A Measured Rollout
To its credit, X is proceeding cautiously. AI-generated notes will be clearly labeled, and the final decision rests with human contributors. The company has limited the pilot’s scope and is monitoring its progress closely.
Still, the change reflects a wider industry trend: human moderation often cannot keep up with the speed of modern social media. That gap is becoming more pronounced, and platforms are increasingly turning to AI to fill it.
Final Thoughts
X’s pilot captures a familiar tension in tech: speed versus trust. AI can make content moderation faster, but it cannot replace the nuance and consensus-building that make Community Notes work.
As social media platforms continue to evolve, it is likely that fact-checking will never be the same again.