In response to the increasing prevalence of AI-generated images and videos on its platform, Twitter is experimenting with a potential solution to help users identify misleading media. The social media giant is “piloting a feature” called “Notes for media,” which aims to apply crowd-sourced fact checks to specific photos and video clips.
The new feature allows highly rated Community Notes contributors to apply labels to images shared within tweets. Similar to notes on tweets, these labels provide additional context about an image, such as indicating whether a photo was created using generative AI or has been manipulated in some way.
Explaining the feature, Twitter stated, “Notes attached to an image will automatically appear on recent & future matching images. It’s currently intended to err on the side of precision when matching images, which means it likely won’t match every image that looks like a match to you.”
Notably, CEO Elon Musk emphasised that the Community Notes feature would apply to all users, including himself, prominent figures, advertisers, and even heads of state. Musk stated, “Anyone making materially false statements on this platform will get Community Noted, including you, me, Tucker, advertisers, head of state, etc. No exceptions.”
How does it work? Contributors with a Writing Impact rating of 10 or above will see a new option on certain tweets to mark their notes as “About the image.” This option can be selected when they believe the media itself is potentially misleading, regardless of the tweet it is featured in.
When someone rates a media note, the rating is associated with the tweet on which the note appeared. This allows Community Notes to identify cases where a note may not apply to a specific tweet.
Raters and readers will see notes that authors marked as “about the image” slightly differently, making it clear that they are intended to provide context about the media itself, not the specific tweet. Ratings can help identify cases where a note may not apply to a particular tweet.
Twitter warns that tagging notes as “about the image” makes them visible on all tweets containing the same image as identified by their system. While helpful notes accumulate view counts from all tweets they appear in, they count as only one Writing and Rating Impact for the authors and raters.
The ultimate goal is for notes to automatically appear on “recent and future” instances of the same image, even if shared by different users in new tweets. However, Twitter acknowledges that perfecting image matching will take time. The company aims to expand coverage while avoiding erroneous matches.
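Twitter has not published how its image matching works, but the propagation behavior described above can be sketched with a simple index keyed by an image fingerprint. Everything here — the class names, the exact-hash lookup standing in for whatever perceptual-matching system Twitter actually uses — is illustrative, not the platform's real implementation:

```python
from dataclasses import dataclass

@dataclass
class MediaNote:
    note_id: str
    text: str

class NoteIndex:
    """Toy index: notes marked 'about the image' are keyed to the image
    itself, not to the tweet they were first written on."""

    def __init__(self) -> None:
        self._notes_by_image: dict[str, list[MediaNote]] = {}

    def attach_note(self, image_hash: str, note: MediaNote) -> None:
        # image_hash stands in for a perceptual fingerprint; a real system
        # would tolerate crops and re-encodes, which an exact hash cannot.
        self._notes_by_image.setdefault(image_hash, []).append(note)

    def notes_for_tweet(self, image_hashes: list[str]) -> list[MediaNote]:
        # Any recent or future tweet whose media matches picks up the note.
        found: list[MediaNote] = []
        for h in image_hashes:
            found.extend(self._notes_by_image.get(h, []))
        return found

index = NoteIndex()
index.attach_note("abc123", MediaNote("n1", "This photo is AI-generated."))

# The same image reshared in a brand-new tweet surfaces the existing note.
print([n.note_id for n in index.notes_for_tweet(["abc123"])])  # ['n1']
```

Erring “on the side of precision,” as Twitter puts it, corresponds to keeping the match test strict: a near-duplicate that hashes differently simply returns no notes rather than risking a wrong match.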
Currently, Twitter’s feature only supports tweets with a single image. However, the platform is actively working on expanding its functionality to include videos, tweets with multiple images, GIFs, and other media formats.
Twitter is not alone in addressing the challenges posed by AI-generated content and the spread of misinformation. Google, during its I/O 2023 keynote, introduced an “About this image” feature to help users track an image’s history in search results, aiding in determining whether a photo has been manipulated.