Mark Zuckerberg, CEO of Meta – the parent company of Facebook and Instagram – recently announced that Meta is ending its third-party fact-checking program and replacing it with a community-based, crowdsourced fact-checking system inspired by Elon Musk’s social media platform “X” and its Community Notes feature. Currently, any X user can sign up to be a Community Notes contributor if their account is at least six months old, has a verified phone number and has not recently violated the platform’s rules. Meta has not yet released specific details about the requirements for its Community Notes-inspired system.
Zuckerberg stated that the professional fact-checkers previously used by Meta were “too politically biased and have destroyed more trust than they’ve created.” While some social media platforms have implemented community-based fact-checking networks, these systems are insufficient for addressing misinformation because of their limitations in speed and scale, as well as their own potential for bias. Therefore, advanced technologies such as natural language processing, sentiment analysis and artificial intelligence are needed for accurate, quick and objective fact-checking.
Community Notes allows users to flag posts they deem misleading or false and to attach additional information that provides context. Before being published, a community note must be rated “helpful” by other contributors on the platform. X uses an algorithm that considers ideological diversity, meaning diversity of viewpoints, especially political ones, among the contributors who rated a note. If a group with diverse perspectives agrees that the note is helpful, it becomes visible to the public, but if the raters are too politically uniform, the note will not be published.
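The gist of that diversity requirement can be sketched in a few lines. This is a deliberately simplified toy model, not X’s actual system: the real Community Notes algorithm scores notes with matrix factorization over the full rating history rather than grouping raters into two explicit camps.

```python
# Toy sketch of diversity-weighted note publication (hypothetical model;
# X's real algorithm uses matrix factorization, not labeled "sides").

def note_is_published(ratings, threshold=0.6):
    """ratings: list of (viewpoint, helpful) pairs, e.g. ("left", True).

    A note is shown only if raters from MORE than one viewpoint rated it,
    and every viewpoint's approval rate clears the threshold."""
    by_side = {}
    for side, helpful in ratings:
        by_side.setdefault(side, []).append(helpful)
    if len(by_side) < 2:        # politically uniform raters: never publish
        return False
    return all(sum(votes) / len(votes) >= threshold
               for votes in by_side.values())

diverse = [("left", True), ("left", True),
           ("right", True), ("right", False), ("right", True)]
uniform = [("left", True)] * 10

print(note_is_published(diverse))   # cross-ideology consensus -> published
print(note_is_published(uniform))   # one-sided agreement -> not published
```

Even in this stripped-down form, the tradeoff the article describes is visible: requiring agreement across camps filters out partisan notes, but it also means a note on a politically divisive claim may never reach consensus at all.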
In August 2024, Zuckerberg shut down CrowdTangle, a tool used by researchers, watchdog organizations and journalists to monitor social media posts and track how misinformation spreads throughout Meta platforms. Meta closed down CrowdTangle despite many protests highlighting the company’s movement away from transparency and accountability, suggesting to many a lack of commitment to combating the spread of misinformation on social media.
Moving away from professional fact-checkers has raised concerns that the move could increase the spread of harmful content and erode trust and safety on Meta platforms. According to ABC News, critics also see the move as an effort to appease President Donald Trump, who has repeatedly criticized Meta for alleged anti-conservative bias, as Meta is also reducing restrictions on discussion of topics such as immigration and gender. Additionally, the Los Angeles Times stated that Zuckerberg is going out of his way to position himself in the president’s good graces, knowing that Trump could help Meta in the race to develop artificial intelligence technologies.
Alex Mahadevan, the director of MediaWise, a digital media literacy initiative at the nonprofit Poynter Institute, suggested that while the Community Notes system could effectively complement a larger moderation strategy, it shouldn’t serve as X’s main defense against misinformation. “It’s essentially ineffective,” Mahadevan said, referencing the slow and low publication rates of Community Notes. “I mean it really just does not work.”
I have seen a multitude of false posts on social media in recent years. In the last week on TikTok and Instagram, thousands of users viewed a post claiming U.S. Immigration and Customs Enforcement (ICE) was using undercover ice cream trucks with cheerful music to lure undocumented immigrants outside. The side of the van seen in Las Vegas read “Ice Cream Patrol,” and at first glance, I didn’t question it given the current tense political climate surrounding immigration policies. It wasn’t until three days later that the claim was deemed false, and the TikToker who originally posted about the van apologized. The van was simply a regular ice cream truck bringing joy to the community, not enforcing deportation policies. This recent experience showed how quickly misinformation can spread and gain traction on the internet.
Myth Detector, a fact-checking platform by the Media Development Foundation, uses AI to detect harmful information spreading on digital platforms. To do this, it uses a matching mechanism: professional fact-checkers label some content as false, and AI then searches for similar false content online. According to Myth Detector, with more engineering and development, AI should be able to detect fake and deceptive media quickly and correctly without any human assistance.
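Myth Detector’s actual models are not public, but the matching idea itself is simple: compare each new post against a library of claims fact-checkers have already marked false. A minimal sketch, using plain word-overlap (Jaccard) similarity where a production system would use multilingual text embeddings:

```python
# Hypothetical sketch of claim matching, NOT Myth Detector's real pipeline.
# Similarity here is Jaccard overlap of word sets; real systems would use
# trained embedding models that handle paraphrase and translation.

def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    """Word-overlap similarity between two texts, in [0, 1]."""
    a, b = tokens(a), tokens(b)
    return len(a & b) / len(a | b)

def find_matches(labeled_false, new_posts, threshold=0.5):
    """Return posts that closely resemble a claim already labeled false."""
    return [post for post in new_posts
            if any(jaccard(post, claim) >= threshold
                   for claim in labeled_false)]

known_false = ["ice cream trucks are luring immigrants for ICE"]
posts = [
    "ICE cream trucks luring immigrants for ICE agents",
    "the ice cream truck brought joy to the neighborhood today",
]
print(find_matches(known_false, posts))  # flags only the first post
```

The appeal of this approach is scale: one human fact-check can propagate automatically to thousands of reworded copies of the same false claim, which is exactly where volunteer-written notes fall behind.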
This is not to say Community Notes is unable to make some progress in stopping the spread of fake news on social media. A study by the University of Illinois Urbana-Champaign found that Community Notes on X can convince users to retract false posts and reduce reposts of misleading stories, with 50 percent of users retracting or not reposting misleading stories. Unfortunately for social media users, notes are often slow, taking hours or even days to be approved and appear, by which time the misinformation has already spread widely.
Additionally, only a small percentage of proposed Community Notes are shown on the platform, and they often fail to reach most users. A Washington Post analysis found that only 79,000 of the more than 900,000 Community Notes written in 2024 were publicly shown – fewer than 9 percent. Furthermore, the system’s reliance on consensus from diverse viewpoints often fails on political issues, leaving many posts containing fake information unchecked.
Although current AI tools can be less effective in certain situations, AI is still being developed, and many of its current shortcomings are likely to be addressed as the technology matures. Ultimately, AI and advanced technologies can be more accurate, scalable and efficient in addressing the complex challenges of misinformation on social media than community-based fact-checking. AI and machine learning can play a major role in quickly spotting and flagging misinformation, while natural language processing (NLP) can use techniques such as sentiment analysis and semantic analysis to identify patterns and inconsistencies in content, potentially surfacing misleading information. A collaboration between tech giants, artificial intelligence programs, fact-checkers, social media platforms and researchers will be crucial for sharing data and improving detection methods to make social media more trustworthy.
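To make the NLP idea concrete, here is an illustrative sketch of one weak signal such a pipeline might use: scoring posts by how much sensational, emotionally charged language they contain, and routing high scorers to human reviewers. The lexicon and threshold are invented for illustration; real systems rely on trained classifiers combining many signals, not a hand-picked word list, and a high score is a prompt for review, never a verdict of falsehood.

```python
# Illustrative sketch only: one crude lexicon-based signal of the kind an
# NLP moderation pipeline might combine with many others. The word list
# and threshold below are invented for this example.
import re

SENSATIONAL = {"shocking", "exposed", "hiding", "secret",
               "hoax", "miracle", "banned", "truth"}

def sensationalism_score(text):
    """Fraction of words drawn from a sensational-language lexicon."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return sum(w in SENSATIONAL for w in words) / len(words)

def flag_for_review(text, threshold=0.15):
    """Route high-scoring posts to fact-checkers; not a verdict of falsity."""
    return sensationalism_score(text) > threshold

print(flag_for_review("SHOCKING secret they are hiding from you"))  # True
print(flag_for_review("City council approves new bike lanes"))      # False
```

The advantage over volunteer notes is latency: a score like this is computed the moment a post is published, rather than hours or days later, which addresses the speed problem that the Washington Post and Poynter critiques identify.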