Meta Ditches Fact-Checking, To Counter Misinformation With X-Like Community Notes

Meta CEO Mark Zuckerberg announced major changes to content moderation policies across Facebook, Instagram, and Threads, ending partnerships with third-party fact-checkers in favor of a community-driven system similar to X’s Community Notes.

The company plans to eliminate its fact-checking program, which launched in 2016 and included more than 90 organizations checking posts in over 60 languages. The program previously worked with certified fact-checkers from the International Fact-Checking Network and European Fact-Checking Standards Network, including U.S. organizations like PolitiFact and Factcheck.org.

“We built a lot of complex systems to moderate content, but the problem with complex systems is they make mistakes,” Zuckerberg said in a video announcement on January 7. “Even if they accidentally censor just 1% of posts, that’s millions of people, and we’ve reached a point where it’s just too many mistakes and too much censorship.”

The changes extend beyond fact-checking. Meta will adjust its automated content review systems to focus primarily on what Zuckerberg terms “high severity violations,” such as terrorism, child exploitation, drugs, fraud, and scams. Other violations will require user reporting before evaluation.

Meta’s trust and safety teams will relocate from California to Texas as part of the restructuring. The company is also removing certain content policies around immigration, gender, and other topics while rolling back restrictions on political content in user feeds.

Zuckerberg acknowledged these changes come with tradeoffs: “It means we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.”

The announcement received an immediate response from X leadership. CEO Linda Yaccarino posted that the community notes model has been “profoundly successful while keeping freedom of speech sacred,” calling it “a smart move by Zuck and something I expect other platforms will follow now that X has shown how powerful it is.”

Meta’s outgoing fact-checking system surfaced potential misinformation through user reports and content-spread patterns. Flagged posts were demoted in feeds while awaiting review by independent fact-checkers.

Joel Kaplan, Meta’s newly appointed Chief of Global Affairs, told Fox News that while the fact-checking partnerships were “well-intentioned at the outset,” there was “too much political bias in what they choose to fact-check and how.”

The Real Facebook Oversight Board, an outside accountability organization comprising academics, lawyers, and civil rights advocates, including early Facebook investor Roger McNamee, criticized the changes as “a retreat from any sane and safe approach to content moderation.”

The policy shifts affect Meta’s largest social media platforms, Facebook and Instagram, which together serve billions of users globally. The tech giant will begin implementing the changes in the United States before expanding to other regions.

A recent UNESCO study found that only 36.9% of digital content creators verify information before sharing it with their audiences. About a third of the surveyed creators reported experiencing hate speech.

David Adler is an entrepreneur and freelance blog post writer who enjoys writing about business, entrepreneurship, travel and the influencer marketing space.
