Meta’s safety council raises red flags over misinformation risks after moderation shift

The news: Meta’s Safety Advisory Council wrote an open letter criticizing the social media company’s decision to end its third-party fact-checking program, calling it “a concerning departure from Meta’s history of leadership and innovation in proactive harm prevention.”

  • The council said Meta’s decision to implement a community-notes model places an “unreasonable burden” on users to navigate harmful content and manage moderation themselves.
  • The open letter also touched on how scaling back topic restrictions could affect marginalized groups that are already targeted disproportionately online, including women, queer communities, and immigrants.

The Safety Advisory Council, which was founded by Meta in 2009, is a group of independent online safety organizations and experts that consults with the company on public safety issues.

Community notes concerns: While crowd-sourced fact-checking can help address misinformation, the council expressed concerns about its effectiveness, especially with the rise of AI-generated content.

  • “It’s unclear how Meta has weighed these challenges against the potential benefits. … Fact-checking serves as a vital safeguard, particularly in regions of the world where misinformation fuels offline harm,” the letter said.
  • The council pointed to studies of similar initiatives, such as X’s community-notes program, which found that notes on polarizing issues often fail to reach consensus, leaving misinformation unchecked.

Scaling back the rules: In early January, Meta said it was removing restrictions on topics like immigration and gender to scale back its response to “societal and political pressure” and avoid obstructing free expression.

  • “We want to undo the mission creep that has made our rules too restrictive. … It’s not right that things can be said on TV or the floor of Congress, but not on our platforms,” Joel Kaplan, Meta’s chief global affairs officer, said.
  • Kaplan added that Meta would focus its content moderation on “high-severity” violations, like terrorism, child exploitation, and scams, and would act on less severe policy violations only if users report them.

Our take: Meta has the opportunity to use its AI models to enhance the community-notes feature and speed the review of polarizing topics. Detailed reports on enforcement decisions could help users and advertisers understand Meta’s evolving brand identity and keep them engaged on its platforms.