The news: Meta’s Safety Advisory Council wrote an open letter criticizing the social media company’s decision to cut its fact-checking program, calling it “a concerning departure from Meta’s history of leadership and innovation in proactive harm prevention.”
The Safety Advisory Council, which was founded by Meta in 2009, is a group of independent online safety organizations and experts that consults with the company on public safety issues.
Community notes concerns: While crowd-sourced fact-checking can help address misinformation, the council expressed concerns about its effectiveness, especially given the rise of AI-generated content.
Scaling back the rules: In early January, Meta said it was removing restrictions on topics like immigration and gender to scale back its response to “societal and political pressure” and avoid obstructing free expression.
Our take: Meta has the opportunity to use its AI models to enhance the community-notes feature and expedite screening of polarizing topics. Detailed reports on enforcement decisions could help users and advertisers understand Meta’s evolving brand identity and keep them engaged on its platforms.
This article is part of EMARKETER’s client-only subscription Briefings—daily newsletters authored by industry analysts who are experts in marketing, advertising, media, and tech trends. To help you start 2025 off on the right foot, articles like this one—delivering the latest news and insights—are completely free through January 31, 2025. If you want to learn how to get insights like these delivered to your inbox every day, and get access to our data-driven forecasts, reports, and industry benchmarks, schedule a demo with our sales team.