The news: Meta acknowledged that it has been over-enforcing its content moderation policies, obstructing free speech in the process.
Meta’s concession comes after Threads and Instagram users’ accounts were wrongly deactivated or restricted, some for sharing content on controversial topics, others because they were mistakenly flagged as falling under the platforms’ minimum age requirement.
Deepfake threats: Meta, TikTok, Snap, X, and other major social platforms braced for an influx of AI-generated misinformation ahead of the US election, but that threat may not have affected Meta users as deeply as feared.
What’s next? Nick Clegg, Meta’s president of global affairs, called its policies a “living, breathing document” but didn’t specify how content moderation operations will change going forward.
Political influence: Meta’s statements come a month before the US presidential transition. President-elect Donald Trump has repeatedly criticized moderation policies on social media platforms.
Our take: Incremental adjustments to improve moderation could ease tension with frustrated users, and Meta’s cautious approach reflects the challenge of balancing automated enforcement with accountability.
This article is part of EMARKETER’s client-only subscription Briefings—daily newsletters authored by industry analysts who are experts in marketing, advertising, media, and tech trends. To help you finish 2024 strong, and start 2025 off on the right foot, articles like this one—delivering the latest news and insights—are completely free through January 31, 2025. If you want to learn how to get insights like these delivered to your inbox every day, and get access to our data-driven forecasts, reports, and industry benchmarks, schedule a demo with our sales team.