The 2024 US presidential election marked a turning point in how social media companies police content and promote brand safety on their platforms.
Some of the most prominent social media companies—including X, Meta, YouTube, and TikTok—are relying on their users to flag violative and potentially misleading content. While those and other platforms continue to use moderation teams and automated systems, community-based efforts are becoming the norm for lower-severity violations.
These changes have profound implications for brand marketers, agencies, and social platforms themselves.
Key Question: What are the implications of major social media companies scaling back platform-led content moderation in favor of community-based efforts?
Key Stat: Several social media companies have shifted emphasis from in-house content moderation to user-based efforts, at least for content that doesn’t explicitly violate their community standards. This shift marks a new era in brand safety.
This report can help you:
- Download exportable files for easy reading, analysis, and sharing.
- Access reliable data in simple displays for presentations and quick decision-making.
First Published on Apr 16, 2025