The 2024 US presidential election marked a turning point in how social media companies police content and promote brand safety on their platforms.
Some of the most prominent social media companies—including X, Meta, YouTube, and TikTok—are relying on their users to flag violative and potentially misleading content. While those and other platforms continue to use moderation teams and automated systems, community-based efforts are becoming the norm for lower-severity violations.
X shifted to a user-driven content moderation policy soon after businessman Elon Musk completed his acquisition of the platform in October 2022 (when it was still called Twitter). Within months, he disbanded Twitter’s Trust and Safety Council, gutted the company’s internal moderation teams, and launched the Community Notes moderation program (an extension of an earlier pilot called Birdwatch). Community Notes is now the bedrock of X’s moderation effort for organic posts, and notes can also be applied to paid ads.
Meta followed suit with its own Community Notes program in March 2025, which uses X’s open-source technology. Meta uses Community Notes for content that could be misleading or missing context. Notes can be applied to posts on the company’s Facebook, Instagram, and Threads platforms—but not on paid ads. Reddit also empowers its users to flag potentially undesirable content.
YouTube, TikTok, and Meta are among the platforms that use human moderators to train machine learning (ML) systems to identify content that violates their community guidelines. Content flagged by those ML systems is then reviewed by humans, and all three platforms also allow users to flag content themselves.
Snapchat and Pinterest haven’t radically changed their methods. They still rely on a hybrid of platform-driven automation and, when necessary, human review. Both have directly or indirectly distanced themselves from the community-based moderation that now prevails on X and Meta. Snapchat co-founder and CEO Evan Spiegel has criticized those platforms’ moderation approaches by name, and Pinterest launched a site that encourages marketers to “make the switch” to its “safer” platform.
The post-election brand safety landscape is a dynamic one, and more pivots could be in store. But regardless of how things play out, marketers should proactively build more internal safeguards—e.g., better contextual targeting, well-trained genAI systems—to protect their brands, and social media companies should keep sight of the interests of their core constituencies: advertisers and users.