YouTube, Facebook and Twitter have had no shortage of controversy in the past 18 months. The platforms have faced a battery of PR crises over ads placed next to objectionable videos, live-streamed murders and suicides, abusive trolls, offensive tweets, fake news, lapses in data privacy, and their purported role in influencing the outcome of the 2016 US presidential election.
After several of these high-profile incidents drew public outcry and government scrutiny, advertisers canceled, or at least paused, their campaigns on the platforms.
Now the three companies have responded by beefing up their efforts to police content. Part of that effort is human: YouTube and Facebook have each hired thousands of new staffers to monitor content. But it depends primarily on machine-learning algorithms designed to block offensive videos, posts and tweets before they go live.
How is this effort going so far? YouTube, Facebook and Twitter are helping to answer that question by providing new visibility into their monitoring practices. In April 2018, YouTube issued the first in a series of quarterly reports that shed light on how many videos it flags, by what methods and how quickly, and what categories those videos fall into.
The latest eMarketer report, "Policing Video Content on YouTube, Facebook and Twitter: Platforms' New Efforts to Block Offensive Clips Explained," lays out all the ways these social giants are cleaning up their acts.