YouTube, Facebook and Twitter have had no shortage of controversy in the past 18 months. The platforms have faced a battery of PR crises over ads placed next to objectionable videos, live-streamed murders and suicides, abusive trolls, offensive tweets, fake news, lapses in data privacy, and their purported roles in affecting the outcome of the 2016 US presidential election.
After several of these high-profile incidents drew public outcry and government scrutiny, advertisers canceled, or at least paused, their campaigns on the platforms.
Now the three companies have responded by beefing up their efforts to police content. This involves human intervention—YouTube and Facebook each hired thousands of new staffers to actively monitor content—but it primarily depends on machine-learning algorithms that are designed to block offensive videos, posts and tweets before they go live.
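To make that hybrid approach concrete, the sketch below shows one way an automated pre-publication gate might combine a classifier score with escalation to human reviewers. It is purely illustrative: the scoring function, thresholds, and decision labels are hypothetical stand-ins, since none of the platforms has published the details of its actual models or review workflows.

```python
# Illustrative sketch only: a toy pre-publication moderation gate.
# The scoring function, thresholds, and labels below are hypothetical;
# the platforms' real models and review pipelines are not public.
from dataclasses import dataclass

BLOCK_THRESHOLD = 0.9   # hypothetical: auto-block above this score
REVIEW_THRESHOLD = 0.5  # hypothetical: route borderline items to humans

@dataclass
class Upload:
    content_id: str
    text: str

def offensiveness_score(upload: Upload) -> float:
    """Stand-in for a trained classifier; returns a risk score in [0, 1]."""
    flagged_terms = {"violence", "abuse"}  # toy keyword proxy for a real model
    words = upload.text.lower().split()
    hits = sum(w in flagged_terms for w in words)
    return min(1.0, hits / 2)

def moderate(upload: Upload) -> str:
    """Decide whether content goes live, is blocked, or needs human review."""
    score = offensiveness_score(upload)
    if score >= BLOCK_THRESHOLD:
        return "blocked"        # never goes live
    if score >= REVIEW_THRESHOLD:
        return "human_review"   # held for the newly hired moderation staff
    return "published"

if __name__ == "__main__":
    for post in [Upload("a1", "cute cat video"),
                 Upload("a2", "mild abuse in comments"),
                 Upload("a3", "graphic violence and abuse clip")]:
        print(post.content_id, moderate(post))
```

The key design point the platforms describe is the ordering: the automated score is applied before publication, with human staff reserved for the borderline cases the model cannot confidently decide.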
How is this effort going so far? YouTube, Facebook and Twitter are helping answer the question by providing new visibility into their monitoring practices. In April 2018, YouTube issued the first in a series of quarterly reports that shed light on how many videos it flags, by what method and how quickly, and what categories those videos fall into.
Similarly, Facebook issued a report on its monitoring practices and has said it will provide periodic updates. Twitter has also taken steps to provide greater transparency about its policing efforts. From these early indications, it appears that all three companies are getting better at automatically blocking content before a significant number of users see it. It’s also clear they have more work to do to refine their filtering algorithms.
Marketers welcomed the moves, even if some observers saw signs that the brand safety controversies were overblown. Many brands quietly returned to advertising on YouTube, Facebook and Twitter soon after the PR flare-ups. And in any event, whatever boycotts occurred seemed to have no effect on the platforms’ bottom lines.
Each of these companies delivered robust business results during the 15-month period from Q1 2017 through Q1 2018. YouTube’s parent company, Google, posted a 25.8% revenue increase and a 33.1% rise in its share price over that span, despite a raft of unwelcome publicity. Facebook and Twitter showed a similar disconnect between combustible headlines and positive financial metrics.
Although YouTube, Facebook and Twitter bear responsibility for ensuring their platforms are brand-safe, marketers also have a role to play in monitoring the health of the digital advertising ecosystem, according to experts interviewed by eMarketer.
Andrea Ching, CMO at advertising analytics firm OpenSlate, said, “It's critically important that brands lean into understanding their own suitability guidelines and make sure they have a strategy in place so that where they advertise meets their standards. Brands and marketers have to make sure they have the types of partners that can help them deliver on those standards across all these major platforms at scale.”