YouTube, Facebook and Twitter, responding to marketers’ alarm over brand safety concerns, are ramping up efforts to block offensive content and prove they can be transparent about the process. Here’s how it’s working.
YouTube, Facebook and Twitter have faced a deluge of marketer objections over offensive videos and other problematic content on their platforms. In the past year alone, these services have found themselves at the center of controversies over everything from livestreamed suicides to videos promoting terrorism to racist tweets from celebrities and others.
These incidents—combined with a backlash over a range of related issues, including the spread of fake news, the harboring of abusive trolls, interference in the 2016 US presidential election and the misuse of customer data—have put the platforms on the defensive.
Over the past few months, each platform has taken steps to beef up security and promote brand safety, particularly around video. These efforts have involved a combination of human monitors and machine learning analytics—with an emphasis on the latter, given the scope of the issue.
The companies have also taken steps to provide more transparency to advertisers, regulators, users and anyone else wanting to know how the platforms deal with troublesome content.
There are indications that these companies are making progress in beating back the tide of offensive content through improved algorithms and a sharper focus on the issue, but most experts say no platform will ever be 100% safe as long as it traffics in user-generated media.
Despite ad boycotts, public outcries, regulatory scrutiny and punishing media coverage, the brand safety issues that have plagued these platforms have not had an apparent effect on their bottom lines or share prices so far. Marketers continue to see value in advertising on these platforms.
Here’s what’s in the full report
2 files
Exportable files for easy reading, analysis and sharing.
10 charts
Reliable data in simple displays for presentations and quick decision making.