Facebook CEO Mark Zuckerberg proposed a series of reforms to Section 230 of the Communications Decency Act in prepared testimony released ahead of Thursday’s US House of Representatives hearing on misinformation and social media. Titled “Disinformation Nation,” the hearing marks the first appearance of Big Tech’s leaders before Congress since rioters, incited in part by misinformation circulating on tech platforms, stormed the Capitol on January 6. For context, Section 230 shields platforms from liability for content posted by users and gives them the power to regulate and remove that content.
In his prepared testimony, Zuckerberg suggested making Section 230’s liability protections conditional on platforms’ ability to implement “best practices” to combat misinformation. Zuckerberg’s proposal would require platforms to demonstrate they have systems in place for identifying and removing unlawful content—something Facebook just so happens to be well-equipped to do with AI moderation tools.
Facebook’s growing ability to offload content moderation to increasingly sophisticated AI puts it in a unique position to model compliance with Zuckerberg’s proposed reforms. Facebook currently relies on a hybrid approach to content moderation, in which AI flags harmful content for a small army of around 15,000 contract workers who adjudicate its removal. For context, Facebook last year claimed its AI detected 95% of the hate speech it removed before users reported it. In the future, the company hopes to reduce or eliminate the need for human contractors through advances in machine learning, like its self-supervised SEER computer-vision model.
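To make that workflow concrete, here is a minimal, purely hypothetical sketch in Python of how a hybrid AI-plus-human moderation pipeline might route content: a model scores each post, confident violations are removed automatically, and borderline cases are queued for human reviewers. The names, thresholds, and toy classifier below are assumptions for illustration only and do not describe Facebook’s actual systems.

```python
# Hypothetical hybrid moderation pipeline: an AI model scores each post,
# high-confidence violations are removed automatically, and uncertain cases
# are routed to human reviewers. Illustrative only; not Facebook's system.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Post:
    post_id: str
    text: str


@dataclass
class ModerationResult:
    auto_removed: List[Post] = field(default_factory=list)
    human_review_queue: List[Post] = field(default_factory=list)
    allowed: List[Post] = field(default_factory=list)


def moderate(
    posts: List[Post],
    score_fn: Callable[[str], float],   # model's estimated probability the post violates policy
    remove_threshold: float = 0.95,     # confident violations removed without human input
    review_threshold: float = 0.60,     # uncertain cases sent to contract reviewers
) -> ModerationResult:
    result = ModerationResult()
    for post in posts:
        score = score_fn(post.text)
        if score >= remove_threshold:
            result.auto_removed.append(post)
        elif score >= review_threshold:
            result.human_review_queue.append(post)
        else:
            result.allowed.append(post)
    return result


if __name__ == "__main__":
    # Toy scoring function standing in for a trained classifier.
    def toy_score(text: str) -> float:
        return 0.99 if "hate" in text.lower() else 0.1

    posts = [Post("1", "harmless update"), Post("2", "example of hate speech")]
    outcome = moderate(posts, toy_score)
    print(len(outcome.auto_removed), "removed,",
          len(outcome.human_review_queue), "sent to reviewers")
```

In a setup like this, the thresholds determine how much work falls to human contractors; reducing their role, as Facebook says it hopes to do, amounts to widening the band of cases the model is trusted to handle on its own.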
Calls to reform Section 230 have increasingly gained bipartisan support from lawmakers, as well as from tech giants like Facebook. Both Republicans and Democrats, along with President Joe Biden, have signaled support for reforming or outright repealing the law. Though Section 230 remains widely misunderstood, a 2020 Accountable Tech poll found that 57% of US voters said platforms should face liability in some situations. Zuckerberg himself signaled tentative support for 230 reform during a congressional hearing last year, and Facebook has since run ad campaigns claiming the company supports “thoughtful changes” to the law.
Zuckerberg’s proposed 230 reforms drew the immediate ire of critics, who argued the changes would benefit Facebook and other large, established platforms at the expense of smaller firms unable to build out expensive AI moderation tools.
They have a point: