The European Commission announced sweeping new proposals aimed at heavily regulating artificial intelligence across the EU’s 27-nation bloc. The first comprehensive legal framework focused specifically on AI, the proposals would cast a wide regulatory net, with particular attention paid to “high-risk” AI applications that could threaten human safety or fundamental rights. The proposed framework also calls for outright bans on especially concerning AI applications, including social scoring systems like those used in China, AI systems that use “subliminal techniques” to manipulate people’s behavior in ways that cause physical or psychological harm, and, with narrow exceptions, law enforcement’s real-time use of remote biometric identification systems such as facial recognition in publicly accessible spaces.
Under the proposed framework, “high-risk” AI use cases would include applications in areas such as biometric identification, critical infrastructure, education, employment and worker management, access to essential public and private services, law enforcement, migration and border control, and the administration of justice.
If passed, the proposed legal framework would cement the EU’s status as the world’s vanguard of tech regulation. The EU passed its landmark General Data Protection Regulation (GDPR) in 2016, which has since become the benchmark against which major data privacy laws around the world are measured. That legislation laid the groundwork and provided a model for the California Consumer Privacy Act (CCPA) and Virginia’s recent Consumer Data Protection Act. Like the data protection laws before it, the EU’s proposed framework could serve as a legal catalyst for new AI legislation outside Europe.
Despite the United States’ legislative patchwork, there are signs the country may be following Europe toward increased AI regulation. To date, fourteen US cities, including Boston, San Francisco, and most recently Minneapolis, have either banned or placed moratoria on government use of facial recognition. Additionally, Illinois has passed legislation requiring employers to disclose their use of AI in interviews and recruitment, and Washington state is considering a bill that would limit government use of AI. Federal agencies are getting involved as well.
Perhaps most significant is a blog post the FTC published this week, less than 24 hours before the EU announced its proposed reforms, which acknowledged research showing how AI tools can reflect and reinforce gender and racial biases and warned that the agency would intervene if companies misuse AI. Not mincing words, the FTC addressed companies using AI directly: “If you don’t hold yourself accountable, the FTC may do it for you.”