The news: The United Nations human rights commissioner is calling for a worldwide moratorium on public facial recognition and other AI tech the agency claims violates human rights.
- Those calls were part of a larger UN report that argues countries and businesses have advanced AI technology too quickly without first laying down appropriate guardrails to prevent discrimination and other harms.
- The proposed moratorium would apply to real-time facial recognition used in public spaces, as well as to government social scoring systems (like those used in China) that use AI tools to assign behavioral scores to individuals.
How we got here: A report released earlier this year by the US Government Accountability Office (GAO) revealed that six US federal agencies ran facial recognition tech on images of protestors from the 2020 George Floyd protests to verify their identities. The findings ultimately inspired a bipartisan group of US House lawmakers to reignite calls for comprehensive AI regulation.
- Federal law enforcement also reportedly used Clearview AI and other facial recognition services to track and arrest participants in the January 6 Capitol Hill riots, per Bloomberg.
- Meanwhile, advocates have accused China of using its social credit algorithm, in tandem with facial recognition and other AI tools, to persecute minority groups.
Who’s taking action? The UN’s call for a total moratorium comes on the heels of growing facial recognition bans in some US cities and self-imposed moratoriums from some of Big Tech’s most influential companies.
- At least 14 US cities and two states have issued bans on facial recognition, with varying degrees of severity.
- Earlier this year, Amazon announced it would indefinitely continue its moratorium on police use of its Rekognition facial recognition service.
- Amazon’s original announcement came amid similar moratoriums at Microsoft and IBM.
And it’s not just the US making moves: The European Union is reportedly considering new rules that would effectively outlaw AI used for mass surveillance or for ranking social behavior.
- Over 40% of adults in the US, Australia, France, India, and six other countries surveyed in a 2020 NortonLifeLock report said they thought facial recognition, in particular, would do more harm than good.
What’s next? Some degree of large-scale AI regulation in the US and EU appears to be a matter of when, not if.
Though civil liberties and privacy advocates will welcome such changes, emerging AI companies like Clearview AI will likely argue that such restrictions would hamstring US AI innovation at a time of increased global competition in the space.