Over 350 AI executives warn of AI’s ‘risk of extinction’

The news: Over 350 artificial intelligence industry leaders, including executives from OpenAI, Google DeepMind, and Anthropic, have signed a joint statement warning that AI could lead to human extinction.

The statement reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

A call for guardrails: The latest industry-wide warnings call for global prioritization of AI regulation, increased research and cooperation among developers, and the establishment of an international AI safety organization similar to the International Atomic Energy Agency.

  • Over 1,100 people signed an open letter in March asking AI labs to pause development for six months, though AI lab leaders declined to sign.
  • Eliezer Yudkowsky, a founding figure in artificial general intelligence (AGI) research, called in April for labs to "shut it all down."
  • Geoffrey Hinton, the "godfather of AI," recently resigned from Google so he could "freely speak out about the risks of AI."
  • Microsoft, a leading OpenAI investor that has woven AI into its product lineup, urged the creation of a new government agency to oversee AI regulation.
  • The UK’s Competition and Markets Authority (CMA), the FTC, and the White House each began regulatory scrutiny of AI’s use and effects in early May.

Why it’s worth watching: Recent breakthroughs in large language models and the frenetic pace of their adoption have intensified fears about AI spreading misinformation and displacing jobs.

Our take: A call for regulation by first-movers and leading AI companies could let them seize the narrative, to the detriment of startups that may find themselves shackled by future rules.

But regulating machine learning, artificial intelligence, and generative AI is nearly impossible because of the technologies’ complexity and decentralized implementation.

"Behind the Numbers" Podcast