Leading AI researcher gives sobering warning about OpenAI’s AGI ambitions

The news: Eliezer Yudkowsky, a founding figure in artificial general intelligence (AGI) alignment research, is calling on the world to “shut it all down.”

  • Yudkowsky, co-founder of the Machine Intelligence Research Institute, says the letter urging a six-month moratorium on training AI models more powerful than GPT-4 doesn’t go nearly far enough, per Time.
  • He said others in the AI field have privately reached the same conclusion: that the most likely result of building an AGI is that literally everyone on Earth will die, and that this is “the obvious thing that would happen.”

The problem: Companies like OpenAI and DeepMind are going full throttle on developing AGI (systems that surpass human intelligence), yet no one fully understands how today’s most advanced AI models work.

  • University of California, Berkeley, professor of computer science Stuart Russell said he asked Microsoft whether GPT-4 has internal goals of its own that it’s pursuing. The response was: “We haven’t the faintest idea.”
  • A truly safe AGI might be impossible unless its inner workings can be explained and aligned with human values.
  • Strained global diplomacy and a tech arms race between the US and China might make an international agreement to halt advanced model training a long shot.

Do we need AGI? Widespread workforce disruption and human extinction are steep potential costs for a technology we probably don’t need.

  • Thanks to evolution, humans are skilled generalist thinkers, and we could collectively get even smarter by investing more in human learning rather than machine learning.
  • The gaps in our intellectual capabilities lie in solving specific hard problems such as climate change, disease, and space travel.
  • Instead of building AGI, we could put more resources into narrow, focused AI models adept at specific use cases, such as discovering new drugs and materials, while ensuring humans stay in control of AI.

"Behind the Numbers" Podcast