On today's podcast episode, we discuss what artificial general intelligence (AGI) is capable of, why everyone is rushing to create it, and how close we are to reaching it. "In Other News," we talk about 'Ready Player One' becoming a metaverse experience and how we will start controlling our smart homes. Tune in to the discussion with our analysts Jacob Bourne and Gadjo Sevilla.
Subscribe to the "Behind the Numbers" podcast on Apple Podcasts, Spotify, Pandora, Stitcher, YouTube, Podbean, or wherever you listen to podcasts. Follow us on Instagram.
Made possible by
This episode is sponsored by StackAdapt, a multi-channel digital advertising platform used by marketers and agencies worldwide and ranked the number one DSP on G2. StackAdapt: speed that makes the difference. Learn more about the Creative Builder at go.stackadapt.com/creativebuilder.
Episode Transcript:
Marcus Johnson:
This episode is made possible by StackAdapt. Marketers, StackAdapt has just the thing to get you back on your design team's good side. With the Creative Builder in StackAdapt, you can build multiple ad creatives in many sizes in just a few clicks. Plus, make real-time edits to your creatives right in the StackAdapt platform. Learn more at go.stackadapt.com/creativebuilder.
Jacob Bourne:
And so by saying this, basically what Meta is saying is, "Hey, look, we're at the forefront of open source AI, and look what we're going to be building, and this is where you want to put your investment dollars." So, regardless of whether or not they reach it, I think it's good marketing for them.
Marcus Johnson:
Hey, gang, it's Monday, February 5th. Gadjo, Jacob, and listeners, welcome to the Behind the Numbers Daily: an eMarketer Podcast, made possible by StackAdapt. I'm Marcus. Today, I'm joined by two folks. Let's meet them. The first is based in California, one of our analysts on the connectivity and tech briefing. It's Jacob Bourne.
Jacob Bourne:
Hey, Marcus. Happy to be here.
Marcus Johnson:
Hey fella, thanks so much for hanging out today. We're also joined by another chap living in New York City, a senior analyst on the same team. It's Gadjo Sevilla.
Gadjo Sevilla:
Thanks for having me back, Marcus.
Marcus Johnson:
Absolutely, sir. Absolutely. Thank you, gents, for being here. Today's fact: Earth's rotation is changing speed.
Jacob Bourne:
Yay.
Marcus Johnson:
So usually... Wait, did you know this Jacob? Jacob's like, "Yeah, I've been tracking it."
Jacob Bourne:
Yeah, it's a bit of an issue for tech companies like Meta actually in terms of data center operations. Even a slight change can throw things off, so it's something of concern-
Marcus Johnson:
Interesting, you have been tracking it.
Jacob Bourne:
... For the tech industry.
Gadjo Sevilla:
Is it speeding up or slowing down?
Marcus Johnson:
So, kind of both. Earth's rotation is usually slowing down. That means, on average, the length of a day increases by roughly two milliseconds every 100 years, so not by much. If you do the math, a day lasted about 21 hours if you lived on Earth 600 million years ago. So when people say, "I could use an extra hour in the day," in about 200 million years you'll have one, if we're still here. But I think it might be speeding up at the moment. I read a BBC science article that said it's sped up in the last couple of years, though not by much. Anyway, today's real topic: the race to create artificial general intelligence.
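To sanity-check the arithmetic (assuming the roughly two-milliseconds-per-century slowdown cited above):

$$600{,}000{,}000\ \text{yr} \times \frac{2\ \text{ms}}{100\ \text{yr}} = 12{,}000{,}000\ \text{ms} = 12{,}000\ \text{s} \approx 3.3\ \text{hours},$$

so a day 600 million years ago ran about $24 - 3.3 \approx 21$ hours, and accumulating a full extra hour takes $3{,}600{,}000\ \text{ms} \div (2\ \text{ms}/100\ \text{yr}) = 180$ million years, in line with the roughly 200 million quoted.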
On today's episode, first in the lead, we'll discuss what AGI is and why it matters. Then, in "In Other News," we'll cover which metaverse experiences are most likely to initially move the needle and how we're going to control our smart homes. We start, of course, gents, with the lead, and we're talking all about artificial general intelligence. So, we'll talk a bit about what it is, why it's so important, and when we'll get there. The recent headline here is that Meta has joined Google, and the alliance of Microsoft and OpenAI, in the race for artificial general intelligence, or AGI, the AI industry's holy grail according to Scott Rosenberg and Ryan Heath of Axios. But before we get into talk of holy grails, Jacob, what the hell is AGI?
Jacob Bourne:
Well, I think the first thing to note is that it doesn't exist yet. It's a theoretical technology: an AI that would be on par with, or exceed, human intelligence. A technical detail worth noting is that if you have an AI on par with human intelligence, then because computers can think faster than the human brain, you effectively have something that exceeds human intelligence, because it's going to be out-thinking us. Again, it doesn't exist yet, so we don't actually know.
And the reason this is being talked about is that recent generative AI advances, models like GPT-4, have what some people think of as elements of AGI. So it's as if we're on the right track and have suddenly gotten closer, and that's raising new questions. For example, what really constitutes an AGI? How will we know when we achieve it? The old test, the Turing test, isn't adequate for our current AI models because it doesn't distinguish between mimicking the human brain and actually equaling or exceeding it. It's also coming up because the risks around a potential AGI are becoming more apparent, and so there's concern: if this thing can outsmart us, will it go rogue and start doing things we don't like? These are all good questions, and they're coming to the fore now.
Marcus Johnson:
Gadjo, it seems like it's fair to say, as Scott Rosenberg and Ryan Heath of Axios put it, no one agrees on how to define AGI, and no one has a testable way of determining whether any given AI project meets the bar. So, part of the problem here is there is no definition.
Gadjo Sevilla:
Yes, that's right. There is no definition. It's aspirational; everyone has moving targets. I think the basic idea is that it can mimic human sensory and motor skills, performance, learning, and intelligence. So, it could use these abilities to reason, solve problems, and potentially understand abstract concepts, which machine learning can't do right now. And the endgame really is autonomy, because it'll carry out highly complicated tasks and ideas without human intervention. So, there's really no way to measure that. And back to what Jacob was saying, there are huge ethical questions surrounding this, because as with all AI, you're wondering about the bias that's built into the datasets and the algorithms. So, what constitutes that superintelligence?
Jacob Bourne:
And to add to what Gadjo just said, OpenAI actually has set up an internal team to come up with this definition. How will we know an AGI when we develop it? And that's fine, but I think a lot of people in the industry also think, well, this can't just be something defined by one company. It has to be a broader effort to really define what this is.
Marcus Johnson:
And it does seem as though that's the case. I managed to cobble together a bunch of different "definitions" from different people. A simplified, under-10-words definition from the Axios piece is that AGI, or artificial general intelligence, is reproducing human-level reasoning in chips and code. Another, from a Wired article by Reece Rogers, suggests one definition could be an algorithm doing better than most humans at standardized tests, like the bar exam. And then there's another from a blog post by OpenAI CEO Sam Altman, describing AGI as anything generally smarter than humans.
The way I like to think of this actually comes from something I believe you both told me on an episode last year, about the different levels of AI. Level one is generative AI, GenAI, which you told me kind of sounds like a person. Level two is what we're talking about, AGI, artificial general intelligence, which reasons like a person. And level three is sentient AI, which thinks it's a person, or thinks like a person. And then you're into these kinds of theoretical future events like the singularity, the point where AI would outstrip our ability to contain it.
Jacob Bourne:
Right.
Marcus Johnson:
You'll be glad to hear.
Jacob Bourne:
And it's hard to determine any of those things, especially sentience; how would you know? And as far as outperforming humans generally, well, some people think GPT-4 already does that. So, these are really difficult things to measure, in part because humans are kind of complicated, too. We're trying to measure something against humanity, and that's a difficult thing.
Marcus Johnson:
It sounds dangerous. Why is everyone in a rush to get to this? Why is everyone in a rush to create AGI?
Gadjo Sevilla:
I think the company that declares they've achieved AGI gets to be at the forefront of AI development. But again, the difficulty there is the definition: have they achieved their own vision of AGI, or a general vision of AGI? That's still very elusive, but all these companies are racing to that point just because the demand for them to produce an upgraded version of AI is so high right now.
Jacob Bourne:
I think the idea here is that you would have the smartest mind on the planet if you had an AGI. Another reason is, this is something that OpenAI CEO Sam Altman seems to think, and it's that, well, this is a technology that could be weaponized by bad actors, and if someone's going to build it, it might as well be the good guys. So, let's build it before the bad guys get it.
Marcus Johnson:
To that point, OpenAI was actually founded as a nonprofit in 2015 specifically to develop AGI and make sure it would "benefit humanity" and not destroy it. So, exactly to that point: different people have different visions for this, and different reasons to create it. We talked about Meta getting into the game, and it appears their vision is humans and AIs all hanging out together in the metaverse. It sounds a lot like the movie Her with Joaquin Phoenix; great film. There's a debate being had between open and closed source. Mark Zuckerberg's vision is to reach AGI and then open source it so everyone can benefit, as he says. Others, like OpenAI, think the technology would, or could, be too dangerous in the hands of just anybody, preferring a more closed approach because of the safety benefits. Jacob, do you see this being open or closed in the future?
Jacob Bourne:
I think the prevailing view here is that it's going to be closed, because there are massive risks to society and international security. But there's also something else going on. There's something called the accelerationist movement in Silicon Valley: basically people who think there should be no restrictions whatsoever on AGI development, and that includes making it open source. Now, Meta is purchasing a massive number of NVIDIA's AI chips this year with the hope of building an open-source AGI; that was a recent announcement.
And I think there was definitely a lot of backlash to that announcement, people saying, "Look, this is not something a single company should decide. This is something the international community needs to decide because of the enormous implications." But I think for Meta, it's a way to get attention. Meta is currently a leader in open-source generative AI, and open-source AI is getting a lot of adoption from the enterprise. And so by saying this, basically what Meta is saying is, "Hey, look, we're at the forefront of open source AI, and look what we're going to be building, and this is where you want to put your investment dollars." So, regardless of whether or not they reach it, I think it's good marketing for them.
Marcus Johnson:
Open source: that vision isn't altruistic by any stretch of the imagination. Mr. Zuckerberg still wants to make money from this. As Alex Heath of The Verge put it, "If Meta can effectively standardize the development of AI by releasing its models openly, its influence over the ecosystem will only grow." Gadjo, where do you land on open versus closed?
Gadjo Sevilla:
Definitely closed. I think technology of this magnitude requires not just regulation but a certain level of control, which can only really be ensured when it's worked on in a closed environment. You could say that's an exaggeration, but this could very much be similar to the development of the atomic bomb, which means if everybody has access to it, there's no telling what could happen, especially at the early stage. So, I even see governments going so far as to step in once they get a whiff of "Are we close to AGI? We need to know what this entails and how it affects everybody." So, I think as we reach that point, it'll definitely lean more towards closed. As for the open aspect and Meta, that'll help speed up development, but I'm sure once they attain it, it's something they're likely to keep under wraps. It's a competitive advantage, after all.
Marcus Johnson:
Right, so a year ago, Microsoft researchers said they'd found "sparks of AGI" in OpenAI's latest large language model, but how close are we really to reaching AGI?
Jacob Bourne:
According to NVIDIA's CEO, Jensen Huang, we're about five years away. Other experts in the field put that number at a decade; 10 years from now we'll be there. And when a rumor first emerged that OpenAI was training GPT-5, part of the rumor was that, internally at least, they were calling it an AGI. Again, there's a marketing aspect to all of this. But here's the thing: AI is advancing rapidly.
And recently the Biden administration invoked the Defense Production Act, allowing the federal government to scrutinize advanced AI development and requiring AI companies to disclose their safety testing plans and results, and that actually goes into effect this week. Now, we don't know what comes out of that. Will they reserve the right to halt model training? But I think what it shows is that the US government sees this AGI trajectory as a potential national security threat. And the fact that it's stepping in now shows that we are getting close. It could be five years, and for the government to get a handle on this before we reach it, it's probably going to take them about that long. That's why we're starting to see more regulatory efforts now.
Marcus Johnson:
Gadjo, Yann LeCun, Meta's top AI scientist, thinks we're nowhere near human-level intelligence. It really depends on who you ask, it seems. Some people think we're closer, some think we're further away, but it goes back to that definition issue. Part of the problem is that we don't have a standard test for intelligence. Even for humans, we have IQ tests and SATs, but that Axios article notes they only measure a fraction of what we might think of as human brain power. So Gadjo, how close do you think we are to AGI?
Gadjo Sevilla:
I think a decade seems about right in terms of development. Now, do consider that we're dealing with technology companies that have alpha builds and beta builds. So, they're going to push out what they believe is part of an AGI when they feel it's ready for prime time, even if maybe it's not. There'll be a lot of that. But to have a more complete, really well-developed AGI would, I think, take about 10 years from where we are right now.
Jacob Bourne:
I want to add one final point, and that's that prior to ChatGPT's entrance on the commercial stage and the ruckus that caused, a lot of AI researchers and computer scientists actually thought that an AGI was more like 100 years away. So, just think about how much that changed in a short period of time just because a particular technology hit the market.
Marcus Johnson:
That's a great point. I'll end with this: Suresh Venkatasubramanian, a professor at Brown University, says he gets frustrated when companies talk about concern over sentient AI, which is level three, past AGI, because he thinks it overshadows the real AI concerns of the present, like bias. A series of articles published in a collaboration between Lighthouse Reports and Wired laid out how an algorithm used in the Netherlands was more likely to recommend that single mothers and Arabic speakers be investigated for welfare fraud. So, there are plenty of problems in the immediate term, well before AI taking over the planet.
Jacob Bourne:
And I think the other point there is that, in terms of the concerns about AGI going rogue, a lot of scientists say it doesn't need to be sentient to go rogue. Sentience is really more of a "hey, do robots have rights?" type of question.
Marcus Johnson:
Right. All right, gents, that's all we've got time for in the first half; time for the second half today. In other news: Ready Player One becomes a metaverse fan experience, and how are we going to control our smart homes in the not-so-distant future? Story one: "Ready Player One goes from the page and big screen to a metaverse fan experience," writes our senior director of briefings, Jeremy Goldman. He explains that the AI metaverse company Futureverse has launched Readyverse Studios, co-founded by Ready Player One author Ernest Cline and Dan Farah, producer of the film that netted half a billion dollars. The goal is to create a virtual universe that mirrors Ready Player One's dystopian world, working in partnership with Warner Bros. Discovery. But Gadjo, what metaverse experiences do you think are most likely to initially move the needle?
Gadjo Sevilla:
I think one of the biggest, I guess most commercial, uses would be virtual events and performances, which have already been quite popular in Roblox. We've seen it in Meta's Horizon Worlds, where they have NBA games and music concerts, and people tend to go for these things. They're a low-impact experience. So, I think those are the inroads into getting deeper into the metaverse, because it's something that maybe you can't experience, or is difficult to experience, in real life.
Marcus Johnson:
It's hard to know what people will do in the future. You could argue that what they're doing currently is an indication of what will drive adoption, but maybe they'll do something you just haven't thought of yet. What people are doing now, according to KPMG: the number one activity folks were getting up to in the metaverse last year was playing games (48%), then meeting with friends (45%), buying and selling things (24%), and consuming or creating art and working with others (17% each). But that's what people are doing now. Maybe they'll do something else that just hasn't been invented yet.
Story two: controlling the smart home, but not from your phone. Jennifer Pattison Tuohy of The Verge thinks there are two challenges that make the smart home a tough sell for folks: number one, you need your smartphone to control things, and number two, getting your devices to work together can be tough. In the article, she points to a few innovations that are trying to help. But Jacob, how do you think we're going to control our smart homes?
Jacob Bourne:
It's a funny question, because I think the real vision here is that it's all automated. The vision, really, is that it responds to your needs without you having to do anything. But what we're going to see in the meantime is flexibility: it could be your smartphone, it could be a TV, it could be a special console, it could be your voice, it could be something else entirely. I think the issue really boils down to the fact that when smart home technologies first entered the stage, there wasn't any collaboration or coordination between tech companies.
And so, they have all these random products that don't work together, or work in different ways, and then consumers have to do a lot of work and get confused. Now, there's been some movement in a better direction, with tech companies coming together on the Matter standard, which gives some interoperability between smart home devices, but there's more work to be done. Tech companies really have to work together on these smart home technologies to get to this vision of something that doesn't require any work on the consumer's part.
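To make the interoperability point concrete, here is a minimal, purely illustrative sketch; the SmartDevice interface and vendor classes are hypothetical stand-ins, not the actual Matter API. The idea is that once every vendor's device speaks one common interface, a single controller can drive all of them without per-brand integration code.

```python
from typing import Protocol

class SmartDevice(Protocol):
    """Hypothetical common interface, standing in for what a shared standard provides."""
    def turn_on(self) -> None: ...
    def turn_off(self) -> None: ...

# Two devices from different (made-up) vendors, each conforming to the shared interface.
class VendorALight:
    def turn_on(self) -> None:
        print("Vendor A light: on")
    def turn_off(self) -> None:
        print("Vendor A light: off")

class VendorBPlug:
    def turn_on(self) -> None:
        print("Vendor B plug: on")
    def turn_off(self) -> None:
        print("Vendor B plug: off")

def goodnight(devices: list[SmartDevice]) -> None:
    # One controller, many vendors: no per-brand app or glue code needed.
    for device in devices:
        device.turn_off()

goodnight([VendorALight(), VendorBPlug()])
```

Without a shared standard, that `goodnight` function would need vendor-specific code for every brand in the house, which is exactly the consumer burden described above.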
Marcus Johnson:
This piece was interesting because of a few examples she had. One was 3D map views of your home on your TV, so you can see and control your connected devices by simply tapping the screen; you get a bird's-eye view of the entire home and the devices. The second was Aqara's Home Copilot chatbot, which you can ask to do things like set up an automation that turns your lights off at 10 PM, locks the doors, and lowers the shades, all at the same time. But to your point, and as Ms. Tuohy says in the piece, the ambient smart home that understands context (for example, knows what brightness to set the lights based on who is in the room) is still a long way out. But that's what we're going for.
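For a sense of what that 10 PM automation amounts to under the hood, here is a minimal sketch of a scheduled "goodnight" scene; the device functions are hypothetical placeholders, not Aqara's actual API.

```python
import datetime
import time

# Hypothetical placeholders for real device commands.
def lights_off():
    print("Lights off")

def lock_doors():
    print("Doors locked")

def lower_shades():
    print("Shades lowered")

BEDTIME = datetime.time(22, 0)  # 10 PM
last_run_date = None  # ensures the scene fires only once per day

while True:
    now = datetime.datetime.now()
    if now.time() >= BEDTIME and last_run_date != now.date():
        # Run the whole "goodnight" scene at once.
        lights_off()
        lock_doors()
        lower_shades()
        last_run_date = now.date()
    time.sleep(60)  # poll once a minute
```

The chatbot's job, in effect, is to translate "turn the lights off at 10 PM, lock the doors, and lower the shades" into a rule like this so the user never writes it themselves.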
Jacob Bourne:
And that could be something AI helps us with; understanding context is generally what AI does. So, that could be the next thing we see.
Marcus Johnson:
Right. Aqara's Home Copilot, they say, is going for the ability to analyze usage patterns and proactively suggest customized automations, which could include tailored plans for energy-saving automation; to your point, likely using AI to do so. That is all we have time for, gents. Thank you so much for hanging out today. Thank you to Gadjo.
Gadjo Sevilla:
Thanks so much.
Marcus Johnson:
Thank you to Jacob.
Jacob Bourne:
Thank you. It was a great show.
Marcus Johnson:
Yes, indeed. Thank you to Victoria, who edits the show, and to James, Stuart, and Sophie, who do everything else. And thanks to everyone for listening in. We hope to see you tomorrow for the Behind the Numbers Daily: an eMarketer Podcast, made possible by StackAdapt.