The Daily: Consumer awareness of AI, generative AI's limits, and why AI search might be a disaster

On today's episode, we discuss the US public's everyday awareness of AI, whether generative AI is more than just a hyper-advanced predictive text tool, and whether AI search might actually be a disaster. Tune in to the discussion with our analysts Jacob Bourne and Yory Wurmser.

Subscribe to the "Behind the Numbers" podcast on Apple Podcasts, Spotify, Pandora, Stitcher, Podbean, or wherever you listen to podcasts. Follow us on Instagram.

Made possible by

Meltwater

Analyze, engage, and influence your audience with Meltwater’s data-rich suite of social and media intelligence solutions. More than 27,000 organizations around the world use Meltwater to make better business decisions faster. Learn how at meltwater.com.

 

Episode Transcription:

Marcus Johnson:

Hey gang, it's Monday, March 6th. Yory, Jacob and listeners, welcome to the Behind the Numbers Daily, the eMarketer podcast made possible by Meltwater. I'm Marcus. Today, I'm joined by two folks. Let's meet them immediately. First of all, we have one of our connectivity and tech briefings analysts based out of California. It's Jacob Bourne.

Jacob Bourne:

Hello everyone. Great to be here.

Marcus Johnson:

Hello. Hello. And also we have with us, well our principal analyst who heads up our tech team based out of New Jersey. He knows what WaWa is. It's Yory Wurmser.

Yory Wurmser:

Hey, Marcus, how are you?

Marcus Johnson:

Hello. Hello. Good, thank you. Good, good. We were talking about WaWa because Victoria's obsessed and now I'm obsessed. She made me obsessed. If you don't know, now you know. Today's fact: where did the term artificial intelligence (AI) come from? I don't really know, but this is what I found, so we'll see what you guys think. So to start off with, British mathematician Alan Turing proposed a test, the Turing test, that measured a machine's ability to replicate human actions to a degree that was indistinguishable from human behavior. So could a machine pass for a human? That's the Turing test.

Later that decade, in the mid-fifties, the term artificial intelligence was coined in a proposal for a two-month, 10-man study of AI, which was submitted by John McCarthy of Dartmouth College, Marvin Minsky of Harvard, Nathaniel Rochester from IBM, and Claude Shannon from Bell Telephone Laboratories. The workshop, which took place the following year in the summer of '56, is generally considered the official birthdate of the new field.

If you actually know where the term AI came from and that's not it, then you might still be right. It was hard to nail it down, but I think this is where it came from. That, rather conveniently, is what we're talking about today. Today's real topic, consumer awareness of AI and why AI search might be a disaster.

So in today's episode, in the Lead, we'll cover Americans' level of AI awareness, what ChatGPT can really offer the world, and why AI search might actually be a disaster. No In Other News today; there's too much to talk about in the Lead. We'll start, gents, with everyday AI awareness. So a new Pew Research Center survey finds that many Americans are aware of common ways they might encounter artificial intelligence in daily life, like customer service chatbots and recommendations based on previous purchases. However, just three in 10 US adults were able to correctly identify all six uses of AI asked about in the Pew survey, which they say underscores the developing nature of public understanding. So basically, they're saying that not many people were able to say, "Oh, there's AI in that, there's AI in that, there's AI in that," and get it all correct.

But my question here, Yory I'll start with you, as Americans start to realize that more and more of their daily interactions already involve AI, will this make them more or less comfortable with using the technology in the future? And I guess we're talking about, oh, there's AI in song recommendations. Or, oh, there's AI in my Apple Watch, things like that.

Yory Wurmser:

I don't have a definite feeling on this, but I think they'll become more comfortable with it. They'll be freaked out at first, but if the utility of using AI or what AI can do for them is worth it to them, I think they'll adapt to it and become less freaked out rather than more freaked out.

Marcus Johnson:

Jacob, do you agree? Because it seems like it could be quite hard for people to separate AI into these different buckets, because more folks were aware of certain types of AI than others. So two-thirds of Americans knew that there was AI in wearable devices, chatbots, and product recommendations, but only half, so fewer, knew that it was used to move certain emails over to a spam folder.

But when AI shows up in places like airports, where they're using it to detect who should and shouldn't be allowed into the country, it seems to me like consumers are going to struggle to say, "Okay, AI is good over here, it's bad over here."

Jacob Bourne:

Yeah, I think first of all, I want to say that probably one of the reasons why there's this lack of awareness for certain AI systems is that there's a lack of regulation. And what that means is companies that are using AI for consumer products or in the public sphere don't have to disclose it. So especially if it's an AI program that runs in the background for something that's not obvious, like a chatbot for example, it's more obvious that you're interacting with an AI than with a wearable, for instance. So I think that's partly the reason why there's a lack of awareness.

I think going forward we're going to see increasing awareness with the education system. People are going to grow up knowing that AI is all around them, and I think what we're going to see is people becoming more desensitized to it while also being more aware of this presence. Now that might be a source of comfort, but I think the other thing we're going to see alongside of that is as AI gets more complex and it proliferates into the public sphere, I think people are going to become more aware of some of the ethical concerns because we're going to see them play out in the legal system, for example, with more algorithmic bias claims, more copyright infringement claims, things like that.

Marcus Johnson:

Yeah, more high-profile cases.

Jacob Bourne:

Exactly.

Marcus Johnson:

So yeah, you mentioned that people are going to become more comfortable with it as it becomes more prevalent. Unsurprisingly, per the study, adults who are frequent internet users score higher on the AI awareness scale than less frequent users, and more and more people are using the internet, so that makes sense. At the moment, only 15% of people said they were more excited than concerned about the increasing use of AI in daily life, and 38% were more concerned than excited, so I imagine you'll see the scale start to tip there as well.

Also, not all AI is created equal. Using AI to help produce drought- and heat-resistant crops or detect skin cancer? Over 50% of folks say major advancement. Using AI to write news articles? Only 16% say major advancement. So people are going to start to, or have to start, thinking about AI differently depending on where it shows up in the world.

Jacob, what's generative AI? It's a term that's been all over the headlines as of late because of things like Bard from Google and ChatGPT from OpenAI. What does generative AI actually mean?

Jacob Bourne:

Right, yeah, it's not new. It's been around for a few years, but we've only recently been hearing about it because companies are actually building commercial products using generative AI. Now, up until recently, I think the AI that most people have been familiar with is called enterprise AI, which is basically the kind of data analytics and basic automation that we see running in the background.

I think generative AI is very different. It's built in a similar way and uses data. It's trained on large data sets just like enterprise AI, but a crucial difference is that it doesn't necessarily just behave as it's programmed to. Generative AI models are able to use the data that they're trained on to spontaneously learn new things, and they can use that spontaneous learning to create novel text, new images, audio, video, things that didn't necessarily exist before. Obviously it's all based on existing data, but what the models do is synthesize the data into new things that didn't exist.

Marcus Johnson:

Okay. So Yory, Jim VandeHei and Mike Allen of Axios write that generative AI essentially scans previous writing on the internet to predict the most likely next words. And Adi Robertson of The Verge kind of agrees, calling generative AI basically text generators that are amazing and beautifully powerful, but basically a version of your phone's keyboard's auto-predict function. Is it fair to call ChatGPT or Bard or any of the others just a hyper-advanced predictive text tool?

Yory Wurmser:

Yes, I think it is fair. The way these models work is they model the way you think: they learn patterns in how thoughts are usually expressed in language, and based on that they predict the next word. So yeah, basically it's a prediction engine for the next word or next few words given what comes before it. It is pretty reductive to say that's all it is, but basically, yeah, it's just a statistical output based on that.
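The "prediction engine" Yory describes can be sketched as a toy bigram model in Python. This is purely illustrative: real systems like ChatGPT are large neural networks, not word-frequency tables, but the core idea of predicting the statistically most likely next word from what came before is the same.

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = model.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

model = train_bigram_model("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # prints "cat": it follows "the" twice, "mat" once
```

Your phone's keyboard auto-predict works on roughly this principle, just with far more data and context.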

Marcus Johnson:

And so, thinking about AI in search and these different tools being used to search for things: Matteo Wong of The Atlantic was noting that ChatGPT hasn't trained on, and thus has no knowledge of, anything after 2021, and that updating any model with every minute's news would be impractical, if not impossible. To provide more recent information about breaking news or upcoming sporting events, the new Bing reportedly runs a user's query through the traditional Bing search engine and uses those results in conjunction with the AI to write an answer.
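The retrieve-then-generate pipeline described here can be sketched in a few lines of Python. Every function and data item below is a hypothetical placeholder, not Bing's actual implementation: the point is only the shape of the flow, where a traditional search happens first and the generative model writes from those results.

```python
def search_engine(query):
    """Stand-in for a traditional keyword search index (hypothetical data)."""
    index = {
        "ai search": [
            "Doc A: AI search pairs retrieval with generation.",
            "Doc B: Chatbots can produce inaccurate answers.",
        ],
    }
    return index.get(query.lower(), [])

def language_model(prompt):
    """Stand-in for the generative model; a real one would write prose."""
    return "Answer drawing on: " + prompt

def answer_query(query):
    """Step 1: retrieve fresh documents. Step 2: ground the model in them."""
    documents = search_engine(query)
    context = " ".join(documents)
    return language_model(context + "\nQuestion: " + query)

print(answer_query("AI search"))  # the answer text includes Doc A and Doc B
```

Grounding the model in freshly retrieved documents is what lets a 2021-frozen model say anything about today's news.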

He writes, "So beneath the outer glittering layer of AI is the same tarnished Bing we all know and never use." Jacob, is he right? How useful is AI search at this point? Is it just being packaged up to be more than it is? It's kind of like when 5G came out. It's like, "Oh, it's 5G." It's not really; it's four and a half. But people were selling it as something more than it was.

Jacob Bourne:

Yeah, it's certainly creating a sensation on the internet right now. But I think the evidence is that it's not quite ready to be a reliable search tool, and I think accuracy is one big reason for that. When you're generating list-based results, like with standard Google search for instance, it provides links to authoritative, hopefully accurate sources where someone can find an answer to their question on their own.

I think AI search is just spitting out an answer, and you don't really have, first of all, sources to go and verify whether it's accurate or not. And these tools are showing themselves to be quite inaccurate at times. For public search, we have to be mindful that people from all walks of life are going to be using this, just like they use Google now. And so if these bots are spitting out not just inaccurate information but disturbing responses, and that's what we've seen with Microsoft's Bing AI, for example, that's going to be a big issue for this really being a reliable search tool.

Marcus Johnson:

Heading into the future, I'm wondering what this is going to look like, or whether this takes a turn in another direction. What I mean by this is, you mentioned that you don't know where the answer's coming from a lot of the time. And bias was a major concern raised by Chirag Shah, a professor at the Information School at the University of Washington, who said, "It's especially tough for users to discern with ChatGPT-generated answers because there is a less direct link to where the information in the box is coming from." Do you think these AI chatbots are going to head more in the direction of spitting out a bunch of options as opposed to spitting out the direct answer?

Jacob Bourne:

Yeah, I mean, one company that's trying to work around this is Neeva. It has its Neeva AI search platform that actually shows sources alongside the answers. I think it's going to be helpful if Google, Microsoft, and any other players do that, or something similar to it. But because of the generative nature of the technology, it's synthesizing the data in at times unique ways. Even with sources there, the way that the AI is synthesizing the data may or may not be accurate or provide a good, well-rounded answer to the question. I suppose that offering more answers could help, but say the answers contradict each other; is that really going to be a useful tool for people?

So I think another thing to really know about this is that the accuracy issues can be solved to a certain extent through more complex AI models, but the problem with more complex AI models is that it then becomes more difficult to predict what the AI is going to spit out. And so I think that's going to be a real stumbling block for these companies to make AI search really ready for primetime.
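The source-attribution idea Jacob describes (an answer shown alongside the links it drew from, as Neeva does) can be sketched roughly like this. All field names, data, and logic are illustrative assumptions, not Neeva's actual API; the point is just that the generated answer and its verifiable sources travel together.

```python
def generate_with_sources(query, retrieved):
    """Pair a generated answer with the documents it drew from, so a
    reader can verify the claims (fields and logic are illustrative)."""
    sources = [doc["url"] for doc in retrieved]
    summary = "; ".join(doc["text"] for doc in retrieved)
    return {
        "answer": "Based on " + str(len(sources)) + " sources: " + summary,
        "sources": sources,
    }

docs = [
    {"url": "https://example.com/a", "text": "Fact one"},
    {"url": "https://example.com/b", "text": "Fact two"},
]
result = generate_with_sources("some question", docs)
print(result["sources"])  # the links a reader can check
```

Attaching sources doesn't fix synthesis errors, as Jacob notes, but it at least gives the reader a way to audit the answer.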

Marcus Johnson:

Yeah. Yory, speaking about where AI might go next, particularly with these chatbots we're talking about: Matteo Wong, again of The Atlantic, writes that Microsoft and Google believe chatbots will change search forever, but that so far there's no reason to believe the hype. Mr. Wong continues, "Even if ChatGPT and its cousins had learned to predict words perfectly, they would still lack other basic skills. For instance, they don't understand the physical world or how to use logic, are terrible at math, and, most germane to searching the internet, can't fact-check themselves." Yory, when you're thinking about the next step, the next iteration of these types of chatbots, what do you expect to see in terms of the next evolution or development in AI search?

Yory Wurmser:

I think some of these problems are going to be solved relatively quickly, and others are a little more complex. I think things like math are going to be solved pretty quickly by just integrating with another program that does math and forking some of the queries in different directions. I think you.com, another AI search company, does something like that. So that's one area where I think you're going to see development.

The accuracy problem is the tough one to crack and to solve completely. I think that's what's going to hold this back, but I do think it's resolvable. These models are improving rapidly each year, so I'm pretty optimistic that within a year or two we'll actually be using some of this, we'll actually be using generative AI. What we have to realize is that right now Bard's not publicly released, and for Bing you have to get on a waiting list. These are betas; these aren't real products yet. But I think they're going to develop pretty quickly and get fixed pretty quickly.

Marcus Johnson:

Yeah. Jacob, that tool you were talking about in terms of providing links to the sources, was that from OpenAI?

Jacob Bourne:

The sources one is Neeva. Neeva AI.

Marcus Johnson:

Yeah. Okay. Yeah, because there was another tool. So OpenAI released a free web-based tool designed to help educators and others figure out if a particular chunk of text was written by a human or a machine. This was from Axios' Ina Fried. I'm wondering, then, if we're going to see articles need to be labeled, "This was machine-written, this was human-written," because I feel like that could matter to folks.

Jacob Bourne:

We might see something like that come down the line. Right now, the FTC just hired a bunch of AI experts to their team, and they've basically said that they're going to be scrutinizing this kind of technology very closely. Part of what they're going to be looking at is whether or not these products act as advertised. That's going to be a big hurdle, I think, for generative AI. As companies struggle to do so, I think labeling is going to be a way that they can say, "Okay, just so you know, there's risk with consuming generative AI information," for example, and labeling it so consumers know that they're interacting with an AI.

Yory Wurmser:

And you've already seen companies like CNET, which wrote some articles using generative AI and just listed them as being by one of their staff writers, get a ton of blowback for doing that. So I expect people will want to know that it's AI-generated, right?

Jacob Bourne:

Because there were a lot of inaccuracies in those articles they published.

Yory Wurmser:

Yeah, there was one from Ian [inaudible 00:17:09] of The Atlantic, and the first three paragraphs of his article were written by, I think it was, ChatGPT. And in the fourth paragraph he says, "Oh, by the way, the first three were written..." And I felt duped. I was furious, and I was like, oh, how did I not realize this? So I think having a clear label will definitely help folks.

Marcus Johnson:

Final question here, just to put a bow on this conversation.

So Charlie Warzel of The Atlantic says, "It's hard not to get a sense that we are just at the beginning of an exciting and incredibly fast-moving technological era. So fast-moving, in fact, that parsing what we should be delighted about and what we should find absolutely terrifying feels hopeless. AI has always been a mix of both, but the recent developments have been so dizzying that we are in a whole new era of AI vertigo."

So Jacob, we'll go to you first and then to Yory second. When it comes to generative AI, what do you think we should be delighted about and what should we find absolutely terrifying at this moment?

Jacob Bourne:

Yeah, I mean, I think generative AI is a powerful technology that, if used correctly and carefully, could do a lot of good. Right now it's still in the nascent stages, but, for example, Bank of America just came out this week saying that by 2030 current models like ChatGPT are going to be 1 million times more powerful. So if they're right, then in just a few years this technology could be used to solve some pretty stubborn problems, like, for instance, climate change,

Marcus Johnson:

Health research.

Jacob Bourne:

Health research, treating diseases which are currently incurable, coming up with better materials science for possibly more environmentally friendly products, for example. And so those things are really exciting. On the other hand, I think the lack of regulation around this technology is a concern when you look at how fast it's accelerating. Right now, a number of big tech companies are lobbying the EU, which is trying to pass its AI Act, not to include restrictions on general-purpose AI. And I think what they're targeting there is that they want the full latitude to create what's known as AGI, artificial general intelligence, which is kind of what's on the minds of companies like OpenAI.

These chatbots are kind of like a stepping stone to something much greater for them. And of course the technology is nowhere near there, but knowing that that's what they're working on, and that there's a lack of regulation around it, is concerning, considering that, again, once we get more advanced algorithms, they become more difficult to control. I think the lack of public awareness and the lack of regulation make what these companies are doing a little bit concerning.

Marcus Johnson:

Yory, when you think about trying to parse out what to be delighted about and what to be terrified over, what comes to mind with regards to AI, generative AI?

Yory Wurmser:

Yeah, I mean, I think Jacob hit some good points. I would say in terms of usefulness, it's pretty exciting. I think it'll make research a lot easier. I think it'll make data analysis a lot more accessible. I think it'll be really useful for generating ideas, brainstorming for creative ideas. In all those ways it'll simplify a lot of our lives. And search, I think, down the line, pretty soon, is going to be pretty great.

In terms of frightening, a lot of the things Jacob said: the bias problems, the copyright infringement problems, and, in the long run, the job destruction that's going to happen from this. I don't think that we're all going to lose our jobs, but I do think there are going to be some jobs that get shifted, some jobs that change, and a few jobs that disappear as a result. Some writing, or stuff that replicates sort of an easy pattern, I think those types of jobs might disappear.

Marcus Johnson:

Yeah, bias being a major one we've touched on. And the other one: the potential for misinformation. New generative AI tools, ones we've talked about, have the potential to release a vast flood of misinformation online. This is what Ashley Gold and Sara Fischer of Axios were saying, and they were pointing to Alphabet's $100 billion misstep after its Bard chatbot messed up a historical fact in a public marketing video meant to flaunt the tool's sophistication.

On the delighted part, gents, I am interested to see, in the short term, if we see more of a walled garden approach to AI search. So medical journals: if you are just pumping vetted research into a chatbot, the reliability and the trustworthiness of it are going to be such that you don't mind getting answers from it, because you know what you've put into it, so therefore you know what you're going to get out of it. Similar for research companies' platforms: we know what goes into our platform, so we know what results are going to get spit out. Maybe retail websites too. Can you see that happening in the short term?

Yory Wurmser:

Yeah, absolutely. I mean, I think that's one of the solutions: just having very limited use cases at first and having kind of an AI that knows which direction to send a query, to some of these more specialized ones. But you're still going to have some of these problems where it's taking correct information and synthesizing it incorrectly, and that's something they're going to have to fix. But I agree that these more narrow types of searches are probably the first way that this is going to have a practical impact.

Jacob Bourne:

Yeah, absolutely. I think one thing to know is that if you're using better data, and I think you get better data partly by using smaller data sets, then you're going to get a better AI as a result. But again, to Yory's point, it's a predictive text tool, right? So again, this unpredictable nature of it is always going to be there; some people call it hallucination. The AI will hallucinate and make things up, and I think that inclination is going to be there. I think that's going to be a tough one to fix. And I think the biggest concern is that sometimes these things that the AI generates are not obvious. It might be making something up that seems very plausible but in actuality is completely fictitious.

Marcus Johnson:

That's all we've got time for for the Lead. It's time now for the post-game report. So gents, I'll start with Yory, then go straight to Jacob. What's your biggest takeaway from the episode? Yory?

Yory Wurmser:

Generative AI is super exciting. It probably will meet the hype, but probably not as quickly as some people are suggesting right now.

Marcus Johnson:

Jacob, how about for you?

Jacob Bourne:

Yeah, it's an exciting time. I think we're going to see massive changes as the technology advances. But I think it's important to keep in mind that what we're seeing with generative AI is really an experimental technology that's being commercialized very rapidly, and I think we're going to run up on a lot of societal issues as a result of that.

Marcus Johnson:

Well, next Monday, Jacob's going to come back and join us with Gajo, who also covers this space, to speak about business and AI monetization paths and how marketing could use it. That's all we've got time for today. So thank you so much to my guests. Thank you to Yory.

Yory Wurmser:

Glad to be here. Thank you so much.

Marcus Johnson:

Thank you, Jacob.

Jacob Bourne:

Great. Thank you so much.

Marcus Johnson:

And thank you to Victoria, who edits the show, James, who copyedited it, and Stuart, who runs the team. Thanks to everyone for listening in. We'll see you tomorrow, hopefully, for the Behind the Numbers Daily, an eMarketer podcast made possible by Meltwater.
