On today's episode, we discuss how AI search ads will most likely work, how big of a problem reputational brand damage could be in an AI-content-generated world, and what generative AI can do for companies today. Tune in to the discussion with our analysts Jacob Bourne and Gadjo Sevilla.
Subscribe to the “Behind the Numbers” podcast on Apple Podcasts, Spotify, Pandora, Stitcher, Podbean, or wherever you listen to podcasts. Follow us on Instagram.
Analyze, engage, and influence your audience with Meltwater’s data-rich suite of social and media intelligence solutions. More than 27,000 organizations around the world use Meltwater to make better business decisions faster. Learn how at meltwater.com.
Episode Transcript:
Marcus Johnson:
Hey gang, it's Monday, March 13th. Gadjo, Jacob, and listeners, welcome to the Behind the Numbers daily, an eMarketer podcast made possible by Meltwater. I'm Marcus. Today I'm joined by two folks. Let's meet them. We start with one of our senior analysts who writes the connectivity and tech briefings based in New York. It's Gadjo Sevilla.
Gadjo Sevilla:
Hi, everybody.
Marcus Johnson:
Hey, fella. We're also joined by one of our analysts on the connectivity and tech briefings based in California. It's Jacob Bourne.
Jacob Bourne:
Hey, everybody.
Marcus Johnson:
Hello, hello. So gents, today's fact. Where did Dr. Pepper come from? A question I'm sure you asked yourselves daily. Well, in case you have been, Dr. Pepper was invented in Waco, Texas in 1885 by a pharmacist named Charles Alderton while working at Wade Morrison's old corner drugstore. Alderton noticed how patrons loved the sweet smell of the soda fountain and decided to create a drink that tasted like that familiar smell. And apparently, Wade Morrison, the drugstore owner, named it Dr. Pepper after Dr. Charles Pepper, a Virginia doctor who was the father of a girl that Morrison was once in love with. So he named it after the girl he was in love with's father. I'm sure his now wife, if he got married, loves that. But that's where it apparently came from.
The name's up for debate, because this is according to Dr. Pepper's website, but I did see a few other theories out there. But that's where it came from. Also, something to come out of Waco is Jake's Texas Teahouse, which is one of the best diners in the whole country. I've been there once. I'll be stopping through this weekend on my way up north. So I'll see you soon, Waco. Anyway, today's real topic: marketers and companies developing relationships with AI.
In today's episode, we will be covering how AI search ads work, the potential reputational damage that AI generated content could do to a brand, and also what ChatGPT can do for businesses today. That's all in the lead; no In Other News today. So we'll start with AI search ads, gents. Catherine Perloff and Patrick Kulp of Adweek write that, "As Microsoft and Google race to integrate the next wave of language AI into their respective search engines, the push has left questions about what these new conversational formats will mean for how ads are served. AI systems like ChatGPT could offer folks a more interactive way to access internet information, but how Microsoft and Google will monetize their investments in this tech with paid placements is so far unclear to ad buyers." So gents, how are AI search ads most likely to work?
Gadjo Sevilla:
I can start. I think it'll be similar to the way Bing Chat works now. So you put in keywords, and it generates a response based on the AI. So it could do the same with links, ads, or even video content from within that search page.
Jacob Bourne:
I was going to say that I think we can anticipate a lot of experimentation with how it's going to be integrated, and I think we see that on the internet as a whole. Ads get crammed into every nook and cranny. I don't think there's any reason to think that generative AI is going to be any different. I think we're going to see companies experiment with all types of ad placement, and it might end up annoying some users. So we'll probably see some premium offerings where users can pay to get ad-free experiences.
Marcus Johnson:
So there are two potential concerns here in terms of AI search ads. One is about how monetization will fit into the flow of a chatbot conversation. So when, and what will... After every question, do you put links? How's that going to look in the flow of a conversational text box or a voice conversation? And the second is pricing, since there are likely to be fewer impressions in a chat versus a search query, because conversations are more complex. Any thoughts on those two concerns, gents? Where to put ads in a chatbot conversation, and also the pricing issues?
Gadjo Sevilla:
I think the placement of the ads, if it's something like a discussion, they could hand you off to the client's chatbot, and that could be done seamlessly. And the next thing you know, you're interacting with the brand's chatbot. And so that drives you deeper into that discussion.
Jacob Bourne:
I think in terms of the pricing, this is probably going to be a tough one to pencil out for the tech companies running these programs. Generative AI has a reputation for having very high compute costs, and so we might actually see ad revenue not quite suffice to pay those bills. One of the things we see from Microsoft so far is that, starting in May, it's hiking prices for its Bing Search API by as much as 9%, and the reason for that is to try and help pay for these search improvements. So while ads are, I think, always going to be an important part of this puzzle for internet companies, it's going to be tricky for these AI search bots, and they might not quite fully do the trick in terms of generating enough revenue.
Marcus Johnson:
Interesting. So Reuters was recently reporting that Microsoft was speaking with marketers about what its AI ads might look like; ads within the Bing chatbot, they were saying, could be featured higher on the page than traditional search ads. And then ad group Omnicom pointed out that search ads could generate lower revenue in the short term if the chatbots take up the top of search pages without including any ads. So there's lots to work out here. Another Adweek piece though, gents, by Mr. Kulp, Patrick Kulp, notes that the rise of AI content generation stirs brand reputation fears, with Gartner predicting that 80% of marketers will deal with content authenticity issues by 2027. 80% of marketers dealing with content authenticity issues by 2027 because of the rise of AI content generation. How big of a problem will potential brand reputational damage become in an AI-content-generated world, and what can brands do about it? Jacob, I'll start with you.
Jacob Bourne:
I think this is going to be a massive problem, and it's going to come from various sources. The first source is actually going to be internal. These AI products are supposed to save companies time, and that might mean internal quality control issues as marketing teams generate these ads. So I think there's a role there for internal vetting of what they produce. But then I think there are going to be external issues, where a potential firestorm of AI-generated spam content could be used to impersonate brands. That's going to be a huge problem to deal with, and I think the companies themselves are really going to depend on the ISPs and search engines to deal with that problem.
I think another problem that we might see in terms of placing the ads right in with the chatbot responses is that we might see instances where ads appear alongside problematic content, or content that the brand doesn't really want to be associated with. And given generative AI's unpredictable nature, that could also be another issue that might be difficult to deal with.
Marcus Johnson:
Because there's also just a lot more. A great point. There's just a lot more content that can be created in a generative AI world, and so more content to sift through for reputation management's sake. And when things go wrong, it doesn't take much to really tarnish a brand. There was one example of things going wrong in an AI world: KFC's German arm sent out a promotion based on a Holocaust reference in November of last year, which it blamed on an automated system within the company. So you've got to check content coming from within the company, as you mentioned, Jacob, and the content coming from elsewhere as well.
There is an initiative here though. Content Authenticity Initiative formed in 2019 by brands, tech companies, and media outlets to develop technical standards and tools to distinguish between real and fake content. So there are some initiatives, but it seems like a heck of a task.
Jacob Bourne:
And the more advanced the generative AI becomes, the harder it's going to be to detect what's fake and what's real.
Marcus Johnson:
It's a problem becoming more and more of a thing as time goes on. Gartner is predicting that 30% of outbound marketing messages from large organizations will be synthetically generated by AI in the next two years, and that four out of five enterprise marketers will have established content authenticity functions to protect against misinformation and other harmful fake material by 2027.
Let's move to the question of how responsible or irresponsible we see companies and marketers being, advertisers being, in terms of what they're telling the public regarding how much artificial intelligence is in upcoming or could be in upcoming products. Because Insider Intelligence senior director of marketing, retail, and tech briefings, Jeremy Goldman, notes that the Federal Trade Commission, the FTC, is advising advertisers of AI products not to make promises they can't keep, cautioning companies against making false or exaggerated claims about AI capabilities in their ads.
And so my question is, is there a way of measuring the level of AI in a product or service so that there's something to check against? The comparison I've got here is autonomous vehicles. The Society of Automotive Engineers, SAE, defines six levels of driving automation ranging from zero, fully manual, to five, fully autonomous. These levels have been adopted by the US Department of Transportation. So zero is no automation, and then it's driver assistance, partial, conditional, and high automation. And then full automation is at the other end of the spectrum. Will we see something similar, or does it exist already? Is there, or will there be, a similar classification scale for AI so we can check these claims from folks, in terms of how much AI is in their products, against a measurable indicator?
Gadjo Sevilla:
I do not think one exists right now. And I think transparency is one of the bigger issues surrounding generative AI. We've seen companies who employ this without spelling it out getting into trouble, and later on saying, "Hey, it was an experiment. It wasn't meant to be a product." So I think the burden of proof, or finding ways to give something similar to nutritional information when AI is used, falls on the marketers or the product managers, just to say, "This amount of the information is generated," at least in the beginning, until such a system as the one you're talking about for autonomous vehicles can be determined. I think it's happening at such a fast pace that right now, the standards are still lagging behind the innovation.
Jacob Bourne:
I think even the autonomous vehicle system is pretty messy, and generative AI is going to be even messier to rank. And I think the reason for that is, if you think about an autonomous vehicle, it has one objective, and that's to drive a car safely. With AI models, there already are all kinds of products that function in different ways, and so assessing them with one level system is probably going to be impossible. So we're not going to see a one-size-fits-all system. What we might see, for example, is search bots ranking their level of accuracy. I think that might be one way they can come up with some kind of vetting benchmark.
And from a regulatory perspective, I think the important thing is just for companies to be responsible about how they're marketing their products, so that it really is on par with what the consumer is going to expect in terms of its use. Generative AI is in its nascent stages, and we already have examples of that not happening. One recent one is a user trying to perform an AI search, and the bot starts love bombing the user instead of generating search results for a rake, for example. I think that really shows how unpredictable generative AI can be. There are definitely things that can be done to mitigate those kinds of extreme cases, but knowing that the FTC is really going to be targeting this very issue, I think companies really need to take it seriously.
Marcus Johnson:
Well, and Jeremy in his article was noting the FTC is planning to create a new department, and also to increase the number of technologists it employs with a focus on AI. But you made great points, gents. And Jacob, your point about it just being incredibly messy: you've got zero to sentient, sentient meaning self-awareness, human-level intelligence, something that can pass a Turing test. And then you've got everything in between. It's not linear at all. Even with autonomous cars, it's trying to be linear in terms of its progression from step one to two to three, but with AI, it's all over the shop.
Final question here then, gents. Eric Holtzclaw, co-founding partner and chief strategist at Liger Partners, just wrote a piece in Inc saying he's been using ChatGPT for his business for a month, and within just that one month it's already saved him a whole work week and nearly $8,000. How? Well, he points to a few ways. One, researching topics quickly. Two, adding meta descriptions to the pages of the websites they manage. And three, making sure descriptions are accurate. So asking ChatGPT what it thinks a company's product or service does, and comparing the result with an evaluation of alternatives. If the description's off base, it points to a need to improve existing content. But Gadjo, I'll start with you. What, to you, are some of the biggest things that AI like ChatGPT, so generative AI, can do for businesses today?
Gadjo Sevilla:
I think today, we can expect it to be a good assistive tool. For example, you can have a voice AI that listens in on meetings, takes down notes, and then proactively schedules agendas. In the case that you mentioned, a research bot that could sift through data to determine redundant sources and grade the quality of the content. So these are fairly low impact, repetitive tasks that can be done with larger data sets, and clearly could accelerate a lot of processes and save a lot of money.
Marcus Johnson:
Jacob, how about for you?
Jacob Bourne:
These are of course powerful tools with a lot of long-term potential. I think right now, we're seeing a lot of pressure on companies to adopt these tools, and the way companies might want to think about approaching it is really how generative AI can serve as creative inspiration, versus trying to use it as a shortcut. There's a risk of quality declining as a result of purely looking at the time savings, versus looking at how we can really be more creative in our work, using these tools for collaboration rather than as a replacement for certain functions. I think in the near term, probably one of the highest productivity gains we're going to see from generative AI is from code-completion tools like Copilot, and to a certain extent, ChatGPT. I think that's really where the big money savings is going to be for companies.
Marcus Johnson:
Well, that's all we've got time for in the lead. It's time now, of course, for the post-game report. So gents, a couple of takeaways from you. Gadjo, I'll start with you. A quick ten-second takeaway from the first half of our conversation about marketers and companies developing relationships with AI.
Gadjo Sevilla:
So I think with the race to be first and be ahead, there's always a danger of over-promising what AI can do. And you don't want to find out through some mishap that you were incorrect or maybe too eager to push the technology. So it definitely needs guardrails. Now, can companies and marketers hold that thought, or are we just going to see a race happening?
Marcus Johnson:
It's going to be a supreme balancing act.
Gadjo Sevilla:
Definitely.
Marcus Johnson:
Jacob, how about you?
Jacob Bourne:
I think the biggest thing that companies should be thinking about when they're adopting ChatGPT and other similar tools is that AI does not know or understand what the goal of your project is. If and when it gets to that point, then we really need oversight. So there's never going to be a time where we can just use these tools without human oversight. And so I think going forward, it's really about companies experimenting with how to get the most gain from these tools, while also considering what could be lost by adopting them, and really trying to mitigate those losses.
Marcus Johnson:
Well, that's all we've got time for this episode. Thank you so much, gents, for hanging out today. Thank you to Gadjo.
Gadjo Sevilla:
Thank you.
Marcus Johnson:
Thank you, of course, to Jacob.
Jacob Bourne:
Thank you, Marcus.
Marcus Johnson:
And thank you so much to Victoria who edits the show, James who copy edits it, and Stuart who runs the team. Thanks to everyone for listening in. We'll see you tomorrow, hopefully, for the Behind the Numbers Daily, an eMarketer podcast made possible by Meltwater, where we'll be talking all about the digital healthcare consumer.