The Daily: How to prepare for AI, labels on smart devices, and Apple's lip-reading technology

On today's episode, we discuss the ways in which firms are prepared—and unprepared—for AI, what happens when companies have finished test-driving generative AI, and what to make of Meta giving away its AI model. "In Other News," we talk about when we can expect to see GPT-5 and how Apple’s lip-reading technology could be a step toward artificial general intelligence. Tune in to the discussion with our analysts Jacob Bourne and Gadjo Sevilla.

Subscribe to the "Behind the Numbers" podcast on Apple Podcasts, Spotify, Pandora, Stitcher, Podbean, or wherever you listen to podcasts. Follow us on Instagram.

Made possible by

Awin

Unlock unlimited opportunities that reach consumers everywhere with Awin's affiliate marketing platform. Choose which partners best match your marketing objectives. Control costs by defining how you pay them. And customize your program to mirror your unique goals. Consider Awin your full customer journey marketing partner to drive growth.

Episode Transcript:

Marcus Johnson:

This episode is made possible by Awin. Unlock unlimited marketing opportunities that reach consumers everywhere using Awin's affiliate partnerships platform. Choose which partners best match your marketing objectives, control your costs by defining how you pay, and customize your affiliate marketing program using Awin's tech to mirror your unique goals, whatever they may be. Visit awin.com/emarketer to learn more and get started today.

Gadjo Sevilla:

But really, there's no middleman that can build the right technologies for each and every business yet. So, that whole part will still take time to develop, and I do see market opportunities for that kind of management solution.

Marcus Johnson:

Hey, gang, it's Thursday, August 17th. Gadjo, Jacob, and listeners, welcome to the Behind the Numbers Daily, an eMarketer podcast made possible by Awin. I'm Marcus. Today, I'm joined by two folks. Let's meet them. Both of these gentlemen work on our Connectivity and Tech Briefing. One of them is based on the left coast out of California. His name is Jacob Bourne, one of our analysts on the team.

Jacob Bourne:

Hey, Marcus. Hey, Gadjo.

Marcus Johnson:

Hello there. We're also joined by Gadjo Sevilla, who is one of our senior analysts for the Connectivity and Tech Briefing based in New York.

Gadjo Sevilla:

Hey, Marcus. Hey, Jacob. Happy to be back.

Marcus Johnson:

Hey! So, gents, actually, welcome to the last show we'll be recording from our old office at 11 Times Square.

Jacob Bourne:

That's right.

Marcus Johnson:

Yeah, this is the last show we'll be recording from here. We're moving to a new office downtown, joining some of our sister companies in a new building, so yeah, this is the last recording from this space, which is kind of crazy to think about, given the show was started four or five years ago in a broom closet, basically, just across the way. See? A bit of a moment here.

But I do have a fact of the day for you apart from that fact. What does Wi-Fi stand for? Well, nothing, as it turns out. Wi-Fi is popularly thought to mean "wireless fidelity," like hi-fi being high fidelity, but the term Wi-Fi was coined in an effort to find a catchier name for a newly invented wireless technology that until then had been referred to as "IEEE 802.11b Direct Sequence."

Jacob Bourne:

Not a catchy term exactly.

Marcus Johnson:

It's not. It doesn't grab you.

Jacob Bourne:

It doesn't roll off the tongue.

Marcus Johnson:

So, the term Wi-Fi doesn't stand for anything. It was a name invented by the brand consulting firm Interbrand on behalf of the Wi-Fi Alliance. So, that's what Wi-Fi stands for: nothing.

Today's real topic, we're talking about preparing for artificial intelligence.

Today's episode, first in the lead, we will cover preparing for AI. What's the best way to do that? Then, we move to In Other News, and the two stories we've got for you today are labels for your internet-connected devices and Apple's lip-reading technology. We start, of course, with the lead, where we're talking about artificial intelligence and how best to prepare for it. A few articles we wanted to discuss here. The first is a recent Economist article titled "Your Employer Is Probably Unprepared for Artificial Intelligence."

"It's one thing to invent a technology. It's quite another to integrate it into society," this article was saying. It was noting that the tractor was invented in the 1800s but took close to 100 years before everyone was using them. Not everyone, but everyone who needs a tractor. Nancy Stokey of the University of Chicago, saying that "the diffusion of technology improvements is arguably as critical as innovation for long-run growth." The result, the article suggests, is a two-tier economy with firms that embrace technology pulling away from the competition.

Jacob, I'll start with you. This piece was saying that folks, employers in particular, are most likely unprepared for AI. How, in your opinion, are firms at this point most unprepared for AI?

Jacob Bourne:

Well, I think there's a lengthy list of ways in which firms can be unprepared, but one that people should watch out for is a lack of strategic alignment. In other words, adoption isn't enough. In order for firms to truly get that net benefit from AI, there has to be a carefully crafted strategy for leveraging it in a way that aligns with the broader business strategies and core values of the company.

One trend we're seeing is that employees will start using these AI tools independently, with or without their employer's knowledge, and it's actually given rise to a worldwide trend where many companies are banning ChatGPT and other similar AI tools. This could end up being a missed opportunity, so I think a better approach would be implementing a well-crafted strategy that allows for that adoption for targeted use cases, one that, again, aligns with the company's core values and overall business strategy, and that also incorporates robust security measures.

Marcus Johnson:

Gadjo, what jumps out to you when you think about how firms are unprepared for artificial intelligence?

Gadjo Sevilla:

Yeah, they need to adopt a multifaceted approach, starting from the infrastructure level and personnel training. More importantly, they can't neglect human oversight. Neglecting it could lead to poor decision-making, and we've seen some of these results, where there are ethical issues involved that they just don't address. Also, AI, especially machine learning, relies heavily on the source data. If companies don't have organized, clean data, it's not going to be a good result for them, because poor data quality will just lead to flawed AI models, and that could cause more problems, and more expensive problems, in the long run.

Marcus Johnson:

Yeah, garbage in, garbage out. One of the things that jumped out to me from the article was the idea that we've been hit with this wave of at least conversation about artificial intelligence and supposed adoption, but early adopters can skew our perceptions of a technology's true popularity. What I mean by that is the article was saying that while folks do adopt technology faster than ever today, like the social media app Threads, which went from zero to 100 million users in a week, mass usage is another thing entirely, saying that despite all these inventions, real take-up has been slow. So, in 2020, they say, less than 2% of U.S. companies used machine learning, despite all the headlines about it. Less than 7% of America's manufacturing sector uses 3D printing, and only one in four business workflows is on the cloud, a number that's been stuck for five years.

So, there's this idea about preparing for AI. It may seem like folks are way behind the curve because of how much conversation is being had about it, but in reality, after an invention, it takes a real long time for things to get adopted even in today's world.

Two gents on our briefings team, Jeremy Goldman and Daniel Konstantinovic, were noting that according to a recent VentureBeat survey, a majority of organizations (55%) are dabbling with generative AI, but less than one in five (18%) plan to increase spending on the technology. They write that there's a consensus that AI is a game-changing technology that will become crucial to many industries, but exactly how it will take shape and how companies should put money behind it is less clear. Gadjo, what happens when companies are done test-driving generative AI?

Gadjo Sevilla:

What we're seeing right now is that they're taking a shotgun approach, trying different AI models. I think the quest to find the right solution to solve the right problems is, depending on the business, going to be a lengthy one and one that requires oversight. So, some companies might realize it's not for them after they've tested or made initial investments, while others may find niche applications where AI could help accelerate their business. It really depends on their long-term goals and how much time and money they're willing to invest to see it through. Because right now, there's a lot out there for sure, but really, there's no middleman that can build the right technologies for each and every business yet. So, that whole part will still take time to develop, and I do see market opportunities for that kind of management solution.

Marcus Johnson:

Jacob, do you see this at the moment as folks using this technology to solve a problem they don't really know exists yet? Because it seems like everyone rushed to use generative AI without actually thinking through what they want it to solve. Some folks, it seems, have landed on pretty cool solutions to very specific problems. There was one in this article: furniture retailer Wayfair recently launched a new tool that lets people upload photos of rooms, then uses AI to decorate the image with recommended Wayfair products. So, it seems like they're tailoring the AI to their company's needs. Other folks maybe have just rushed into this without really thinking through the end result.

Jacob Bourne:

Yeah, there's a lot of pressure to adopt here, and, of course, I think generative AI is more obviously applicable to certain business use cases than others. So, some companies, in certain industries, have an easier time of it. But in 10 years, I don't expect we're going to look back and think, "Oh, gee, adoption was really slow." I think this is ultimately going to be a really fast revolution.

We're in a transitional phase right now, no doubt, and it will take some time. The early adopters, the pioneers, are going to set the pace, which I think is ultimately going to be a very fast pace, and it's going to be about how to effectively use this technology while avoiding the pitfalls. It goes back to implementing that strategic alignment.

Now, the biggest limiting factor here, I think, is that there's a big upfront financial investment in terms of figuring this all out. You need people with AI expertise, and there's not a huge workforce right now that's just ready to go and ready to hire on that front. So, eventually, I think companies that are currently slower on the uptake are going to have to make those hires, but it's not going to come cheap. Right now, for example, Netflix, which is trying to stay competitive on AI, is hiring for an AI expert with a salary of $900,000. So, that gives you the-

Marcus Johnson:

I'll take it.

Jacob Bourne:

Yeah, exactly. Right. But it just shows how in demand these AI developers are and how there aren't that many of them, so it's going to be a big financial investment in order to really be at the forefront of this adoption race.

Marcus Johnson:

Yeah, I can see a kind of two-tiered world developing, because a lot of companies, arguably most companies, aren't going to be able to fork out those kinds of salaries. So, to your point, Gadjo, about building a very comprehensive and unified strategy across the business, it does seem like maybe companies are going to develop those individuals from within and ask them to spend more time learning about this, as opposed to hiring from without.

Gadjo Sevilla:

That makes sense, because then they can tailor it for their needs, and they're upskilling employees who are already familiar with their products and services, who know what they need to focus on, rather than just ingesting all the AI data and sort of dealing with it. They can make it work and tailor it to their needs, but, of course, that also takes time. But it's not going to be as expensive, I would imagine, as hiring a high-skilled AI professional.

Marcus Johnson:

There are a lot of ways folks are getting involved with generative AI. The most obvious one is ChatGPT, rolled out by OpenAI at the end of last year. Google has a product. Meta has a product, and there's been this debate about whether these models should be rolled out closed or open. And it seems like Meta is zigging as other folks have zagged.

So, at the end of July, Meta made what's been described as a pretty game-changing move in the world of AI, according to some folks, including Vox's Shirin Ghaffary. She was explaining that at a time when other leading AI companies, think Google or OpenAI, the creator of ChatGPT, are closely guarding their secret sauce, Meta decided to give away for free the code that powers its innovative new AI large language model, Llama 2. The name stands for Large Language Model Meta AI. This means other companies can use Meta's Llama 2 model to build their own customized chatbots, think their own ChatGPTs. Jacob, why is Meta giving away its AI model?

Jacob Bourne:

There are two main reasons. First, Meta's trying to position itself as the good guy in the AI race, and second, it's trying to advance its internal capabilities through crowdsourcing. When Meta open-sources the very powerful Llama 2, it's pretty much sending the message that it's democratizing generative AI rather than monopolizing it, so it's a bit of a PR strategy for Meta. Meanwhile, once this open-source model is out in the wild, AI developers from all over the world can build on top of it and improve it, which in turn could help Meta advance its own internal AI capabilities.

So, this might not be a strategy that Meta employs forever, but it's currently trying to set itself apart. There are a lot of pros and cons between open-source and proprietary models, but one of the biggest risks here is that the model could get into the wrong hands and bad actors could use it for malicious purposes. Right now, Meta has a policy in place for how it wants Llama 2 to be used, but enforcing that is another matter. Sending a nasty email will only do so much, and, of course, it could sue as well, but that's one area where we really don't know how things are going to play out.
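[To make "building on Llama 2" concrete, here's a minimal sketch, not anything from the episode, of how a developer might run Meta's open-source chat model with the Hugging Face transformers library. The model ID is Meta's 7B chat checkpoint on Hugging Face, which requires accepting Meta's license for access; the prompt and generation settings are illustrative.]

```python
# Minimal sketch: running Meta's open-source Llama 2 chat model locally via
# Hugging Face transformers. Assumes you have accepted Meta's license and
# been granted access to the "meta-llama" weights on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"  # smallest chat variant

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Llama 2's chat variants expect prompts wrapped in [INST] ... [/INST] tags.
prompt = "[INST] Suggest three names for a home-decor chatbot. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a short completion; sampling settings here are illustrative.
output_ids = model.generate(**inputs, max_new_tokens=128,
                            do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```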

Marcus Johnson:

Yeah, Gadjo, Ms. Ghaffary was suggesting in the piece that there's an important ethical debate over who should control AI and whether it can be made safe, and there are pros and cons here, as Jacob was just alluding to, with regard to open-source generative AI. The good: some leading experts think AI models that are open-sourced could become smarter and less ethically flawed overall, because they'll get more scrutiny for being more transparent. The bad: Ms. Ghaffary noted one of those bad-actor examples Jacob was referencing. Soon after Meta released its first Llama model strictly for research use in February, it leaked on the anything-goes online message board 4chan, where it was then used to create chatbots that spewed hateful content like racial slurs and scenes of graphic violence. Where do you land on the discussion surrounding Meta releasing their AI as open-source?

Gadjo Sevilla:

Yeah, I think in the case of Meta, they're coming from behind, so what they're looking for here is adoption. That's going to, they hope, drive their language model to become more of a general standard. More people will adopt it because it's open-source, and they can learn from that as well, from third-party developers. And definitely, there will be incidents where this will be misused. As we've seen in the past, Facebook, Meta, they're mostly reactive. They'll probably go back and say, "Well, we didn't intend for this to happen, but it happened. But the greater good is being served because here we are. We're giving our technology away for free."

Now, at some point, that could become a problem for them, but right now, I think their focus really is just showing the world that they have a toolset that can be easily accessed and used. And again, like Jacob said, whether or not they'll continue with the open-source model remains to be seen, but I think for the foreseeable future, they're being looked at as the Linux of AI, the open-source option. Whether or not they progress from that point depends, but in terms of their brand, I think that's what they're shooting for. And I think they're succeeding in that respect.

Marcus Johnson:

I liked this idea of thresholds that was in the piece. Even Sam Altman, the head of OpenAI, the company behind ChatGPT, has acknowledged the importance of allowing the open-source community to grow, while suggesting some kind of limit: when an AI model meets certain capability thresholds for performing specific tasks, its maker should be forced to get a license from the government.

All right, gents, that's it for the lead. Time now, of course, for the Half Time Report.

Gadjo, I'll start with you. Instead of asking for what's most worth repeating from the first half, in this segment I'm going to ask you: how can companies best prepare for the AI wave?

Gadjo Sevilla:

I think they need to undertake a competitive analysis: look at the landscape, see what moves their competition is making, and, at some point, hedge their bets and look at which areas they feel they can scale up on using AI the quickest, where small successes will lead to bigger successes. I think having the foresight to see where that's going could help companies understand their own path.

Marcus Johnson:

Jacob?

Jacob Bourne:

Yeah, well, in addition to that, I'd say, first of all, getting AI adoption strategically aligned with a company's core values is crucial. Beyond that, I think companies might want to consider investing in custom solutions, which can include proprietary AI models tailored to their specific needs. That could potentially provide better outcomes than the generic models their competitors might be using, because a model built on a company's own internal data can be fine-tuned toward its specific use cases. The downside, of course, is that it's a bigger financial investment to do it that way, but that approach also helps bypass issues like IP data leaks, loss of control, and security flaws.
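[As a rough illustration of the custom-model approach Jacob describes, here's a minimal sketch of fine-tuning a small open model on a company's own text using Hugging Face's Trainer. The base model, file path, and hyperparameters are hypothetical placeholders, not anything recommended in the episode.]

```python
# Minimal sketch: fine-tuning a small open model on internal company text so
# its outputs reflect company-specific use cases. The model choice, data file,
# and hyperparameters below are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL_ID = "gpt2"  # stand-in for whatever base model a company licenses

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Hypothetical internal corpus: one document per line in a plain text file.
dataset = load_dataset("text", data_files={"train": "internal_docs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # Causal LM objective: the collator copies input tokens as labels (mlm=False).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("tuned-model")
```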

Marcus Johnson:

Yeah, that's all we've got time for in the first half. Time for the second. Today, In Other News: placing labels on internet-connected devices and Apple's lip-reading technology.

Story one, "Many smart home devices will soon feature a label that helps folks figure out how secure these products actually are," writes Sam Sabin of Axios. She notes that the White House and the FCC, Federal Communications Commission, have just started the new U.S. cyber Trust Mark program, which will place a shield logo label on internet-connected devices that meet the U.S. government's cyber standards for Internet of Things, IoT products as early as next year. To get a stamp of approval, products will need to protect user's data, restrict access to the device's network to just the consumer, and be able to accept software updates among other things. But, Gadjo, the most interesting sentence in this article about internet-connected devices getting labels is what and why?

Gadjo Sevilla:

I think the most interesting sentence is that if all goes as planned, cybersecurity safety labels could start appearing on products and websites late next year, so we're starting to see consolidation across the smart home and cybersecurity product ecosystem. I think this will cause a divide between lower-cost products that don't carry safety labels and push up the value of more premium smart devices that have done the due diligence to meet the security requirements. If it's marketed properly, I think consumers will benefit from knowing the products they're investing in have already been vetted by regulatory bodies, so those products will see wider adoption even if they cost more.

Marcus Johnson:

Yeah, that's a great point. There's a quote in there that says, "Major manufacturers and retailers, including Amazon, Best Buy, Google, LG Electronics, Logitech, and Samsung pledged their support for the program." This matters because the program is voluntary. So, it could create that two-tiered system, Gadjo, that you just noted.

Story two: Jacob, in a recent article, you wrote that Apple's lip-reading technology could be a step toward artificial general intelligence (AGI), so basically, AI that is on par with human intelligence. You explain that Apple may be designing devices that use sensors to read lips, facial expressions, and head and neck movements, potentially enabling Siri to help predict speech and respond to commands without a microphone. But Jacob, the most important sentence in your article about Apple's lip-reading technology is what, and why?

Jacob Bourne:

Yeah, I think the most important sentence here reiterates what you said a bit: AI that can interpret human behaviors through body language could be a step in that direction, that direction being artificial general intelligence, or AGI. It's really important to stress that this might not be Apple's intention behind this patent application at all. It's very speculative. However, I think that AI that can read human body language means AI that can understand people better, and from an AI safety perspective, that could be a double-edged sword.

On the one hand, it could help endow AI with a kind of functional empathy that might help with alignment efforts, in other words, getting AI aligned with human interests more broadly. On the other hand, there's also concern that deeper insight into human behavior and cognition could increase the risk of manipulation by AI.

Marcus Johnson:

That's all we've got time for this episode. Thank you so much, gents, for hanging out today. Thank you to Jacob.

Jacob Bourne:

Thank you, Marcus. Thanks, Gadjo.

Marcus Johnson:

Thank you, Gadjo.

Gadjo Sevilla:

Thanks, everybody.

Marcus Johnson:

And thank you to Victoria, who edits the show, James, who copy-edits it, and Stuart, who runs the team, and thanks to everyone for listening in. We'll see you tomorrow, hopefully, for the Behind the Numbers Weekly Listen, an eMarketer podcast made possible by Awin, which we will be recording downtown, because this is goodbye from 11 Times Square.

"Behind the Numbers" Podcast