On today's podcast episode, we discuss the good, the bad, and what's missing from President Biden's new AI safety executive order. "In Other News," we talk about the potential of Microsoft's AI Copilot and Elon Musk's new AI chatbot called Grok. Tune in to the discussion with our analysts Jacob Bourne and Gadjo Sevilla.
Subscribe to the “Behind the Numbers” podcast on Apple Podcasts, Spotify, Pandora, Stitcher, Podbean or wherever you listen to podcasts. Follow us on Instagram.
Episode Transcript:
Intro:
Stay on top of the biggest trends in digital marketing, advertising, and media with eMarketer Daily. This industry-leading daily newsletter is essential for decision makers who need to keep up with the latest news and analysis on the digital transformation of marketing. Packed with charts, facts, and actionable insights, eMarketer Daily delivers unparalleled intelligence to give strategic leaders an edge. Visit insiderintelligence.com/eDaily and sign up today.
Intro:
So far, it's been a completely market-driven sector in the US, and I think the areas of that order where there are some omissions are reflective of the fact that there's still a desire to have it very much be market driven, to keep the US at the forefront of this technology.
Marcus Johnson:
Hey gang, it's Thursday, November 16th. Gadjo, Jacob, and listeners, welcome to the Behind the Numbers Daily, an eMarketer podcast. I'm Marcus. Today, I'm joined by two people. Let's meet them. We start with one of our analysts on the Connectivity and Tech briefing, based on the left side of America. It's Jacob Bourne.
Jacob Bourne:
Hey Marcus, thanks for having me today.
Marcus Johnson:
Hey fella, of course. And we're also joined by one of our senior analysts on that very Connectivity and Tech briefing. He's based on the right-hand side of the country, and his name is Gadjo Sevilla.
Gadjo Sevilla:
Hey Marcus, happy to be back.
Marcus Johnson:
Hello, gents. Hello. So today's fact: how many senses do we have? Any guesses, gents?
Jacob Bourne:
Well, five officially, I guess.
Marcus Johnson:
I was going to say, if you say anything less than five, we have a problem.
Jacob Bourne:
Right? But certainly there's talk of the sixth sense and who knows, there might be others as well.
Marcus Johnson:
Yeah, basically, as long as you don't say below five, you can't really be wrong, because different people interpret them differently. Nine seems to be a pretty well-established number according to a lot of folks. In particular, Johns Hopkins University Press published an article by John M. Henshaw, professor and chair of the Department of Mechanical Engineering at the University of Tulsa, citing obviously sight, hearing, taste, smell, and touch, the ones we all know, and then a few others.
So thermoception is the sense of heat on your skin. Equilibrioception is our sense of balance, as you might have guessed. Nociception is the perception of pain from skin, joints, and organs, not from the brain, because the brain actually can't feel pain, which still blows my mind, but it's somehow true. And proprioception, which is body awareness.
So a lot of folks say nine; others argue that hunger is a sense, or that the sense of depth should count, depending on who you ask. I think we should count that feeling when you pick a line to join and then almost immediately know you've picked the slow one. Yeah, that should count.
Jacob Bourne:
That's a frequent sense that I exercise for sure.
Marcus Johnson:
You guys know, everyone listening knows what I'm talking about. If you don't know what I'm talking about, everyone who does know what I'm talking about hates you. Anyway, today's real topic: President Biden's AI safety executive order, the good, the bad, and the not there.
In today's episode, first in the lead, we'll cover what was in President Biden's AI executive order. Then for "In Other News," we'll discuss the impact Microsoft's AI Copilot will have and what to make of Elon Musk's new AI chatbot called Grok.
We start with the lead, so let's tuck into this, gents. [inaudible 00:03:36] McCallum and Zoe Kleinman of the BBC write that the White House has announced what it is calling the most significant actions ever taken by any government to advance the field of AI safety, which may or may not be fair given what other countries are doing in the field of AI safety. We'll come to what other countries are doing in a moment, but there's a 100-page executive order from President Joe Biden on AI safety, and it includes a lot. Some of the highlights: requiring the biggest AI developers to share safety test results with the federal government before releasing models to the public.
A second one here: protecting consumer privacy by creating agency guidelines to evaluate AI privacy techniques. And then a third: helping to stop AI algorithms from discriminating, and creating best practices on AI's role in the justice system. There are plenty of other measures in this short story of an executive order. Jacob, I'll start with you. What jumped out to you the most in terms of measures in this AI safety executive order, and why?
Jacob Bourne:
I mean, so the order is pretty broad. It covers a lot of ground, and it's maybe light on specifics in certain areas, but where it does get somewhat specific is that it calls out the healthcare and education sectors.
In healthcare, we know the industry has been a bullish adopter of AI; at the same time, because of the sensitive use cases, it's a risky venture in some ways. On the education front, students, of course, have also been adopters of AI, and there's concern about how they will learn if they're using ChatGPT to get their work done.
So I think what we'll probably see as generative AI proliferates across industries is that more industries will come up where there are concerns about potential negative outcomes, and then we'll see intervention around that, or proposed intervention. But I think the takeaway here is that in order to really address some of these difficult risks for these use cases, you really need a broad coalition. You need the government, you need AI companies, but you also need representatives from these sectors themselves to be at the table.
Marcus Johnson:
Gadjo, what jumped out to you?
Gadjo Sevilla:
So what jumped out to me was the fact that he's given actual timelines and there's some accountability. So agencies like the Federal Trade Commission have between 90 and 240 days to review and implement changes, and they're saying that they're going to form a White House AI Council to oversee federal AI activities. So it shows that there are tighter timelines as well as accountability, which I think should help prioritize regulation. It's not just a catchall; it's being subdivided into agencies.
Plus there's also the need to involve, like Jacob said, the industry itself. And given that most of the AI companies are based in the US, at least the more successful ones, I think they have a good chance of at least establishing this. Now, if you're considering competition among these companies, I think that's when it's going to get a little bit dicey, because of having to share information on the next model or innovation that's coming out. We're already seeing it now. They're very secretive, and I just think there's a limit to the extent of the involvement that you can expect from the companies themselves.
Jacob Bourne:
And they seem to be getting more secretive, not less, too, so that's the other thing.
Marcus Johnson:
So I want to zoom in on two things. You mentioned how this executive order asks different parts of the government to get to work on certain things. One of them is standardized AI tests: the executive order instructs the National Institute of Standards and Technology to come up with standardized tests to measure the performance and safety of AI models. Companies were designing their own tests, right? They've been grading their own homework, so to speak. Deepa Shivaram of NPR says technology companies currently do their own "red teaming," as it's called, I believe, of products, subjecting them to tests to find potential problems like disinformation or racism. But they'd now have to actually complete a test that was written by the government.
The other thing here as well is companies must label AI. So the order directs the Commerce Department to come up with guidance for watermarking AI-generated content, which is interesting because labeling isn't always that clear cut.
And Scott Rosenberg of Axios was pointing out that legislators, regulators, and ethicists are requiring labeling for AI-created work, but as AI use becomes more of a human-machine collaboration, labeling will lose its coherence and meaning. Members of Congress have introduced an AI Labeling Act that requires clear and conspicuous disclosure of AI-generated content across all media types.
But, as he says, what happens if it's a human-AI collaboration? How do you label that, right? Because there are going to be different degrees to which a human was involved. You could be involved 1% and the machine involved 99%, or vice versa: the machine was involved 1%, and you still would have to label it. I wonder if there's going to be different grading of the labeling according to the level of involvement of the AI. He also points out that every time you use any kind of autocomplete, a modern spell check, or a grammar check, you are using AI already. So what's missing from this piece, from this executive order? What do you guys think? Gadjo, I'll start with you. What's the most glaring omission from this AI safety executive order?
Gadjo Sevilla:
I think just more details on the body that's going to help regulate this from the top. I mean, they've mentioned agencies, but you're going to need unbiased AI experts working for the government who can put together the standards and qualify the testing moving forward. Without that, really, there's very little you can do. It's a highly specialized area. You're going to need experts who are really deep into the technology and where it's headed to be able to carry it off. Without that, I mean, it's just a good plan.
Marcus Johnson:
And the government's trying to hire more AI talent. They're pointing workers with AI expertise to AI.gov, where they can find relevant openings in the federal government. The problem is these positions probably won't pay as well as the big tech players who are making the AI models.
Jacob Bourne:
And the other issue is, well, where are these talented workers going to come from? Of course, they're coming from the tech industry, and so you have a regulatory capture issue where you can't actually get objective, neutral decision-making from these workers.
Marcus Johnson:
Jacob, how about for you? Any glaring omissions?
Jacob Bourne:
Well, I don't know if I'd call this a glaring omission, but I would like to see more on chips. I mean, we need to see some domestic regulation of advanced AI chips. That's how you get these advanced models; you need the chips. Of course, the US is not shy about enacting heavy export controls on Russia and other countries, but I think there needs to be more, I guess, recognition of the fact that bad actors in the US could also get a hold of these chips and create models for nefarious purposes. And I think regulating the chip industry domestically would also just be a really effective, and much easier, way of regulating the overall AI sector.
Marcus Johnson:
Yes, a great point. A few other things seem to be missing. No requirements to register for a license in order to train large AI models; that was a bit controversial. Secondly, it doesn't try to curb the use of copyrighted data in training AI models, something else the industry is trying to get its arms around. Another thing: companies don't have to disclose the size of their models and the methods used to train them. And then finally, [inaudible 00:11:30] of Emory University thinks the everyday risks with AI, things like AI hiring discrimination, don't get enough focus in this executive order.
I mean, this wasn't supposed to be a completely exhaustive, overly comprehensive list of everything to do with regulating AI. It's a start, perhaps, you could argue. And there's a fair amount in here, so there was of course going to be some stuff missing. But taking it in its entirety, gents, Jacob, how much does this executive order move the AI safety needle out of 10, in your opinion?
Jacob Bourne:
I'd say it's about a five. I mean, it's just the most we've seen from the US so far, so in that sense, it's a big deal. So far, it's been a completely market-driven sector in the US, and I think the areas of that order where there are some omissions are reflective of the fact that there's still a desire to have it very much be market driven, to keep the US at the forefront of this technology. And the other thing about it is it's an executive order, not a congressional action, and that limits its significance as well.
Marcus Johnson:
Good point. Yeah. Gadjo, we've been here before: in 2019, then-President Trump issued his own AI executive order, which admittedly was a lot, lot shorter. I think it was about six pages long, not a hundred like this one. But generative AI didn't exist the way it does today, so things are moving really fast. It's hard to take aim and shoot at something that's moving at this speed. What do you make of this AI safety executive order, and how much do you think it moves the needle out of 10?
Gadjo Sevilla:
I would give it a six out of 10, just because the accountability has been passed down from the White House to actual agencies who now have a timeline. So whether or not all of them succeed, there's still a pressing need to deliver on those promises, on those targets. So there'll definitely be more movement than there was four months ago, and I think by mid next year we should start to see some developments, at least on that front.
Marcus Johnson:
Yeah, one thing a lot of folks wonder about with any executive order is, will it have teeth? And Kevin Roose at the New York Times points out that these requirements will be enforced through the Defense Production Act, a 1950 law that gives the president broad authority to make US companies support efforts deemed important for national security. So maybe it will pack a bit more of a punch than other executive orders.
One thing here, though, is that AI regulation just isn't a priority for Americans, and particularly with an election coming up, you might not see the government focusing on it as much as it maybe needs to be focused on. Among 15 priorities, regulating the use of AI ranked 11th, with only one in four folks calling it a top priority; this was an Axios-Morning Consult poll. Americans just had other things to worry about: preventing a government shutdown, reducing the federal deficit, fixing healthcare, and stimulating the economy ranked 1, 2, 3, 4, and there are plenty of other things people are worrying about more than this. So maybe the government's not focusing on it as much for that reason.
Let's end, gents, by talking about one of the measures in this AI safety executive order, which said that the government was to work with international partners to implement AI standards around the world. How far ahead are other countries on AI safety rules?
Gadjo Sevilla:
I think you have to look at China. Currently, they lead in AI regulation. They've had rules outlined since 2017, and also everything runs through Beijing. They have what they call the Next Generation Artificial Intelligence Development Plan, which was established in 2017, long before any of the generative AI hit the market. But since Beijing proactively controls algorithms, and therefore AI, I think that's one end of the spectrum, where you have one government body with a unified mandate and total control over what does and does not go onto AI platforms.
Jacob Bourne:
Yeah, I mean, there are a number of countries that I would say are far ahead of the US, especially when you consider that the US is really leading on the commercial AI front, right? So Gadjo mentioned China; there's also the EU and the UK, and others too, like Japan and Singapore, that have rules in place.
I think, though, we have to remember that other countries, even though they might be ahead, are far from having this technology figured out from a regulatory standpoint. The EU, with its AI Act, could set a precedent globally, I think, for what really robust legislation could look like. But unfortunately, right now that has hit a wall. Negotiations have stalled because there are deep divides over how foundation models should be treated, such as potentially having stricter rules apply to more advanced models.
And so I think it underscores a big takeaway: this is really difficult, and it doesn't matter what country it's in. I mean, China maybe is a bit of an exception, but a lot of countries with a legislative process are going to struggle with some of the details. Yeah.
Marcus Johnson:
All right, gents, let's skip the halftime report and move straight to the second half of the show today.
In other news, Microsoft rolls out its AI Copilot to a lot more people, and what should we make of Elon Musk's new AI chatbot?
Story one. Gadjo, you recently wrote about how Microsoft is updating Windows 10 to equip it with AI Copilot, vastly expanding Microsoft's generative AI reach. Currently, 400 million Windows 11 users can access Copilot through a recent update; now Windows 10 users will get an AI Copilot button on the taskbar. Gadjo, what can Microsoft's AI Copilot do exactly, and how much does this rollout to Windows 10 users move the needle for the company's generative AI efforts?
Gadjo Sevilla:
So I think this is really huge for Microsoft. They've long wanted to get people off Windows 10 and onto newer PCs and Windows 11, but a lot of people haven't budged. The majority of users are still on Windows 10; there hasn't been a real reason for them to change, and they were hoping offering AI would be it. But I think they've had a change of heart, and now they're bringing AI Copilot back to Windows 10, which means they have a captive audience of 1 billion global users who can now interact with their AI models, some of which are powered by OpenAI's ChatGPT.
And I think from that they get a bigger user base, as well as a larger data set from which to build their AI, especially since it's focused on productivity and on businesses. So for them, it's a no-brainer. They already have those users, and if even a fraction of them adopt the technology, it's going to be a big step toward Microsoft owning that part of the consumer and business generative AI space, at least on devices, which is where the industry is going.
Marcus Johnson:
Story two. Jacob, you just wrote about xAI founder Elon Musk, who recently introduced Grok, the company's new AI chatbot that is undergoing testing. Mr. Musk explains that the bot is inspired by the novel The Hitchhiker's Guide to the Galaxy and is a very early beta product with a sarcastic, rebellious sense of humor. Great. You explained that, with capabilities somewhere between OpenAI's GPT-3.5 and GPT-4, Grok will soon be available to X Premium subscribers. But Jacob, what did you make of this new AI chatbot from xAI?
Jacob Bourne:
It's hard to fully say, because it's only available to a small number of users, a small pool of testers, at this point, really. But it's interesting that what Musk promised earlier, when he first founded the startup, was an artificial general intelligence that would unlock the mysteries of the universe, and then we get a sarcastic bot as the first product from the company. They could very well still be working on the universe-understanding AGI, but it's a little bit anticlimactic to have Grok as the first product.
I think what we can expect here is that there are going to be some strong reactions. Some people will really like Grok, the sarcasm and the edgy personality, and other people will be turned off by it. And I think it's kind of just reflective of the creator, Elon Musk himself, who tends to be kind of a polarizing figure.
Marcus Johnson:
That's a great point, mate. I thought there was a really interesting insight as well from [inaudible 00:20:27] of Axios, who said, "The characterization of Grok could add a political dimension to the AI market, with customers evaluating not just how accurate AI is, but also how much they like the politics of the answer."
Jacob Bourne:
Yeah, that's quite possible.
Marcus Johnson:
Yeah. That's all we've got time for this episode. Thank you so much to my guests, as always. Thank you to Jacob.
Jacob Bourne:
Thanks, Marcus. Thanks, Gadjo.
Marcus Johnson:
Thank you to Gadjo.
Gadjo Sevilla:
Marcus, Jacob, it's been a pleasure.
Marcus Johnson:
Yes, indeed. Thank you to Victoria, who edits the show; James, who copyedits it; Stewart, who runs the team; and Sophie, who does our social media. Thanks to everyone for listening in. We hope to see you tomorrow for the Behind the Numbers Weekly Listen, an eMarketer podcast.
First Published on Nov 16, 2023