On today’s podcast episode, we discuss the impact of California not signing a controversial AI bill into law, what to watch for next in terms of state or federal AI rules, and how OpenAI is evolving after some high profile departures and a pivot towards a for-profit business model. Join host Marcus Johnson, along with analysts Jacob Bourne and Grace Harmon, for the conversation.
Subscribe to the "Behind the Numbers" podcast on Apple Podcasts, Spotify, Pandora, Stitcher, YouTube, Podbean or wherever you listen to podcasts. Follow us on Instagram.
TikTok for Business is a global platform designed to give brands and marketers the solutions to be creative storytellers and meaningfully engage with the TikTok community. With solutions that can deliver seamlessly across every marketing touchpoint, TikTok for Business offers brands an opportunity for rich storytelling through a portfolio of full-screen video formats that appear natively within the user experience. Visit tiktok.com/business for more information.
Episode Transcript:
Marcus Johnson (00:00):
This episode is brought to you by TikTok for Business. Are you ready to rizz up your brand? Well, you're in luck. TikTok is where discovery drives outcomes, and three in four users say they will buy from a brand they've seen on the platform. Learn more at tiktok.com/business.
Grace Harmon (00:23):
Well, I think Newsom was saying that he vetoed the bill because it went too far, but also not far enough. This was a really divisive bill. This was something that ended up having a lot of really, really big companies on the other side of the table. So I do think there's a lot of politics at play here rather than just simply tech concerns.
Marcus Johnson (00:44):
Hey, gang. It's Monday. No, oh no. Thursday, October 3rd. Grace, Jacob, and listeners, welcome to the Behind the Numbers Daily, an eMarketer podcast made possible by TikTok. That's off to a flying start. I'm Marcus. I'm joined by two folks. Let's meet them. We start with a technology analyst based in California. We call him Jacob Bourne.
Jacob Bourne (01:08):
Hey, Marcus. Thanks for having me today.
Marcus Johnson (01:10):
Hey, fella. Of course. Thank you for being here. We also have with us our analyst who writes for our technology and AI briefings based in California as well. It's Grace Harmon.
Grace Harmon (01:22):
Hey, it's Grace here with you guys.
Marcus Johnson (01:22):
Hello. So today's fact is where we begin. This one I picked because you guys both live in California. So $100 ... You probably know this better than anyone. $100 is worth 26% less in California than in Arkansas.
Jacob Bourne (01:38):
Just 26% less?
Marcus Johnson (01:42):
That is indeed the right question, Jacob. So GOBankingRates looked at the purchasing power of $100 by state, factoring in things like income, food, utilities, taxes, housing, and transportation. It was cited in a Visual Capitalist piece by Bruno Venditti. It found that California has the lowest purchasing power in the whole country at $88, while Arkansas has the highest at $113. So put another way, $100 in California would be worth $126 in Arkansas.
Jacob Bourne (02:13):
This is like the least surprising fun fact you've done, Marcus, I think.
Grace Harmon (02:18):
I wonder what your purchasing power would be at Whole Foods or Erewhon then.
Jacob Bourne (02:21):
Right, if you're going to some place actually that's already overpriced.
Marcus Johnson (02:25):
Yeah, low. It would be bad. I think this means that I'm moving to Arkansas as well, because there are also just three million people in the whole state, which is my kind of leg room.
Jacob Bourne (02:37):
Right.
Marcus Johnson (02:37):
Time to move there, guys. Okay, so I've got the top five states with the highest purchasing power. Arkansas, Mississippi, Alabama, South Dakota, or Iowa. Any interest?
Jacob Bourne (02:48):
Always visit first, I say, before you make a big [inaudible 00:02:51].
Marcus Johnson (02:50):
Okay. Diplomatic. Okay, yeah, fair. Grace?
Grace Harmon (02:54):
I'd go to South Dakota.
Marcus Johnson (02:55):
Okay. Hello. There you go. Victoria, who edits the show, from New Jersey: your $100 gets you $91 worth of stuff in the state. The fifth worst. New York's right behind that. Fort Smith, Arkansas, I'm coming back. Good coffee shops. Shout out to Bricktown Brewery. Showed me a good time with my $100. I felt like a millionaire. That's generous. Anyway, today's real topic, why California just vetoed an AI safety law and what's next for OpenAI?
(03:28):
So, "California Governor Gavin Newsom has vetoed the AI safety bill that divided Silicon Valley and would have been the country's strictest AI safety law," writes Bobby Allyn of NPR, noting that California legislators had overwhelmingly passed the bill called SB 1047, which was seen as a potential blueprint for national AI legislation.
(03:49):
If passed, the new law would have required advanced AI models to undergo safety testing, made tech companies legally liable for harms caused by their AI models, required them to create a kill switch in the event that the AI was misused or went rogue, and also offered whistleblower protections for tech workers, among other things.
(04:09):
In a statement after vetoing the new law, Mr. Newsom said, "The bill focused too much on the biggest and most powerful AI models," saying, "Smaller upstarts could prove to be equally or even more dangerous than the models targeted by SB 1047 at the potential expense of curtailing the innovation that fuels advancement in favor of the public good."
(04:29):
Grace, I'll come to you first because you wrote about this for us when it made its way to Gavin Newsom's desk earlier in September; we were discussing it then. Is it true that smaller startups could be equally, if not more, dangerous than the AI tech giants, which is something Gavin Newsom said? What was your take on this not making it into law?
Grace Harmon (04:50):
Well, I think Newsom was saying that he vetoed the bill because it went too far, but also not far enough: it would've potentially really hindered innovation, but it also didn't apply broad enough restrictions on a broad enough range of companies. If you're thinking about some of the companies that would be falling above the threshold of who this affects, OpenAI spent over $100 million to develop GPT-4, and Google spent ... I think it was about $191 million to develop its Gemini model. So there are a lot of very powerful, very wealthy companies that were falling on the other side of the bill.
(05:23):
In terms of smaller startups being equally, if not more dangerous, I don't think they're more dangerous. I think equally dangerous could be argued. But there's nothing inherent about not spending $100 million on a model that makes it more likely to be dangerous. I don't think that argument necessarily makes a lot of sense to me. Equally dangerous could make sense.
(05:42):
But the reason for the bill being shot down really is, from my perspective, a lot of heavy lobbying and then also just fears about industry impact. Newsom, he does have some presidential ambitions, so I think that you have to consider that.
(05:54):
This was a really divisive bill. This was something that ended up having a lot of really, really big companies on the other side of the table. So I do think there's a lot of politics at play here rather than just simply tech concerns.
Jacob Bourne (06:07):
Yeah, I mean I agree this is almost purely political. I think ... And Newsom's argument here is correct, but only in the most technical sense.
Grace Harmon (06:14):
Yes.
Jacob Bourne (06:14):
Yeah, you could have a small-team, bootstrapped startup that makes a tiny tweak to a model architecture that has a huge unexpected effect, and they make a major breakthrough. But really, it's likely going to come from the major AI tech companies that have the money to pay the talent and the money to buy all the chips you need to make a frontier model.
(06:36):
And so, it begs the question, would you want to just not regulate anything because there's the unlikely chance that a small startup might make something powerful? It doesn't make a whole lot of sense. Really, it's a technical argument that's rooted in political positioning.
Marcus Johnson (06:52):
Yeah, because on the politics side of it, part of the reason this was surprising was because it got through the necessary stages to become law with very strong support. So it passed the Senate 32 to 1 in May, and then the California State Assembly by 48 to 16 in August. And so, yeah, part of it does seem like there's politics and money at play.
Jacob Bourne (07:13):
It had strong public support too, at least in polling.
Marcus Johnson (07:17):
Yeah, there was one poll I found that was designed by both supporters and opponents of the bill. Californians backed the legislation by 54% to 28% after they heard arguments from both sides.
(07:30):
But, yeah, on the politics side, there's a Vox article pointing out that, "The $43 billion venture capital giant Andreessen Horowitz hired Newsom's close friend and Democratic operative Jason Kinney," it writes, "to lobby against the bill, and a number of powerful Democrats, including eight members of the US House from California and former Speaker Nancy Pelosi, urged a veto, echoing talking points from the tech industry."
(07:54):
Then Politico, our sister company, reported that Pelosi opposed the bill because she's trying to court tech VCs for her daughter, who's likely to run against Scott Wiener for a House of Representatives seat.
(08:04):
So it seems like there's a lot going on behind the scenes, and it's going to be really hard to know exactly why. Part of the concern here, though, Grace, and you brought this up, is that Mr. Newsom was worried about potentially driving 32 of the world's 50 leading AI firms, which are headquartered in California, to other states with less stringent AI regulation.
Grace Harmon (08:24):
Well, this bill wasn't going to make the companies that are based here stop operating in California altogether. For some of these companies that were incorporated here, that function out of here, that have their founders and their employees all here, I don't think that they were just going to completely get up and leave.
(08:41):
I think the one thing that could have been at risk is the functioning of these models in California. We didn't see this bill get passed, so I don't know exactly how it would've been implemented. But if you look at some of the regions that have implemented stricter AI rules, like the EU, the way some companies have reacted is by just deprecating features for those users.
(09:01):
Meta and Apple both did that in the EU. So if this had passed, I think what you could have seen is a scaling back on the consumer side, of those models being available here. Jacob, I don't know if you would have any opinion on that, but that was what I was thinking: more of a scaling back. It wouldn't just be that we would lose every single AI company; it's that California consumers probably would lose some access.
Marcus Johnson (09:23):
Interesting.
Jacob Bourne (09:24):
Yeah. I mean California is a very expensive place to run a business, and there's a lot of regulation in general. So tech companies already have a lot of motivation to leave, but they don't because there's such a robust ecosystem around innovation and just a talent pool that's in California.
(09:41):
I think the bill's author, Scott Wiener, made a great point that the bill would apply to any company doing business in California. It doesn't matter where they're located. So moving isn't going to help you if you still want to do business in a state that is one of the biggest economies in the world.
Marcus Johnson (10:02):
So, a quick pushback for folks saying that, oh, it should have been passed, so why wasn't it? The devil's advocate side of it is: are people making too big of a deal out of this bill not making it through, not being signed by Gavin Newsom? The reason I say that is because, in Mr. Newsom's defense, and we talked about this before as well, he has signed 17 other AI bills into law in the past few weeks, one that cracks down on the spread of deepfakes during elections, another that protects actors against their likenesses being replicated by AI without their consent. So is it fair to kill Mr. Newsom over this one? Because he's signed quite a lot.
Jacob Bourne (10:38):
I mean I think the thing to really know about this bill is that, unlike those other bills that were passed, this one is really targeting fears about an AI model that doesn't yet exist but could exist very soon, and it's anybody's guess when it's going to be developed: the kind of AI model that could do broad, sweeping, catastrophic damage in the billions of dollars to an entire economy, for example, or just really harm society on a deeper level.
(11:06):
The tricky thing is, well, how do you effectively regulate something that doesn't exist yet? I think that's one of the arguments made against this bill. But then the flip side of that is, okay, do you want to wait till disaster strikes and then say, "Well, we should have done something, but we didn't"? So I think that's really the thing that separates this piece of legislation from other AI legislation.
Marcus Johnson (11:30):
Okay.
Grace Harmon (11:30):
Or wait till disaster strikes and then realize that you didn't have the infrastructure set up to know who to sue.
Marcus Johnson (11:36):
Right, right.
Jacob Bourne (11:36):
Yeah.
Marcus Johnson (11:37):
And so, sticking with legislation for a second here, Mr. Allyn of NPR notes that, "As billions of dollars pour into the development of AI and as it permeates more corners of everyday life, lawmakers in Washington still have not advanced a single piece of federal legislation to protect people from its potential harms, nor to provide oversight of its rapid development." So, Grace, what should we be watching for next in terms of state or federal AI legislation in the country?
Grace Harmon (12:07):
Oh, I guess sticking with California, I do think that we're going to see another attempt. I was reading ... I think that Newsom's working with Fei-Fei Li, who is the "godmother of AI," to develop new legislation that might come into play next year, that might be a little less controversial, a little less divisive, and might have more buy-in from different companies and from researchers. But I think that Newsom had noted that it would not be insane to just have California-only legislation, because the AI issues are so much more relevant here than in some other states.
Marcus Johnson (12:42):
Jacob, the same for you? A focus on California?
Jacob Bourne (12:44):
Yeah, I mean I think California will pass something. It makes sense. So many of the major companies are located in California. It's a state that tends to take bold regulatory action in general.
(12:55):
But I think, overall, what we're going to see is just a patchwork of laws at the state level. I'm not holding my breath for congressional action anytime soon. What we might see at the federal level is more executive orders, maybe FTC investigations into particular companies. It depends on who's in the White House.
(13:16):
I think it is a hard thing to legislate because it's something technical and it's a newer technology, generative AI. But I certainly think that California is going to try to be at the forefront of passing something in terms of broad AI regulation. It's just going to be fairly scaled back from what the current SB 1047 was.
Marcus Johnson (13:36):
Well, let's turn our attention for the remainder of the episode to OpenAI, the maker of ChatGPT. Karen Hao of The Atlantic writes that it was recently announced that Mira Murati, OpenAI's "chief technology officer and the most important leader at the company besides Mr. Altman," she says, "is departing along with two other crucial executives": Chief Research Officer Bob McGrew and VP of Research Barret Zoph, who was instrumental in launching ChatGPT and GPT-4o. So those folks are on their way out of OpenAI. Jacob, what'd you make of these resignations?
Jacob Bourne (14:19):
I mean I think it's pretty significant that these top leaders keep jumping ship, and it's been more than just this ... This has been happening all year. I think a big impetus for these departures is concern over OpenAI's approach to AI safety, a worry that it's prioritizing profits over safety at the same time as it's being very outspoken about its desire to build a superintelligence. And so, those two things aren't sitting right with some of the people who started earlier on in OpenAI's run, when it was still just a nonprofit and less profit-focused than it is now.
(15:03):
It's also just a concern around optics. OpenAI's had a lot of drama with Altman's firing and then rehiring. When one person goes, then other people start to question whether they should, too. There's also rising competition, so that means there's more opportunities to go elsewhere or even start your own company. There's still plenty of funding going into generative AI, so there's a lot of other opportunities.
Marcus Johnson (15:27):
It seems like a lot of the people who have left have decided to do just that: Anthropic was founded by former members of OpenAI. Elon Musk, who has xAI, was a founding member of OpenAI. Ilya Sutskever has Safe Superintelligence, and he was a founding member of OpenAI. So it does seem like OpenAI keeps having employees leave for one reason or another and then basically create competitors, which, I guess, is maybe how a lot of spaces develop.
Jacob Bourne (15:27):
Right.
Grace Harmon (15:52):
Well, I think at the time when AI really got introduced broadly, OpenAI really was the name that came to mind. As more and more companies pop up, and Jacob just said it, there are more opportunities elsewhere. So it isn't as much the case now that, if you're a tech expert who is incredibly skilled with AI, OpenAI is the place to be. Maybe it still is, but there are also a lot more emerging companies.
Marcus Johnson (15:52):
Good point.
Grace Harmon (16:15):
Like you said, some of these people are starting their own companies that are doing quite well.
Jacob Bourne (16:18):
Yeah. But it is something to watch, though, because this fight over AI talent is fierce. It's so important in terms of being at the top of the sector, and there aren't really a whole lot of people with the right skills to develop these frontier models. So the fact that OpenAI has been losing so many is certainly significant.
Marcus Johnson (16:37):
Yeah. Of the 13 people who helped found it in 2015, only three are left. I thought The Economist had an interesting note here. They were saying that Sam Altman's, and he's the CEO and a founding member as well, "failure to retain top executives may be a red flag." One longtime Silicon Valley observer, they said, was saying that the sense of upheaval looks similar to that of Uber, the ride-hailing people, in the days when it was led by Travis Kalanick: "Phenomenal product, rotten culture," which is not an image OpenAI's potential investors will savor.
(17:14):
We've touched on this already, but the profit/nonprofit side of OpenAI. Multiple news outlets, including Reuters and Bloomberg, are reporting that "OpenAI is planning to turn away from its nonprofit origins and become a for-profit company by next year, potentially valued at $150 billion. That'll be double its last valuation. CEO Sam Altman would receive 7% equity in the new arrangement." Grace, what do you make of this announcement that OpenAI is planning to convert from a nonprofit org to a for-profit one?
Grace Harmon (17:44):
Well, it would make OpenAI operate more like a typical startup, and it could lose some governance control and become more susceptible to investor demands. It already dissolved its Superalignment team. It also could become a bit more stringent: OpenAI has faced some copyright lawsuits for data scraping, or for taking publishers' information to put into ChatGPT, and those lawsuits can be pretty costly. So if profit becomes more openly a main focus, then I could see some more guardrails being put in place to ensure that investors aren't losing their dollars to missteps.
(18:20):
But we're talking about one of the most popular, prominent, and powerful GenAI chatbots out there in the market. So throwing more money into the mix openly could be risky, but I also think that, to a degree, OpenAI has internally been functioning for profit already.
Jacob Bourne (18:43):
Yeah, and I think that's the main point. It's been functioning that way. And so, I think having this dual nonprofit/for-profit structure has actually mostly just created friction for the company. So actually I think it makes sense to change it. If they do go the route of the public benefit corporation, that could make it so that investors have a bit less influence than they otherwise would.
(19:05):
Now the change is controversial at the same time because it started exclusively as a nonprofit, and then it added the for-profit arm subsequent to that. So I think for some, this seems like just another step in OpenAI selling out to profit over AI safety, especially when it started with a humanistic mission. So some affiliates of OpenAI are unhappy with that, including, I think, some of the people who are departing.
Grace Harmon (19:32):
Yeah, I agree with that. I mean, I think the other word I would bring up, which you already did, Jacob, is optics: keeping that nonprofit title.
Jacob Bourne (19:39):
Right, yeah.
Marcus Johnson (19:39):
So, yeah, it seems as though, like you guys have been saying, they've been operating this way for a while. In 2019, OpenAI created the for-profit arm because costs were going up and up very, very quickly, and it's easier to raise money from venture capital than it is through charitable donations. The New York Times was saying that, annually, it collects $3 billion in sales but spends double that. And so, it has to try to figure out a way to stem that tide.
(20:01):
But it does feel like more than that, doesn't it? I mean it feels like, as you were saying, Jacob, a lot of the investors aren't happy with this. It does feel like a fundamental mission shift because OpenAI was founded as a nonprofit with a mission to ensure that artificial general intelligence, AGI, would benefit all of humanity. That's when AI meets or exceeds human capabilities.
(20:21):
And so, is this not now a big departure from that by becoming majority for-profit? Because they are still going to have the nonprofit portion, but the ratio of profit to nonprofit has swung from 80-20 in favor of nonprofit to 80-20 in favor of profit, or at least it will by next year.
Jacob Bourne (20:38):
Yeah, I mean I think the original change from just a nonprofit to this hybrid nonprofit/for-profit was in and of itself a big change. I think the accusation there is that, well, it's really just about the profit. Of course, the reason is that if you're going to build an AGI, you need funding. How are you going to get that type of investment if you're operating as a nonprofit?
(21:01):
Again, this has led to friction. And so, why not just become a for-profit and lose the nonprofit arm? Which I think makes sense, but you can see why the optics aren't great and why some people who had been invested in the idea that OpenAI was a nonprofit from the beginning aren't happy.
Marcus Johnson (21:20):
Yeah. It feels like it's no coincidence that former OpenAI co-founder Ilya Sutskever has created an AI startup called Safe Superintelligence, whose mission, it sounds like, is exactly what OpenAI originally intended to do, and they're preparing to fill the void that OpenAI seems to be leaving behind.
Jacob Bourne (21:36):
Yeah.
Grace Harmon (21:36):
Mm-hmm.
Marcus Johnson (21:37):
All right, folks. That's where we have to leave today's episode. Thank you so much for hanging out with me today. Thank you to Jacob.
Jacob Bourne (21:42):
Pleasure to be here.
Marcus Johnson (21:43):
Thank you to Grace.
Grace Harmon (21:44):
Thank you both. Nice talking with you.
Marcus Johnson (21:45):
Yes indeed. Thank you to Victoria who edits the show, Stuart who runs the team, Sophie who does our social media. Thanks to everyone for listening in.
(21:52):
Quick reminder that the transcripts for all of our episodes live in the show notes in case you ever need them. We hope to see you tomorrow, though, for the Behind the Numbers Weekly Listen, an eMarketer video podcast made possible by TikTok, which you can watch on YouTube or Spotify, or simply listen to the usual way.