The Daily: AI comes under fire, when we can expect to see GPT-5, and Netflix's game-controller app

On today's podcast episode, we discuss why the Federal Trade Commission is investigating ChatGPT-maker OpenAI; how publishers, content creators, and authors feel about generative AI; what the wrong kind of regulation looks like; and what AI rules we will likely see next. "In Other News," we talk about when we can expect to see GPT-5 and what to make of Netflix's newly launched game-controller app. Tune in to the discussion with our analysts Jacob Bourne and Gadjo Sevilla.

Subscribe to the “Behind the Numbers” podcast on Apple Podcasts, Spotify, Pandora, Stitcher, Podbean, or wherever you listen to podcasts. Follow us on Instagram.

Made possible by

Awin

Unlock unlimited opportunities that reach consumers everywhere with Awin's affiliate marketing platform. Choose which partners best match your marketing objectives. Control costs by defining how you pay them. And customize your program to mirror your unique goals. Consider Awin your full customer journey marketing partner to drive growth.

Episode Transcript:

Marcus Johnson:

This episode is made possible by Awin. Two-thirds of digital ad spend currently flows to the three big tech platforms: Google, Meta, and Amazon. But their auction-based ad models favor their own bottom line and inflate costs at a time when every single marketing dollar counts. Awin's affiliate partnerships platform offers a real alternative to big tech and puts you back in control of your ad spend. Want to find out how? Visit awin.com/emarketer to learn more.

Jacob Bourne:

I think we can completely rule out the notion that the government is going to be banning AI; it's not a reasonable concern. What is a reasonable concern, I think, is the rapid pace of AI advancement and the rapid pace of adoption across industries. There are a lot of unknowns about the risks that this could pose.

Marcus Johnson:

Hey gang, it's Monday, August 21st. Jacob, Gadjo and listeners, welcome to Behind the Numbers Daily, an eMarketer podcast, made possible by Awin. I'm Marcus. Today I'm joined by two folks on the connectivity and tech briefing. One of them is our senior analyst based in New York. It's Gadjo Sevilla.

Gadjo Sevilla:

Hey, Marcus. Hey, Jacob.

Marcus Johnson:

Hey, fella. And we also have one of our analysts on that very team, but based on the other coast. Out of California, it's Jacob Bourne.

Jacob Bourne:

Hi, Marcus. Hi, Gadjo. Thanks for having me.

Marcus Johnson:

Hey, chaps. So, today's fact: Nintendo was established as a playing card business. It was founded in 1889 by Fusajiro Yamauchi to produce handmade hanafuda. I might've said that wrong. I might've nailed it. Playing cards. Then, in the mid-1900s, the company licensed third-party card graphics, such as Disney characters, then expanded into toys, and now it's one of the biggest video game companies in the world. Is the N64's GoldenEye, the James Bond game, the best video game ever? There's an argument. There's an argument. That's all I'm saying. All right, folks, today's real topic: artificial intelligence comes under fire.

So, in today's episode, first, in the lead, we'll cover ChatGPT and AI. Then, in other news, we'll discuss when we can expect to see the more advanced GPT-5 and what's next for Netflix's gaming ambitions. But we start, of course, with the lead. We're talking about how AI has come under fire from various angles, and then we'll talk a bit about some of the rules we can expect to see, as well as rules we don't expect to see. And so, we'll start with the FTC. Cecilia Kang and Cade Metz of the New York Times note that the FTC, the Federal Trade Commission, has opened an investigation into ChatGPT maker OpenAI, according to the Washington Post, over whether the chatbot has harmed consumers through its collection of data and publication of false information on individuals.

FTC Chair Lina Khan said, "We've heard about reports where people's sensitive information is showing up in response to an inquiry from somebody else." The FTC's civil subpoena also cites a 2020 incident in which the company disclosed a bug that let users see information about other users' chats and some payment-related information. So, Jacob, the FTC coming after ChatGPT: big, medium, or little deal?

Jacob Bourne:

Yeah, I mean, it's certainly significant. This is the first example of the US government going after an AI company. However, I think it's really important to note that the FTC has had some recent setbacks in trying to exercise its authority over the tech industry. And so, just because it's going after OpenAI doesn't mean that anything will come of it. What is coming of it, I think, is that it's not great for OpenAI's reputation, and I think it represents this rising momentum of concern and complaints about AI companies in general.

Marcus Johnson:

Yeah, we've seen regulatory pressure mounting, right?

Jacob Bourne:

Mm-hmm.

Marcus Johnson:

Because we saw this happen overseas. In March, Italy's data protection authority banned ChatGPT, saying that OpenAI had unlawfully collected personal data from folks and didn't verify people's ages to protect minors. OpenAI made some changes, and ChatGPT was let back in. And then you've also had a push from different folks in the US. Back in March, the Center for AI and Digital Policy, an advocacy group pushing for the ethical use of technology, asked the FTC to block OpenAI from releasing new commercial versions of ChatGPT, citing concerns involving bias, disinformation, and security as well. We've seen multiple open letters asking for something to be done.

I mean, Gadjo, what do you think? It does seem quite surprising that the FTC is on top of this so quickly, because they normally take a long time to get around to regulating, or at least investigating, big tech, and they're opening this investigation less than a year after OpenAI introduced ChatGPT. Where do you land: big, medium, or little deal?

Gadjo Sevilla:

I think it's a big deal. And you're correct; sadly, most regulation is reactive rather than proactive. So, usually, something has to go wrong before it's given attention, which is usually too late. But there are a couple of factors to consider. A lot of these AI companies are based in the US, so I think there's additional pressure for the US government to regulate those companies, at least to enforce transparency and ensure that they're not crossing any lines. That said, the government is divided on AI regulation, and this was apparent in June, when the US-EU Trade and Technology Council gathering in Sweden showed that the US lacked a unified stance on the EU's proposed rules. So, it's still a little scrambled, although there's definitely pressure to show some signs of regulation, or at least that they're heading in that direction.

Marcus Johnson:

Yeah. Let's talk about some other stakeholders who aren't terribly happy with the direction of artificial intelligence, in particular generative AI, and with OpenAI as well as others; Meta is also named here. So, OpenAI has come under fire from publishers, content creators, and authors, three particular stakeholders saying, basically, "Stop using our work." For publishers, Adweek notes that the New York Times is updating its terms of service to stop AI scraping its content. For content creators, Chris Vallance of the BBC notes that US comedian Sarah Silverman is suing ChatGPT maker OpenAI and Meta Platforms, alleging her copyright has been infringed in the training of the firms' AI systems. And for authors, Chloe Veltman of NPR writes that thousands of authors have signed a letter urging AI companies like OpenAI and Meta to stop using their work without their permission, or without compensating them.

Gadjo, I'll start with you. Which of these do you think is most interesting: publishers, content creators, or authors? And how big of a deal do you think it is?

Gadjo Sevilla:

I think it's a big deal, because the data sets from content creators are what keep generative AI fresh and relevant. And for content creators, yeah, I can see them wanting to stop that; they see it as infringement. On the other hand, we have companies like Twitter and Reddit, which curate user-generated content, and they recently put up paywalls on their API access in order to monetize AI scraping. Again, this is not content that they created; it's content their users created, right? So, there's another layer to it. And I think as long as investment continues to pour into anything related to generative AI, publishers and content creators might want to see how they can get a piece of the AI pie.

Jacob Bourne:

Yeah, just to add to everything Gadjo said: I think each of these cases will be adjudicated in court, but I think we also need to look at the cumulative impact of it all. I mean, they're really starting to mount. This is on top of two class action lawsuits against Microsoft and OpenAI, and one against Google, for alleged violations of data handling and practice rules. So, in other words, it's not an isolated concern. It's becoming a widespread complaint that these companies are scraping data unlawfully, and then using it in potentially unlawful ways as well. That said, when we look at the New York Times in particular, forming a coalition with other major media publishers to potentially file a lawsuit against OpenAI, I think that is especially significant, because it's a very powerful coalition, and we can expect that they'll definitely be represented by a very high-powered legal team that will get a lot of attention.

Marcus Johnson:

Yeah, it's going to be fascinating to see where we land, where we net out here. Because Patrick Goold, a reader in law at City, University of London, told the BBC it was likely that some of these cases against AI would come down to whether training a large language model is a form of fair use or not. And fair use is such a gray area. I'm sure it's not really, but it can be in terms of its interpretation. It can be quite difficult to know exactly what fair use means in different cases. Daniel Gervais, though, a law professor at Vanderbilt, thinks that a market-negotiated agreement between rights holders and AI developers will arrive before any sort of legal standard. And Matteo Wong of The Atlantic notes that in the EU, artists and writers can opt out of having their work used to train AI, which could incentivize a deal that's in the interests of both artists and Silicon Valley. Any sense of which direction we're heading in, gents, and kind of a by-when? I mean, it's a tough question to ask, but is this a year out? Five years out?

Jacob Bourne:

Yeah. I mean, this could take a while to work itself out. I think we're looking at over a year to really get any kind of a sense of the direction it's heading in. But I would add that when we're thinking about fair use, it's not just the training; it's also the output. So, how a chatbot then spins the information it's trained on is crucial. I mean, on the one hand, it could spit out lies about individuals, and there are allegations that that has happened. On the other hand, spitting out information that's taken verbatim from news sites, for example, could be seen as an infringement. So, I think on both ends of the equation, there are issues.

Gadjo Sevilla:

Yeah, I wanted to add, too, that AI being a paid service is going to accelerate the guardrails. These are no longer freeware applications; you do need to pay to get the latest version. So, guardrails will need to be put in place, not just to ensure that AI isn't misused, but also to protect source data providers from liability.

Marcus Johnson:

Yeah. One group I want to return to is authors, because a recent Authors Guild survey asked writers what they think about AI, and 90% of the writers said they should be compensated for the use of their work in training AI. And per the NPR article, the median income for a full-time writer last year was $23,000, 23 grand, and that's down over 40% from 2009 to 2019. So, this is a big deal for a lot of people. It's not just, "Please don't use a bit of my work," but, "This is my livelihood."

Jacob Bourne:

Yeah. I mean, I think AI poses huge questions for the future of work in general. And for the authors in particular, this comes down in part to a technical question for AI developers: can they figure out a way to build AIs that can flag this type of copyrighted material, so that they then know who to compensate, for example?

Marcus Johnson:

Yeah. So, let's move to the final question here. We turn to a recent Vox article by Divyansh Kaushik, associate director for emerging technology and national security at the Federation of American Scientists, and Matt Korda, senior research associate and project manager for the Nuclear Information Project, also at the Federation. They suggest that panic about over-hyped AI risk could lead to the wrong kind of regulation; they think that the proliferation of a sensationalist narrative surrounding AI, fueled by interest, ignorance, and opportunism, threatens to derail essential discussions on AI governance and responsible implementation. So, gents, with that in mind, what does the wrong kind of AI regulation look like? If we rush this, and if people pay too much attention to the apocalyptic headlines, as the authors suggest, what could regulators end up doing wrong?

Jacob Bourne:

Yeah, I mean, I think especially in the US, where I anticipate regulation will be a bit weaker than in some other places, the worst kind of regulation is actually none, or regulation that doesn't come soon enough. I think we can completely rule out the notion that the government is going to ban AI; it's not a reasonable concern. What is a reasonable concern, I think, is the rapid pace of AI advancement and the rapid pace of adoption across industries. There are a lot of unknowns about the risks this could pose, including potential economic damage. One thing to be aware of: there are currently about 100,000 machine learning researchers worldwide, compared to about 300 AI safety researchers worldwide.

Marcus Johnson:

Wow.

Jacob Bourne:

So, that's a huge, huge mismatch, and it's a sign of a problem. And I think lawmakers, in addition to crafting policies that address near-term, medium-term, and long-term risks, also need to be funneling investment into AI safety research, not just AI advancement research.

Marcus Johnson:

Yeah. All right, gents, last question here. Let's move from what we don't expect to see, or what we shouldn't see, to what we should expect to see or would like to see. So, I'll take one from each of you. Gadjo, I'll start with you. What's one AI rule that you think we're most likely to see next?

Gadjo Sevilla:

I think we might see regulation around the quality of the results, possibly through enforcing better transparency of data sources. So, I think the onus will be on AI companies, more or less, to show how their systems work and to show that they're above board. And by doing that, they would need to walk regulators through their process: the sources, and how they collect and use data. And I think, at the very least, that could set the foundation for a better understanding of where regulation is headed.

Marcus Johnson:

I like this idea of licensing requirements. A Vox piece by Dylan Matthews notes that the FDA, the Food and Drug Administration, generally doesn't allow drugs on the market that haven't been tested for safety and effectiveness, and that has largely not been the case for software; there's no governmental safety testing. For example, before a new social media platform comes out, you don't need to do anything. So, he was suggesting maybe a similar agency could require pre-market approval before algorithms can be deployed in certain applications. I thought that was an interesting suggestion. It was part of a case made by an attorney called Andrew Tutt. What about for you, Jacob?

Jacob Bourne:

Yeah. Well, Marcus, I agree that licensing is probably something we're going to see, likely for the most advanced models rather than the more commonplace narrow models. But I think top of the wishlist here is requirements, especially for models used for sensitive or critical use cases, that those building or deploying them be able to explain how the models work and where the output is coming from. Now, the problem with this is that the technical capability doesn't yet exist. I mean, for advanced neural networks, there's pretty much no researcher in the world who can really give a full explanation. So, while we wait for that technical problem, that black box problem, to be solved, I think we're going to see a push for more accountability for, again, these sensitive and critical use cases. And I think what that will amount to is that, for certain things, there's going to be a requirement for human oversight. So, in other words, there's this push for automation, but there are certain things that you can't fully automate, because there's too much risk involved if the AI gets it wrong.

Marcus Johnson:

All right, folks, well, that's all we've got time for in the first half. We're going to skip the halftime report and go straight to "In Other News." Today, in other news: when can we expect to see GPT-5? And what's Netflix up to with gaming?

Story one. Jacob, you just wrote that OpenAI, the maker of ChatGPT, has filed for a GPT-5 trademark. This, you suggest, hints at artificial general intelligence, AGI, being on the horizon. AGI is the next step beyond generative AI, when artificial intelligence can not just generate content but also start to reason like a person. Jacob, the most important sentence in your article about OpenAI filing for a GPT-5 trademark is what, and why?

Jacob Bourne:

I think a good sentence that puts things into context is this: from OpenAI's perspective, others aren't slowing their pace, so why should it? The context here is that OpenAI was called out a few months ago in a petition letter led by Elon Musk, putting pressure on the industry, especially OpenAI, not to train models beyond GPT-4. Well, it turns out the letter didn't have much impact. Musk has since launched his own AI startup, xAI, and plans to build a superintelligent AI to take on OpenAI. So, in other words, there was no moratorium, and there likely will not be a moratorium. And OpenAI didn't sign the letter, I think, because it knows it doesn't have to. The reason it doesn't have to is that nobody else is ceasing their model development. Musk is working on it, Google is working on it, Meta is working on it, Amazon, others. So, there's no real reason for any one of these players not to continue to build more advanced models, because nothing requires it and nobody's really following through on that letter.

Marcus Johnson:

Yeah, the pace of change, I think, is fascinating. In the piece, you note that in March we learned that OpenAI expected GPT-5's training to be completed in December of this year, and that it thinks it will have AGI capabilities, which you'd said to me. Earlier this year, I was asking you about 4.5 and 5, and you'd given this very timeline.

Story two. Gadjo, you recently noted that Netflix just launched a game controller app on iOS and Android to serve as a cloud gaming conduit. You explained that offering a game controller, albeit a virtual one, on mobile app stores indicates that Netflix could be looking to test and launch its gaming service soon. But Gadjo, the most important sentence in your piece on Netflix and gaming is what, and why?

Gadjo Sevilla:

So, I think for Netflix, the most important sentence there is this: Netflix needs content for engagement while the writers' and actors' strike is ongoing. Pivoting into cloud gaming and game streaming is really a low-impact move for them, because they might be able to sustain the interest of subscribers, at least the small subset of them who are also casual gamers, and it won't interfere at all with the core streaming business. So, I think they might've put this on hold for a while, but now that they're not producing anything new, given the current state of the industry, gaming is something they can easily look to. Whether or not they're successful doesn't matter; they just need to get something started. And that's why I think they're approaching it through an app, rather than the way Google did with Stadia, which had actual hardware tied to it. So, either way, I think it's a win for Netflix. At least people are still talking about Netflix, and there's something to look forward to. And who knows? They might be able to attract some IP, some game studios, to actually create something unique for streaming.

Marcus Johnson:

Yeah. Well, streaming, in a lot of things but particularly in gaming, is certainly the future. You had a chart in your piece that jumped out at me: when game development professionals were asked which gaming platforms will grow the most by 2025, the largest share, 40%, said streaming, followed by mobile at 24%, with metaverse and console further down the list. And secondly, there's money to be made. Revenue from cloud gaming is expected to cross the $4 billion mark in 2023, this year, up over 60% year over year, according to Gambling Insider. Huge number, huge growth. And that's all we've got time for this episode. Thank you so much to my guests. Thank you to Jacob.

Jacob Bourne:

All right. Thank you, Marcus. Thank you, Gadjo. It's been a pleasure.

Marcus Johnson:

Yes indeed. Thank you to Gadjo.

Gadjo Sevilla:

Happy to be here, Marcus. Thanks again, Jacob. Talk to you soon.

Marcus Johnson:

Yes, indeed. Thank you, of course, to Victoria who edits the show, James, who copy edits it, and Stuart who runs the team. Thanks to everyone for listening in. We will see you tomorrow with the Behind the Numbers Daily, an eMarketer podcast, made possible by Awin.

"Behind the Numbers" Podcast