Behind the Numbers: Using AI at Work: Part 2—How companies are using AI and tips for employees on how best to use it

On today’s podcast episode, we discuss why people might become more worried about using AI at work, why they might become less worried, and how significant an impact artificial intelligence has had on jobs already. Join Senior Director of Podcasts and host Marcus Johnson, Senior Vice President Henry Powderly, and Senior Analyst Gadjo Sevilla, for the conversation. Listen everywhere and watch on YouTube and Spotify.

Subscribe to the “Behind the Numbers” podcast on Apple Podcasts, Spotify, Pandora, Stitcher, YouTube, Podbean or wherever you listen to podcasts. Follow us on Instagram.

Podcast Transcript:

Marcus Johnson (00:00):

In a rapidly changing market, speed to insight is everything. AI Search is the newest feature on EMARKETER PRO+. It helps you streamline research, delivering context-driven answers in seconds. No more endless searching. Just relevant insights to power your strategy. Stay ahead with AI Search. Why wouldn't you? Exclusively on PRO+. Learn more on our website, emarketer.com.

(00:28):

Hey, gang. It's Friday, April 4th. Henry, Gadjo and listeners, welcome to Behind the Numbers, an EMARKETER video podcast. I'm Marcus. And today we are discussing more about using AI at work. This is part two of our two-part episode. Part one was on Monday, if you want to go check that one out. For today's conversation, I'm joined again by the same folks. We have with us our senior analyst covering AI and all things technology. He lives in New York. It's Gadjo Sevilla.

Gadjo Sevilla (00:53):

Hey. Happy to be back.

Marcus Johnson (00:55):

Hey, fella. And we also have our SVP of media content and strategy. He lives up in Maine. It's Henry Powderly.

Henry Powderly (01:00):

Hey, Marcus.

Marcus Johnson (01:01):

Hello, sir. Today's fact, we start there, how long would it take you to run around the world? Any guesses?

Henry Powderly (01:12):

It's not like a trick question?

Marcus Johnson (01:15):

I know. It should be, but someone actually did this. So Englishman Kevin Carr did it. Just amazing. It took him 20 months. That's a long time, Kevin. And that's at a decent pace as well. It would take most humans most of their life basically to run around the world. What a trip though. The greatest distance run in 24 hours is close to 200 miles, which is what, nine marathons? That was by Lithuanian Aleksandr Sorokin in August of 2021. He did the run in Poland. Dean Karnazes holds the unofficial record for the longest run without sleep, 350 miles, which he ran over three and a half days back in 2005.

Gadjo Sevilla (02:13):

Without sleep?

Marcus Johnson (02:13):

That can't be good for you.

Gadjo Sevilla (02:16):

No. No. No. No.

Marcus Johnson (02:18):

Dean. Who's... If someone close to me said, "Marcus, I'm thinking about running for three days straight," I'd be like, "Don't. Okay? I don't care about the records. Stop it." Oh, my goodness. Remarkable.

Henry Powderly (02:34):

Three days. Wow.

Marcus Johnson (02:35):

Yeah. I'm exhausted already. It's barely midday. Three days. Trying to stay awake is tough. You kept running. Well played, Dean. Anyway, today's real topic, Using AI at Work part two, how companies are using it and some tips on how best to use it in the workplace. All right, gents. For this one, let's start here. So Stuart, who runs the team, sent me this article and it was from Grace Harmon who writes for... She's a part of our tech team, basically and...

Gadjo Sevilla (03:07):

Yeah. She's an analyst for the AI tech briefing. And that was a great piece that she put together. Yeah.

Marcus Johnson (03:12):

Fantastic piece, right?

Gadjo Sevilla (03:14):

Yeah.

Marcus Johnson (03:14):

41% of C-suite executives said adopting gen AI is tearing their company apart and creating power struggles. This is what she wrote, and it's according to Writer's 2025 survey. It found 31% of employees admit to sabotaging their company's gen AI strategy, with even higher shares of younger Gen Z and millennial folks doing so. One in 10 workers is tampering with performance metrics to make it seem like AI is underperforming. "Why?" you may be asking. Grace explains that the main reasons for obstructing gen AI strategies included AI's risk of diminishing their value and creativity (33%), fears about AI taking over their job (28%), and a bigger workload (24%). Gadjo, I'll start with you. Are we heading towards an employee AI backlash?

Gadjo Sevilla (04:03):

I mean, I wouldn't say it's-

Marcus Johnson (04:05):

Maybe.

Gadjo Sevilla (04:05):

I wouldn't say it's wide scale, but possibly in certain situations, certain industries, I could see that happening, especially, like we said in the previous episode, if AI implementation is done a bit carelessly, saying, "Oh, we need to adopt. We need to innovate." But at the same time, employees are left to their own devices pretty much. Like this story that Grace put up says, 49% of executives said employees are left on their own to figure out gen AI. I can see that being a point of friction, because all of a sudden, over and above what they're doing, figuring out AI and how it makes the company better becomes something that they need to be taking care of as well. Right.

Marcus Johnson (05:03):

Yeah. Yeah. I mean, Henry, we talked about this a bit in the last episode, about how it seems like a lot of tech executives, the people building the AI, aren't trying to sugarcoat this. Matteo Wong of The Atlantic wrote that as early as 2016, OpenAI CEO and co-founder Sam Altman said that as technology continues to eliminate traditional jobs, new economic models might be necessary, such as universal basic income (UBI). He has warned repeatedly since then that AI will disrupt the labor market, telling Mr. Wong's colleague Ross Andersen of The Atlantic in 2023, "Jobs are definitely going to go away. Full stop." AI employee backlash, how likely?

Henry Powderly (05:47):

I mean, I think it's likely and clearly happening based on this survey. And I think it's the responsibility of leadership in this example to change the perception, because if the C-suite executives in this survey feel like it's tearing their company apart, they're obviously making it a priority while at the same time not giving any resources out to their teams to figure this stuff out. So that's going to leave people feeling, A, that they're just being forced out or that eventually their role will be replaced by an AI. And that could be the case in some positions, but in many it is not. I think a lot of this is driven by a need to get more efficient, more nimble, more creative.

(06:32):

I think over the past few years, companies have been asked to do a lot more with less. And one of the ways I look at gen AI is as perhaps giving companies a chance to reclaim some of that mental load for the thinking, for the strategizing, for the developing. And I think that if they could tell that story more clearly to their employees, there'd be a lot less backlash, especially if they also empowered them with resources to learn how to use all of these things.

Marcus Johnson (07:04):

Yes.

Gadjo Sevilla (07:05):

I agree with that. I mean, I think the narrative should be that it's a tool that can help augment but not replace your employees. And using AI for things like support or just doing the more menial, time-sucking tasks, that could make a big difference in an eight-hour workday, right?

Marcus Johnson (07:33):

Yeah. Yeah.

Gadjo Sevilla (07:34):

If applied properly again. Right?

Marcus Johnson (07:37):

Yeah. Saying, "We want to take this off your plate," and then saying, "So we can have you do this other stuff," as opposed to, "We want to take this off your plate," and people are looking around thinking, "Okay. Well, then what am I supposed to do?" There does seem like to be a chasm. There does seem like there is a chasm between how employees view AI adoption, how they think it's going at their company versus how the C-suite executives see things. There's a Writer's Survey we were citing them in the first episode on Monday. They found 70% of executives felt their company's approach to AI had been strategic, successful and that the business was AI literate. That number falls from 70 to closer to 40% for how employees view their business's AI adoption. So a 30 percentage point chasm between how companies think it's going, higher level people, and then the people on the ground.

(08:28):

And so Henry, it does feel as though AI use at work could end up feeling a bit ad hoc. I was thinking this a few days before seeing these numbers, actually. I was thinking, "Don't businesses need to outline a clear AI strategy?" and then two days later found this Writer survey, and in it, they had 90% of execs saying their company has an AI strategy. That number falls to 57% when employees were asked. So a big part of this has to be just confronting this thing head on. It's kind of like we were talking about in another episode about kids using AI for homework. You can't ignore it. You have to address it and say, "Okay. If you're going to use it, this is the right way to use it. This is the wrong way to use it," not just, "Let's hope that they don't use it. Let's ignore the elephant in the room."

Henry Powderly (09:17):

Yeah. I love that. And I also think the other question is, are they communicating what they're going to do as a company when they realize what they're trying to gain by using AI? If it's time savings, what are we going to do with that 30%? I mean, I could think of three positions on my team I would love to hire for in order to grow and do different things, but we're all working under our own budgets and our own realities. Not to be idealistic, because of course one side of efficiency is unfortunately right-sizing a business, but on the other hand, it can be investing in and building the business. And so I think the communication has to be as clear as possible.

Marcus Johnson (09:57):

Yeah. So let's talk about how businesses and their employees can figure out where best to use AI. Erik Brynjolfsson, professor and senior fellow at the Stanford Institute for Human-Centered AI, was saying there's always this difficulty of translating even the most amazing, or maybe especially the most amazing, technologies into productivity and business value, calling it the productivity J-curve because it sometimes gets even worse before it gets better. He was saying we saw it with electricity, the steam engine, early computers. We're seeing it now. The real challenge, the bottleneck, is figuring out how to identify business value. So Gadjo, I'll start with you. How do you identify business value when it comes to figuring out where to inject AI into the company?

Gadjo Sevilla (10:45):

I would try and match AI solutions to certain outcomes. So you're trying to solve problems, whether it's cost reduction, which you could do through automation maybe, cutting down on repetitive workflows, things like invoice processing and data encoding, or revenue growth through AI-driven personalization, dynamic pricing, chatbots for your sales and marketing. Clearly, you're trying to solve for specific problems, and sometimes it's a situation where you could have many types of solutions that fit, but AI is just a convenient, measurable, and available tool that can quickly show you that it's working. Right?

Marcus Johnson (11:44):

Yeah. Yeah. Henry, how about for you? Where do folks start when they're looking at this? Do they just write down on the whiteboard a list of all the problems that they're having and then figure out, "Okay. Let's prioritize them. Let's rank them. Start with number one and then go from there. How do we find the tool to fix the problem?"

Henry Powderly (12:03):

I mean, I think that's one way, but when you're talking about business value, you're talking about money. And so I think that's where you need to look. You look at it from two sides. What can you do that's making you money that gen AI is going to help you accelerate? And what is costing you money that generative AI can help you reduce? On the product side, I think it's, what more can you make? By personalizing all of your messaging, do you get an X amount of lift in conversion rate? And what does that translate into for cost per acquisition? I think there's a lot of those equations that need to be worked out, but I think that's where you start, because the bottom line is the bottom line: we're using these tools in order to run more efficient and more profitable businesses.
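(A quick illustration of the kind of equation Henry is describing. This is a hypothetical back-of-the-envelope sketch in Python; the spend, traffic, and lift figures are invented for the example, not numbers cited in the episode or by EMARKETER.)

```python
# Hypothetical numbers only: how a conversion-rate lift from personalization
# translates into cost per acquisition (CPA).
monthly_ad_spend = 50_000        # dollars spent driving traffic (invented figure)
visitors = 100_000               # paid visitors per month (invented figure)

baseline_cvr = 0.02              # 2% conversion rate before personalization
lifted_cvr = baseline_cvr * 1.2  # assume a 20% relative lift (invented figure)

baseline_cpa = monthly_ad_spend / (visitors * baseline_cvr)  # $25.00
lifted_cpa = monthly_ad_spend / (visitors * lifted_cvr)      # about $20.83

print(f"Baseline CPA: ${baseline_cpa:.2f}")
print(f"CPA with personalization lift: ${lifted_cpa:.2f}")
print(f"Savings per acquisition: ${baseline_cpa - lifted_cpa:.2f}")
```

(Swap in your own spend, traffic, and measured lift; the point is simply that a measured conversion-rate lift maps directly to a lower cost per acquisition.)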

Marcus Johnson (12:51):

One thing that is kind of a bit of a paradox, or maybe a bit of cognitive dissonance: on the one hand, people are being told, "Use AI. It's making things faster, more efficient, better, improving the quality," et cetera, but then they're being told, "Slow down. These tools aren't perfectly accurate. And you can't trust everything that you're getting from the answers from these things because they're hallucinating," which is when they make up answers when they can't find the actual one.

(13:27):

There was a new study from Columbia Journalism Review's Tow Center for Digital Journalism, and it found serious accuracy issues with gen AI models used for news searches. This is from an article by Benj Edwards of Ars Technica. He was explaining that the researchers tested eight AI-driven search tools by providing direct excerpts from real news articles and asking the models to identify each article's original headline, publisher, publication date and URL. And they discovered that the AI models incorrectly cited sources in over 60% of these queries, raising significant concerns about their reliability in correctly attributing news content. Henry, I mean, what'd you make of this new study about AI model inaccuracies? And how do people get around this kind of industry-wide issue?

Henry Powderly (14:19):

Yeah. I mean, I'm not surprised that this is what they saw when looking at news specifically, because when you're talking about news online, it is a completely different ecosystem than informational queries, the things that Wikipedia or content marketing sites are going to show up for. The news landscape is full of small players and scrapers. I mean, I think the study even cited how often Yahoo News was the source, when it was just aggregating the original source of the news.

(14:51):

So I think the language models already have a challenge when it comes to discerning the most authoritative news sources for these queries, and at the same time, a lot of the top publishers that perhaps have the most trust and authority are blocking these crawlers in their robots.txt files. Even though the study did note that they found some instances where the crawlers were going around it, I just think it's a really complex environment, and I'm not surprised that the language models are struggling with it. And it's more problematic because the language model doesn't say, "I don't know," when it's confused. It makes up an answer. And I think that was one of the problems that they noted in the study as well.
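(For context on the robots.txt point: publishers block AI crawlers by listing their user agents, such as OpenAI's GPTBot, under Disallow rules. Below is a minimal Python sketch, using only the standard library, that checks whether a given crawler is allowed to fetch a page; the publisher domain is a placeholder, not a site from the Tow Center study.)

```python
# Check whether specific crawlers may fetch a page, per the site's robots.txt.
# The domain below is a placeholder; swap in a real publisher to test.
from urllib.robotparser import RobotFileParser

robots = RobotFileParser()
robots.set_url("https://example-publisher.com/robots.txt")
robots.read()  # downloads and parses the robots.txt file

article = "https://example-publisher.com/news/some-article"
for crawler in ["GPTBot", "PerplexityBot", "Googlebot"]:
    status = "allowed" if robots.can_fetch(crawler, article) else "blocked"
    print(f"{crawler}: {status}")
```

(As Henry notes, the study found some tools retrieved blocked content anyway; a Disallow rule is a request, not an enforcement mechanism.)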

Marcus Johnson (15:34):

Why is that? Why can't these models just say, "I don't know"?

Gadjo Sevilla (15:38):

I think they're just programmed to have answers and solutions and rather than saying, "No. I don't know that," or, "I can't act that," they'll give you something that's less than useful [inaudible 00:15:55].

Marcus Johnson (15:54):

I mean, that's people though, isn't it? I feel like in conversations with people, it's very rare that when you ask someone something, they say, "I don't know." They'll confidently just say an answer and then you run with it, or they'll talk around an answer and try to figure it out in real time. And these things are designed by people. So maybe it's just a reflection of society, that people don't ever say to you, "I don't know."

Gadjo Sevilla (16:16):

Yeah. And you know what? Even voice assistants hit that wall. If you talk to Google or [inaudible 00:16:25], if they don't know, they'll just say, "I don't know, but I found this on the web." In other words, "Figure it out, because this is all I have." But I guess the language models don't have that built into them, so they need to reason an answer. As with anything, data readiness is a huge factor there. Are they using clean, structured data or just basically rehashed aggregated news, which in itself is problematic, right?

Marcus Johnson (17:01):

Yeah. Let's end the conversation with some tips for using AI at work. I'll start with two from Alex Fitzpatrick of Axios, who recently outlined five in an article. I'll give you two of them. One, be specific. The more precise you can be with your request, the better the outcome. And then number two, follow up. He says, "If your AI's first output is off the mark, try follow-up requests with instructions for improvement, and again, be specific." Two from him. Henry, I'll go to you first. What two tips would you offer on how best to use AI at work?

Henry Powderly (17:40):

One of the things I've been really experimenting with, and it's been helping me a lot, is using audio as the interface. And that means recording myself. So somebody wants me to write them a proposal. Rather than just opening a blank page and starting to type away, what I'm starting to do now is just hit Record in Apple Notes, talk for 30 minutes and talk out all of my ideas, transcribe the whole thing, and then give that transcript to something like Claude, query it, and use that to come up with the ultimate proposal. I've been using that for longer-form things. I've used it for writing a newsletter. I find that it is a huge time saver, and it really just takes that blank page syndrome out of the equation and lets you go. So that's tip number one. I think that's a really interesting one.

Marcus Johnson (18:32):

And that's something that people say you should do when you're speaking to a person as well. They're like, "Get your ideas out there. Talk them out. Talk them through." So that seems [inaudible 00:18:40].

Henry Powderly (18:40):

Yeah. And not everybody can do that by typing. I mean, there's a bit more... It's more imposing to be staring at that blank page, but I found that just recording yourself is great. I work from home, so it makes it a little bit easier for me to do that than if I were sitting in an office surrounded by a lot of people, but-

Marcus Johnson (18:56):

Gadjo, don't even think about it. Yeah.

Henry Powderly (18:59):

And then my second tip is I've been using Claude Styles a lot more. So if you use Claude, you can train it to write or respond in a certain way. You can give it past examples of your writing, past examples of reports, or any kind of example that you want to emulate, and it does a really good job of helping you come up with a style standard. It's much easier to get work out of it that really feels like you, like the example I gave with the audio recording. The first time I fed that into the AI and asked it, "Give me a memo based on my transcript," it sounded very robotic and very not me, but by training it on some of the pieces that I'd written over time and really getting it to home in on my voice, it does a much better job now. So Claude Styles have been really helpful too.
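(Henry is describing the Styles feature in the claude.ai interface. For anyone working with Claude through the API instead, a rough approximation of the same idea is to seed the system prompt with your past writing. The sketch below uses the Anthropic Python SDK; the file names are placeholders, the model name may need updating, and it assumes an ANTHROPIC_API_KEY is set in your environment.)

```python
# Rough API-side approximation of the "write in my voice" idea Henry describes.
# This is not the claude.ai Styles feature itself; it just seeds the system
# prompt with past writing samples. File names and model name are placeholders.
import anthropic

samples = open("my_past_memos.txt").read()             # placeholder: your own writing
transcript = open("voice_note_transcript.txt").read()  # placeholder: the dictated notes

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # swap in whichever model you have access to
    max_tokens=1500,
    system=(
        "Match the tone, structure, and voice of these writing samples "
        "when drafting new documents:\n\n" + samples
    ),
    messages=[{
        "role": "user",
        "content": "Draft a one-page proposal memo based on this transcript:\n\n" + transcript,
    }],
)
print(message.content[0].text)
```

(It won't reproduce the Styles feature exactly, but the effect Henry describes, output that sounds more like you the more samples you feed it, relies on the same general mechanism.)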

Marcus Johnson (19:54):

Very good. Gadjo, what do you have for us?

Gadjo Sevilla (19:57):

Yeah. So mine are more general, but I think they can be applied to a lot of situations. So I think when piloting AI tools, you should start small. Just use it on one team, one department. That way it's easier to measure what works and what doesn't, and then that can be replicated. And I know we've done this at EMARKETER as well. We have pilot projects that give us good feedback, and most of the kinks are worked out by the time it's rolled out to larger groups. Also, find ways to measure success. AI can be so nebulous, but you really want to know how it's helping, what the benefits are. So you could use time saved, perhaps, for certain tasks. A big one would be error reduction. If you can manage to tailor AI so that it helps in those areas, then that's something you can bring back to your manager or your board and say, "Look, this is working. Let's do more of it." Right?

Marcus Johnson (21:10):

Yeah.

Gadjo Sevilla (21:11):

So those are mine.

Marcus Johnson (21:13):

Very nice. The two I'll note quickly, again from Mr. Fitzpatrick's article, but I think they're really, really good. One of them is, "Check its work." There is a disclaimer at the bottom of all of these models, especially the free ones, saying it's AI, it's a work in progress. But still, they make stuff up. So spot-check. Make sure you fact-check all that stuff. And then secondly, I thought this was interesting. He says, "Be polite." And he was like, "No. Really. Researchers have found that using words like please and thank you improves AI chatbots' performance." So yeah, another good tip there. That's all we have time for in today's episode. Thank you so, so much to my two guests for hanging out with me at the start of the week and the end of the week. Thank you to Gadjo.

Gadjo Sevilla (21:59):

Thanks again.

Marcus Johnson (22:00):

Yes, sir. Thank you to Henry.

Henry Powderly (22:01):

Thank you.

Marcus Johnson (22:03):

Absolutely. Thanks to the whole editing crew, Victoria, John and Danny, not Lance, because I asked him to help me with my new camera and he ghosted me for a week. Unbelievable. Thanks to Stuart though, who runs the team, and Sophie, who does our social media. Thanks to [inaudible 00:22:16] story. Thanks to everyone for listening in to the Behind the Numbers show, that's not true, an EMARKETER podcast, an EMARKETER video podcast at that. We'll see you again on Monday, hopefully. Happiest of weekends.