AI will ‘supercharge’ creativity but exacerbate existing privacy concerns, says VML exec

Generative AI will “supercharge our creativity,” said VML chief innovation officer Brian Yamada. He believes the tech will improve marketers’ ability to tell stories, but it will also raise new privacy concerns.

VML is using AI both as a visible feature in ads and as a behind-the-scenes tool. The agency worked with the Washington Lottery to build a tool that lets consumers envision themselves in the destinations they could visit if they won the lottery. Behind-the-scenes uses include generating headlines and segmenting audiences. The agency also worked with Virgin Voyages in 2023 to create “Jen AI,” which used Jennifer Lopez’s voice and image to customize cruise invitations.

VML believes marketing may soon be by machines, for machines. In a future where AI agents can not only search for products but also make purchases on a user’s behalf, marketers need to consider how they could influence those agents. Yamada calls this “M2M,” or “machine-to-machine,” marketing. An AI agent would search for paper towels, for example, and select the optimal one based on the user’s stated price, size, and other preferences. The M2M opportunity would be a paper towel brand finding a way to position itself as that optimal choice.
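To make the idea concrete, here is a minimal, hypothetical sketch of how a shopping agent might rank candidate products against a user’s stated preferences. This is not VML’s or any real agent’s implementation; the product fields, weights, and catalog are illustrative assumptions.

```python
# Hypothetical "M2M" selection sketch: an AI agent scores candidate products
# against a user's stated preferences. Fields and weights are assumptions.
from dataclasses import dataclass

@dataclass
class Product:
    brand: str
    price_per_roll: float   # dollars per roll
    sheets_per_roll: int
    rating: float           # 0-5 stars

def score(product: Product, max_price: float) -> float:
    """Higher is better; products over the user's budget are excluded."""
    if product.price_per_roll > max_price:
        return float("-inf")
    # Blend value (sheets per dollar) with review rating; weights are arbitrary.
    value = product.sheets_per_roll / product.price_per_roll
    return 0.7 * value + 0.3 * (product.rating * 20)

catalog = [
    Product("Brand A", 1.25, 140, 4.6),
    Product("Brand B", 0.99, 110, 4.2),
    Product("Brand C", 1.60, 180, 4.8),
]

# The agent picks the "optimal" choice for a $1.50-per-roll budget.
best = max(catalog, key=lambda p: score(p, max_price=1.50))
print(best.brand)
```

In this framing, the M2M question for a brand is which of these machine-readable signals (price, size, ratings, structured product data) it can influence so the agent’s scoring lands on its product.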

But both M2M marketing and contemporary uses of AI-fueled personalization require a lot of user information.

“Privacy is No. 1” when it comes to concerns about AI, said Yamada. AI could exacerbate existing consumer privacy concerns as Google deprecates third-party cookies and marketers rely on alternatives like first-party data and data clean rooms.

Yamada emphasized the need for contextual control over whom consumers share data with, as well as transparency about both the purpose and the timeframe of that sharing. Most people are uncomfortable sharing their personal contact info, location data, digital behaviors, demographics, or purchase data to help train AI models, according to a Q2 2023 study from Publishers Clearing House (PCH).

But that kind of data is necessary for personalized marketing via generative AI and, in the future, for AI agents, which could raise concerns among consumers and regulators.

Organizations face a different privacy risk with AI. Without proper parameters in place, employees risk giving away valuable information that AI models can “learn” and share with others. That means organizations need to be thoughtful about how they experiment with AI and what data they share.

  • Structured pilots and proofs of concept are the best approach to this experimentation, according to Yamada.
  • Yamada suggested limiting the time and depth of development so teams can evaluate whether AI experiments delivered the expected value.
  • If not, end the pilot and move on. Don’t continue using something that doesn’t work just for the sake of using AI.


"Behind the Numbers" Podcast