The news: The 2024 US election put a spotlight on AI’s potential to mislead users through hallucinations and to help bad actors spread disinformation.
Zooming out: Platform owners made significant investments in election protection.
A real threat: Research from Microsoft showed that Russia, China, and Iran accelerated cyber interference attempts shortly before the November US presidential election.
A greater problem came from deepfakes of political figures and the failure of content filters to flag misleading or false information.
What were the stakes? 46% of adults ages 18 to 29 use social media as their main source for political and election news, per the Pew Research Center. However, only 9% of people older than 16 are confident in their ability to spot a deepfake within their feeds, per Ofcom.
In some cases, it was the chatbot itself that generated false information, rather than a user prompting it to create misinformation. In September, xAI’s Grok chatbot briefly responded to election-related questions with incorrect information about ballot deadlines.
Our take: Now that the election has come and gone, it’s unclear if social media platforms will continue to place such a sharp focus on content moderation.
TikTok is already replacing human moderators with automated systems, and if more platform owners cut safety teams to offset AI development costs, AI-generated misinformation could become a persistent risk for users.
This article is part of EMARKETER’s client-only subscription Briefings—daily newsletters authored by industry analysts who are experts in marketing, advertising, media, and tech trends.