The news: AI-generated content could account for as much as 90% of online information by 2026, per a study by Europol.
Why it’s worth watching: The coming deluge of AI-generated content will make disinformation harder to curb, while also creating opportunities for human-generated content.
Europol’s study was released last year, months before the surge in usage of generative AI tools like OpenAI’s ChatGPT and DALL-E, Google’s Bard, Midjourney, and others.
The problem: The surge in AI-generated content has outpaced efforts to standardize labels that distinguish information created with an AI assist from human-generated content.
The opportunity: The coming deluge of AI-generated content will put a premium on reliable, human-crafted content.
Our take: Stricter content guardrails and proper labeling of AI-generated content should come hand in hand with industry adoption of new technologies—a challenge that falls on AI companies, government regulators, and content providers.
Dive deeper: For more on how generative AI is changing content, read our report on ChatGPT and Generative AI in the Creator Economy.