The news: The European Commission is proposing a rule that would require AI companies to disclose any copyrighted material that was used to train a generative AI.
- The change is part of a broader AI Act, which assigns risk levels to AI products ranging from “limited” to “unacceptable” — the latter covering uses like surveillance or the spread of misinformation.
Finally, some rules: Generative AI has been perhaps the buzziest AI technology outside of chatbots in recent months. But while consumers tend to treat it as a toy, legal ambiguity around how generative AI models are trained has landed companies in trouble and stifled corporate use.
- While it may be fun to speculate about what “Star Wars” would look like if it were directed by Wes Anderson, it’s not so simple. The material used to train the AI to recognize what “Star Wars” and Anderson’s style look like could be copyrighted, and rights holders argue that using it constitutes theft.
- Something like the above example could be defended as artistic interpretation. But other cases are more black and white: Getty Images sued Stability AI (creator of Stable Diffusion) for allegedly training its AI on Getty’s bank of copyrighted images, sidestepping the licensing service through which Getty provides photos to AI companies.
- The US Copyright Office has issued recommendations on when users of generative AI are protected by fair use, but ambiguity and fear of legal action persist. That’s why, despite the tech’s clear cost-saving potential in creative advertising, brands and agencies have avoided using it for creative and client work.
Our take: The EU’s legal definitions could stifle generative AI’s growth, but they would also provide much-needed standards that let advertisers experiment with the technology. What’s unclear is how exactly these rules will interact with upcoming generative AI advertising products from Meta and Google, which are pressing on with the tech despite the lack of legal clarity.