By Alexia R. Roney and Matthew F. Boyer
Previously, we introduced you to ChatGPT and the concept of an AI chatbot application here. This week, we discuss the legal exposure that comes hand-in-hand with the internet – copyright infringement, 17 U.S.C. § 501, and the Digital Millennium Copyright Act (“DMCA”), 17 U.S.C. §§ 1201-1205. ChatGPT and its brethren applications are not immune to copyright concerns; to the contrary, they may exponentially increase the liability exposure of their developers and users.
The key to understanding the potential for copyright infringement lies in how an AI chatbot or similar application “learns” to respond to the user. In very simplified terms, the OpenAI team trained ChatGPT on reams of human-created written data, scraped in part from Common Crawl, WebText2, Wikipedia, and other internet sources. These sources are publicly accessible, but what if the training data includes copyrighted material? A newly filed class-action lawsuit involving a separate AI application, Stable Diffusion, addresses that exact question: Sarah Anderson, et al. v. Stability AI Ltd., et al., U.S. District Court, Northern District of California, Civil Action No. 3:23-cv-00201 (filed January 13, 2023). The link to the case is here.
Stable Diffusion is like ChatGPT, but for images. Give the AI application a text prompt, and it will generate an image rather than a written response. To train Stable Diffusion, its developers scraped images from across the internet – 100 terabytes of them. The database of images used is public, allowing the plaintiff artists in Sarah Anderson, et al. to confirm that their specific copyrighted art had been scraped and used to train Stable Diffusion. The plaintiff artists contend that they never licensed or permitted Stability AI, Inc. or Stability AI, Ltd., the developers of Stable Diffusion, to use their art.
The plaintiff artists, however, do not allege only that their art was wrongfully used; they further allege that Stable Diffusion uses their art to create new images. The images produced by Stable Diffusion are alleged to be a collage or amalgamation of images and artwork created by humans, including copyrighted material. Thus, according to the complaint, not only has Stable Diffusion violated the plaintiff artists’ copyrights, but so have several tech companies that have built software using Stable Diffusion for profit. The defendants allegedly violated copyright themselves and enabled those using their applications to produce faked artwork that violates copyright. The plaintiff artists accordingly assert claims of direct copyright infringement, 17 U.S.C. §§ 106, et seq., vicarious copyright infringement, 17 U.S.C. §§ 106, et seq., violation of the DMCA, 17 U.S.C. §§ 1201-1205, and several other state and federal claims. The plaintiff artists seek statutory and other damages, attorneys’ fees, and injunctive relief.
If the court in Sarah Anderson, et al. adopts the plaintiff artists’ theory, every time a user obtains an image for their business or brand from Stable Diffusion, or from another application that uses Stable Diffusion, the user might face an accusation of copyright infringement. Imagine it: every social media influencer who tries to drive more traffic to their own account by posting an AI-created image might be accused of a copyright violation each time they do so. (Click here.) Similarly, any business owner using an AI art generator – as an alternative to stock or custom art – to get the exact image they want for a display or advertisement may also face an allegation of copyright infringement. And a single image could be alleged to violate multiple copyrights on the theory that the image is an amalgamation of a dozen (or more) copyrighted works. This is why using an AI generator is arguably different from simply scooping someone’s copyrighted image from a website. In light of these new issues and causes of action arising out of AI-created works, insurers need to be aware of the potential exposure to their insureds and the coverages potentially afforded under their policies.
For more information on this topic, contact Alexia R. Roney, Matthew F. Boyer, or your local FMG attorney.