Pam Samuelson on Copyright's Threat to Generative AI

By The Lawfare Institute

The only thing more impressive than the performance of generative AI systems like GPT-4 and Stable Diffusion is the sheer volume of training data that went into them. GPT was reportedly trained on, essentially, the entire Internet, while Stable Diffusion and other image-generation models rely on hundreds of millions, if not billions, of existing pieces of artwork. Much of this content is copyrighted, and the authors and artists whose work is being used to train these models, and whose livelihoods those models may threaten, are paying attention. A number of high-profile lawsuits are making their way through the courts, and the outcome of these cases could profoundly shape, and potentially even halt, progress in machine learning.

To explore these issues, Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Senior Editor at Lawfare, spoke with Pam Samuelson, the Richard M. Sherman Distinguished Professor of Law at the University of California, Berkeley and one of the pioneers in the study of digital copyright law. She has just published a new piece in the journal Science titled "Generative AI meets copyright," in which she analyzes the current litigation around generative AI and where it might lead.

Support this show http://supporter.acast.com/lawfare.


