Large Language Negligence

By The Lawfare Institute

As large language models like ChatGPT play an increasingly important role in our society, there will no doubt be examples of them causing harm. Lawsuits have already been filed in cases where LLMs have made false statements about individuals, but what about run-of-the-mill negligence cases? What happens when an LLM provides faulty medical advice or causes extreme emotional distress?

A forthcoming symposium in the Journal of Free Speech Law tackles these questions, and Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Senior Editor at Lawfare, spoke with three of the symposium's contributors from the University of Arizona and the University of Florida: law professors Jane Bambauer and Derek Bambauer, and computer scientist Mihai Surdeanu. Jane's paper focuses on what it means for an LLM to breach its duty of care, and Derek and Mihai explore the conditions under which the output of LLMs may be shielded from liability by that all-important internet statute, Section 230.

Support this show http://supporter.acast.com/lawfare.


Hosted on Acast. See acast.com/privacy for more information.
