{"version":"1.0","type":"rich","provider_name":"Acast","provider_url":"https://acast.com","height":250,"width":700,"html":"<iframe src=\"https://embed.acast.com/$/60518a52f69aa815d2dba41c/6440674e4561b10011e384e5?\" frameBorder=\"0\" width=\"700\" height=\"250\"></iframe>","title":"Margot Kaminski on Regulating AI Risks","thumbnail_width":200,"thumbnail_height":200,"thumbnail_url":"https://open-images.acast.com/shows/60518a52f69aa815d2dba41c/show-cover.png?height=200","description":"<p>In the last few months we've seen an explosion of new AI products, especially those built around large language models. And in response, we've also heard calls for far more aggressive government regulation.&nbsp;But what does it mean to regulate AI?</p><p>Margot Kaminski is an Associate Professor of Law at University of Colorado Law School. She's just published a paper for <em>Lawfare</em>'s ongoing Digital Social Contract research paper series, in which she argues that the emerging law of artificial intelligence is converging around risk regulation. Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Senior Editor at <em>Lawfare</em>, spoke with Margot about what risk regulation means in the AI context and why she thinks that it has some serious drawbacks.</p>","author_name":"The Lawfare Institute"}