{"version":"1.0","type":"rich","provider_name":"Acast","provider_url":"https://acast.com","height":250,"width":700,"html":"<iframe src=\"https://embed.acast.com/$/61e878a1419a9b0013b27134/68bf931d0e15f0d455e09ff2?\" frameBorder=\"0\" width=\"700\" height=\"250\"></iframe>","title":"The State of AI Safety with Steven Adler","description":"<p>Steven Adler, former OpenAI safety researcher, author of Clear-Eyed AI on Substack, and independent AGI-readiness researcher, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law, to assess the current state of AI testing and evaluations. The two walk through Steven’s views on industry efforts to improve model testing and what he thinks regulators ought to know and do when it comes to preventing AI harms.</p><p>You can read Steven’s Substack here: <a href=\"https://stevenadler.substack.com/\" rel=\"noopener noreferrer\" target=\"_blank\">https://stevenadler.substack.com/</a></p><p>Thanks to Leo Wu for research assistance!</p>","author_name":"Lawfare & University of Texas Law School"}