{"version":"1.0","type":"rich","provider_name":"Acast","provider_url":"https://acast.com","height":250,"width":700,"html":"<iframe src=\"https://embed.acast.com/$/60baafd7d3cdd0001b29d9ee/68c0a92c0806683f0a3ce64c?\" frameBorder=\"0\" width=\"700\" height=\"250\"></iframe>","title":"Scaling Laws: The State of AI Safety with Steven Adler","thumbnail_width":200,"thumbnail_height":200,"thumbnail_url":"https://open-images.acast.com/shows/60baafd7d3cdd0001b29d9ee/1757456640041-c1e3887d-f79c-4dfa-b5eb-75a8f51ef524.jpeg?height=200","description":"<p>Steven Adler, former OpenAI safety researcher, author of <a href=\"https://stevenadler.substack.com/\" rel=\"noopener noreferrer\" target=\"_blank\">Clear-Eyed AI</a> on Substack, and independent AGI-readiness researcher, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and Senior Fellow at <em>Lawfare</em>, to assess the current state of AI testing and evaluations. The two walk through Steven’s views on industry efforts to improve model testing and what he thinks regulators ought to know and do when it comes to preventing AI harms.</p><p>Thanks to Leo Wu for research assistance!</p><p>Find <em>Scaling Laws</em> on the <a href=\"https://www.lawfaremedia.org/podcasts-multimedia/podcast/scaling-laws\" rel=\"noopener noreferrer\" target=\"_blank\"><em>Lawfare</em> website</a>, and <a href=\"https://shows.acast.com/arbiters-of-truth\" rel=\"noopener noreferrer\" target=\"_blank\">subscribe</a> to never miss an episode.</p>","author_name":"The Lawfare Institute"}