Scaling Laws


Latest episode

  • Why AI Needs Independent Auditors, with Miles Brundage

    53:06|
    Alan Rozenshtein, research director at Lawfare, spoke with Miles Brundage, founding executive director of the AI Verification and Evaluation Research Institute (AVERI) and former senior advisor for AGI readiness at OpenAI, about the state of AI safety and accountability and AVERI's vision for independent third-party auditing of frontier AI companies. The conversation covered the weaknesses of current AI regulations, including California's SB 53 and New York's RAISE Act; why Brundage left OpenAI to build an independent nonprofit; AVERI's case for shifting the unit of analysis from individual AI models to the organizations that build them; the "Volkswagen problem" of deception-proofing safety evaluations; a framework of AI Assurance Levels ranging from baseline transparency to treaty-grade verification; the limitations of safety benchmarks and the BenchRisk project's findings; market-based mechanisms for driving audit adoption, including insurance, procurement, and investor pressure; and how AVERI navigates the tension between proximity to industry and independence from it.
    Mentioned in this episode:
      "Frontier AI Auditing: Toward Rigorous Third-Party Assessment of Safety and Security Practices at Leading AI Companies," AVERI, 2026
      "Risk Management for Mitigating Benchmark Failure Modes: BenchRisk," NeurIPS 2025
      "Why I'm Leaving OpenAI and What I'm Doing Next," Miles Brundage, Substack, October 2024

More episodes


  • Productivity Boom? Labor Shock? Google's Chief Economist on AI

    50:50|
    Fabien Curto Millet, Chief Economist at Google, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Fellow at the Abundance Institute, and Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, to discuss the potential of AI to catalyze a productivity boom while also addressing labor market instability. The three dive into likely changes in AI capabilities as well as ongoing reasons for slow organizational adoption of AI. Finally, they close with a brief discussion of potential policy approaches. 
  • Abundance & AI? Nicholas Bagley Explains

    43:50|
    Nicholas Bagley, Professor of Law at Michigan Law, joins Kevin Frazier, Director of the AI Innovation and Law Program at the University of Texas School of Law and a Senior Fellow at the Abundance Institute, for a live recording of the podcast in Ann Arbor. Thanks to Graham Hardig and Brinson Elliott for organizing a great event. Professors Bagley and Frazier start by analyzing a recent debate over housing policy before diving into the weeds of the Abundance Agenda, its nexus with AI policy, and what this all means for the future of legal education and governance.
  • How To Use, Govern, And Lead On AI? Rep. Begich Points The Path Forward

    46:07|
    Representative Nick Begich, Alaska's at-large member of Congress, joins Kevin Frazier, Director of the AI Innovation and Law Program at the University of Texas School of Law and a Senior Fellow at the Abundance Institute, to discuss the current state of AI policy on the Hill. As one of the few members of Congress with a background in tech, Rep. Begich offers a unique perspective on this evolving regulatory question. The two also assess how Alaska may become a leader in developing AI infrastructure. Finally, Rep. Begich shares how he and his staff leverage AI to improve their own operations.
  • Should AI Laws Be Subject To A Higher Standard? The Right to Compute with Kendall Cotton

    39:31|
    Kendall Cotton, Founder and CEO of Montana’s Frontier Institute, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss Montana’s groundbreaking Right to Compute Act and how Montana hopes to protect access to AI and related technologies. They discuss the history and reach of the Act and why other states may want to follow Montana's lead.
  • Why Data Governance Is the Key to AI Biosecurity, with Jassi Pannu and Doni Bloomfield

    49:56|
    Alan Rozenshtein, research director at Lawfare, spoke with Jassi Pannu, assistant professor at the Johns Hopkins Bloomberg School of Public Health and senior scholar at the Johns Hopkins Center for Health Security, and Doni Bloomfield, associate professor of law at Fordham Law School, about their proposed framework for governing biological data to reduce AI-enabled biosecurity risks. The conversation covered the origins of the proposal in the 50th anniversary of the 1975 Asilomar conference on recombinant DNA; the distinction between general-purpose AI models and biology-specific foundation models like genomic language models; the biosecurity threats posed by AI, including uplift of novice actors and raising the ceiling of expert capabilities; the proposed biosecurity data levels (BDL 0-4) framework and how it draws on precedents from biosafety levels and genetic privacy regulation; the challenge of capabilities-based rather than pathogen-based data classification; the institutional and regulatory mechanisms for enforcement, including the role of NIH grant conditions and a proposed mandatory federal regime; international collaboration and the importance of U.S. leadership given that most high-tier data is generated domestically; the relationship between the proposal and open-source biological AI development; and the offense-defense imbalance in biosecurity and the case for mandatory gene synthesis screening.
    Mentioned in this episode:
      Jassi Pannu and Doni Bloomfield et al., "Biological data governance in an age of AI," Science (2026)
      Jassi Pannu, Doni Bloomfield, et al., "Dual-use capabilities of concern of biological AI models," PLOS Computational Biology (2025)
      Dario Amodei, "The Adolescence of Technology" (2026)
      The Genesis Mission Executive Order (November 2025)
  • Rapid Response Pod: Trump's New AI Framework with Helen Toner & Dean Ball

    25:23|
    On Friday, March 20, the Trump Administration announced a National Policy Framework for AI. White House officials have stressed that they want Congress to act on the framework's recommendations within the year. What this all means for AI policy is an open question that warrants calling in two of the smartest folks in the business: Helen Toner, Interim Executive Director at Georgetown's Center for Security and Emerging Technology (CSET), and Dean Ball, a senior fellow at the Foundation for American Innovation. This rapid response episode cuts to the chase as the three make sense of this important development in the national AI policy conversation.