Scaling Laws
How To Use, Govern, And Lead On AI? Rep. Begich Points The Path Forward
Representative Nick Begich, Alaska's at-large member of Congress, joins Kevin Frazier, Director of the AI Innovation and Law Program at the University of Texas School of Law and a Senior Fellow at the Abundance Institute, to discuss the current state of AI policy on the Hill. As one of the few members of Congress with a background in tech, Rep. Begich offers a unique perspective on this evolving regulatory question. The two also assess how Alaska may become a leader in developing AI infrastructure. Finally, Rep. Begich shares how he and his staff leverage AI to improve their own operations.

Facts & Myths About AI's Energy Usage with Gavin McCormick
49:31|In this episode of Scaling Laws, we explore how the "black box" of global greenhouse gas emissions is being cracked open by artificial intelligence and satellite imagery. Kevin Frazier, Director of the AI Innovation and Law Program at the University of Texas School of Law and a Senior Fellow at the Abundance Institute, talks with Gavin McCormick, the founder of ClimateTrace, a global coalition that has revolutionized the process of identifying and quantifying emissions. For decades, climate policy has relied on self-reported data from nations and corporations—a system prone to gaps and "greenwashing." McCormick’s work leverages machine learning to monitor every major source of emissions on Earth in near real-time. We discuss the legal implications of "radical transparency," how AI-driven data can be used to enforce regulations and measure claims, and the myths and facts of AI’s environmental consequences. To get in touch with us, email scalinglaws@lawfaremedia.org. Logan Le-Jeffries, a member of the AI Wranglers student program at the University of Texas School of Law, provided research assistance with this episode.
AI as Abnormal Technology? Scott Sullivan Analyzes AI in the Military Domain
45:28|Scott Sullivan, professor of law at the U.S. Military Academy at West Point and a leading contributor to the Manual on International Law Applicable to Artificial Intelligence in Warfare, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to examine whether AI should be understood as a “normal” or “abnormal” technology. Drawing on his recent article, Sullivan argues that while AI may diffuse slowly and unevenly in civilian contexts, military AI operates under fundamentally different conditions—where strategic competition rewards speed, costs are often externalized, and meaningful oversight is limited by secrecy and epistemic uncertainty. The conversation explores how these dynamics challenge prevailing AI governance frameworks, what current military deployments reveal about the trajectory of AI adoption, and whether existing legal and policy tools are equipped to manage a domain where the pace of technological integration may outstrip the institutions designed to constrain it.
Lawfare Daily: Talking About Sam Altman with Ronan Farrow and Andrew Marantz
49:37|Senior Editor Kate Klonick interviews reporters Ronan Farrow and Andrew Marantz on their recent article in the New Yorker, titled “Sam Altman May Control Our Future—Can He Be Trusted?” In their 16,000-word piece, Farrow and Marantz create a cohesive narrative with receipts around Sam Altman, the products he's building at OpenAI, and how he's selling them not just to investors and the public, but also to regulators and world leaders. Klonick unpacks three key areas that are discussed in the piece: potential concerns of fraud, ongoing trust and safety and alignment issues at OpenAI, and the national security concerns that the article exposes in the "country plan" and Altman's entanglements in the Gulf. The discussion ends with a basic question: Are any of these legal issues enough to stop or correct the course of OpenAI, with its estimated $1T IPO in the coming weeks?
Why AI Needs Independent Auditors, with Miles Brundage
53:06|Alan Rozenshtein, research director at Lawfare, spoke with Miles Brundage, founding executive director of the AI Verification and Evaluation Research Institute (AVERI) and former senior advisor for AGI readiness at OpenAI, about the state of AI safety and accountability and AVERI's vision for independent third-party auditing of frontier AI companies. The conversation covered the weaknesses of current AI regulations, including California's SB 53 and New York's RAISE Act; why Brundage left OpenAI to build an independent nonprofit; AVERI's case for shifting the unit of analysis from individual AI models to the organizations that build them; the "Volkswagen problem" of deception-proofing safety evaluations; a framework of AI Assurance Levels ranging from baseline transparency to treaty-grade verification; the limitations of safety benchmarks and the BenchRisk project's findings; market-based mechanisms for driving audit adoption, including insurance, procurement, and investor pressure; and how AVERI navigates the tension between proximity to industry and independence from it.
Mentioned in this episode:
Frontier AI Auditing: Toward Rigorous Third-Party Assessment of Safety and Security Practices at Leading AI Companies, AVERI (2026)
Risk Management for Mitigating Benchmark Failure Modes: BenchRisk, NeurIPS 2025
Why I'm Leaving OpenAI and What I'm Doing Next, Miles Brundage, Substack, October 2024
Productivity Boom? Labor Shock? Google's Chief Economist on AI
50:50|Fabien Curto Millet, Chief Economist at Google, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Fellow at the Abundance Institute, and Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, to discuss the potential of AI to catalyze a productivity boom while also addressing labor market instability. The three dive into likely changes in AI capabilities as well as ongoing reasons for slow organizational adoption of AI. Finally, they close with a brief discussion of potential policy approaches.
Abundance & AI? Nicholas Bagley Explains
43:50|Nicholas Bagley, Professor of Law at Michigan Law, joins Kevin Frazier, Director of the AI Innovation and Law Program at the University of Texas School of Law and a Senior Fellow at the Abundance Institute, for a live recording of the podcast in Ann Arbor. Thanks to Graham Hardig and Brinson Elliott for organizing a great event. Professors Bagley and Frazier start by analyzing a recent debate over housing policy before diving into the weeds of the Abundance Agenda, its nexus with AI policy, and what this all means for the future of legal education and governance.
Should AI Laws Be Subject To A Higher Standard? The Right to Compute with Kendall Cotton
39:31|Kendall Cotton, Founder and CEO of Montana’s Frontier Institute, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss Montana’s groundbreaking Right to Compute Act and how Montana hopes to protect access to AI and related technologies. They discuss the history and reach of this act and why other states may want to follow Montana's lead.
Why Data Governance Is the Key to AI Biosecurity, with Jassi Pannu and Doni Bloomfield
49:56|Alan Rozenshtein, research director at Lawfare, spoke with Jassi Pannu, assistant professor at the Johns Hopkins Bloomberg School of Public Health and senior scholar at the Johns Hopkins Center for Health Security, and Doni Bloomfield, associate professor of law at Fordham Law School, about their proposed framework for governing biological data to reduce AI-enabled biosecurity risks. The conversation covered the origins of the proposal in the 50th anniversary of the 1975 Asilomar conference on recombinant DNA; the distinction between general-purpose AI models and biology-specific foundation models like genomic language models; the biosecurity threats posed by AI, including uplift of novice actors and raising the ceiling of expert capabilities; the proposed biosecurity data levels (BDL 0-4) framework and how it draws on precedents from biosafety levels and genetic privacy regulation; the challenge of capabilities-based rather than pathogen-based data classification; the institutional and regulatory mechanisms for enforcement, including the role of NIH grant conditions and a proposed mandatory federal regime; international collaboration and the importance of U.S. leadership given that most high-tier data is generated domestically; the relationship between the proposal and open-source biological AI development; and the offense-defense imbalance in biosecurity and the case for mandatory gene synthesis screening.
Mentioned in this episode:
Jassi Pannu and Doni Bloomfield et al., "Biological data governance in an age of AI," Science (2026)
Jassi Pannu, Doni Bloomfield, et al., "Dual-use capabilities of concern of biological AI models," PLOS Computational Biology (2025)
Dario Amodei, "The Adolescence of Technology" (2026)
The Genesis Mission Executive Order (November 2025)