Scaling Laws
All Episodes

Forecasting AI's Impact on the Economy with Deger Turan, CEO of Metaculus
53:30 | Deger Turan, CEO of Metaculus, joins Kevin Frazier to unpack new forecasts on how AI could reshape the labor market over the next decade.

The conversation centers on a striking divergence between Metaculus forecasts and projections from institutions like the Bureau of Labor Statistics—raising fundamental questions about whether existing tools for understanding the economy can keep pace with rapid technological change.

Deger walks through key findings from the Labor Automation Forecasting Hub, including:

- A potential decline in overall employment by 2035
- Increased pressure on entry-level workers and early-career pipelines
- The emergence of “lean” firms generating more value with fewer employees
- A counterintuitive “wage paradox,” where fewer jobs may coincide with higher wages
- The growing role of political power, regulation, and licensing in shaping labor outcomes

The discussion also explores second-order effects, including how contraction in high-paying sectors could ripple through local economies, and what a shift away from traditional four-year degrees might mean for students and policymakers.

Finally, Deger situates these forecasts within a broader vision: forecasting as a form of epistemic infrastructure. As AI accelerates change, the ability to form accurate beliefs about the future—and update them quickly—may become a core component of effective governance.

***

This episode was recorded on April 23, 2026. Metaculus is a live platform; it's likely that forecasts mentioned have subsequently changed.
Rapid Response: An "FDA for AI" at the White House?, with Dean Ball
33:11 | Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and Senior Editor at Lawfare, spoke with Dean Ball, Senior Fellow at the Foundation for American Innovation and former Senior Policy Advisor for AI at the White House Office of Science and Technology Policy, about the Trump administration's reported plans to vet frontier AI models before public release.

They discussed how Anthropic's Mythos model reshaped the administration's posture on AI risk; why the executive branch lacks clear legal authority for a mandatory pre-deployment vetting regime; the voluntary "kick the tires" framework Frazier and Ball have proposed using CAISI and the Cyber Resilience Fund; whether an FDA-style licensing regime is ultimately inevitable for frontier AI; and the institutional design challenges of building AI oversight that can scale with rapidly improving model capabilities.
Lawfare Daily: Why AI Won’t Revolutionize Law (At Least Not Yet), with Arvind Narayanan and Justin Curl
44:21 | Alan Rozenshtein, research director at Lawfare, speaks with Justin Curl, a third-year J.D. candidate at Harvard Law School, and Arvind Narayanan, professor of computer science at Princeton University and director of the Center for Information Technology Policy, about their new Lawfare research report, “AI Won't Automatically Make Legal Services Cheaper,” co-authored with Princeton Ph.D. candidate Sayash Kapoor.

The report argues that despite AI's impressive capabilities, structural features of the legal profession will prevent the technology from delivering dramatic cost savings anytime soon. The conversation covered the "AI as normal technology" framework and why technological diffusion takes longer than capability gains suggest; why legal services are expensive due to their nature as credence goods, adversarial dynamics, and professional regulations; three bottlenecks preventing AI from reducing legal costs, including unauthorized practice of law rules, arms-race dynamics in litigation, and the need for human oversight; proposed reforms such as regulatory sandboxes and regulatory markets; and the normative case for keeping human decision-makers in the judicial system.
An EU-perspective on America’s Approach to AI with Marietje Schaake
45:07 | In this episode of Scaling Laws, Kate Klonick, Associate Professor of Law at St. John’s University and a fellow at the Brookings Institution, and Kevin Frazier, Director of the AI Innovation and Law Program at the University of Texas School of Law and a senior fellow at the Abundance Institute, are joined by Marietje Schaake, the International Policy Director at Stanford University’s Cyber Policy Center and author of The Tech Coup: How to Save Democracy from Silicon Valley. A former Member of the European Parliament, Schaake has long been a leading architect of digital rights and tech governance.

Their conversation explores the central thesis of her work: that a handful of tech giants have effectively staged a "coup" over democratic functions, from national security to the very infrastructure of public discourse. They examine the democratic implications of AI development, the "privatization of policy," and why Schaake believes that without urgent intervention, the "rule of law" is being replaced by the "rule of code."

To get in touch with us, email scalinglaws@lawfaremedia.org. Logan Le-Jeffries, a member of the AI Wranglers student program at the University of Texas School of Law, provided research assistance with this episode.
Eliminating Barriers to AI Adoption with Clarion AI's Bennett Borden
50:15 | Bennett Borden, Founder and CEO of Clarion AI Partners, joins Kevin Frazier, the AI Innovation and Law Fellow at UT and a Senior Fellow at the Abundance Institute, to discuss AI adoption as well as the future of the law and legal practice. The two explore Bennett’s unique background, Clarion’s interdisciplinary approach to AI, and the importance of AI adoption. They also cover innovative work underway at major AI labs to align model use with user expectations.
Facts & Myths About AI's Energy Usage with Gavin McCormick
49:31 | In this episode of Scaling Laws, we explore how the "black box" of global greenhouse gas emissions is being cracked open by artificial intelligence and satellite imagery. Kevin Frazier, Director of the AI Innovation and Law Program at the University of Texas School of Law and a Senior Fellow at the Abundance Institute, talks with Gavin McCormick, the founder of ClimateTrace, a global coalition that has revolutionized the process of identifying and quantifying emissions.

For decades, climate policy has relied on self-reported data from nations and corporations—a system prone to gaps and "greenwashing." McCormick’s work leverages machine learning to monitor every major source of emissions on Earth in near real-time. We discuss the legal implications of "radical transparency," how AI-driven data can be used to enforce regulations and measure claims, and the myths and facts of AI’s environmental consequences.

To get in touch with us, email scalinglaws@lawfaremedia.org. Logan Le-Jeffries, a member of the AI Wranglers student program at the University of Texas School of Law, provided research assistance with this episode.
AI as Abnormal Technology? Scott Sullivan Analyzes AI in the Military Domain
45:28 | Scott Sullivan, professor of law at the U.S. Military Academy at West Point and a leading contributor to the Manual on International Law Applicable to Artificial Intelligence in Warfare, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to examine whether AI should be understood as a “normal” or “abnormal” technology.

Drawing on his recent article, Sullivan argues that while AI may diffuse slowly and unevenly in civilian contexts, military AI operates under fundamentally different conditions—where strategic competition rewards speed, costs are often externalized, and meaningful oversight is limited by secrecy and epistemic uncertainty.

The conversation explores how these dynamics challenge prevailing AI governance frameworks, what current military deployments reveal about the trajectory of AI adoption, and whether existing legal and policy tools are equipped to manage a domain where the pace of technological integration may outstrip the institutions designed to constrain it.
Lawfare Daily: Talking About Sam Altman with Ronan Farrow and Andrew Marantz
49:37 | Senior Editor Kate Klonick interviews reporters Ronan Farrow and Andrew Marantz on their recent article in the New Yorker, titled “Sam Altman May Control Our Future—Can He Be Trusted?” In their 16,000-word piece, Farrow and Marantz create a cohesive narrative with receipts around Sam Altman, the products he's building at OpenAI, and how he's selling them not just to investors and the public, but also to regulators and world leaders.

Klonick unpacks three key areas that are discussed in the piece: potential concerns of fraud, ongoing trust and safety and alignment issues at OpenAI, and the national security concerns that the article exposes in the "country plan" and Altman's entanglements in the Gulf. The discussion ends with a basic question: Are any of these legal issues enough to stop or correct the course of OpenAI, with its estimated $1T IPO in the coming weeks?
Why AI Needs Independent Auditors, with Miles Brundage
53:06 | Alan Rozenshtein, research director at Lawfare, spoke with Miles Brundage, founding executive director of the AI Verification and Evaluation Research Institute (AVERI) and former senior advisor for AGI readiness at OpenAI, about the state of AI safety and accountability and AVERI's vision for independent third-party auditing of frontier AI companies.

The conversation covered the weaknesses of current AI regulations, including California's SB 53 and New York's RAISE Act; why Brundage left OpenAI to build an independent nonprofit; AVERI's case for shifting the unit of analysis from individual AI models to the organizations that build them; the "Volkswagen problem" of deception-proofing safety evaluations; a framework of AI Assurance Levels ranging from baseline transparency to treaty-grade verification; the limitations of safety benchmarks and the BenchRisk project's findings; market-based mechanisms for driving audit adoption, including insurance, procurement, and investor pressure; and how AVERI navigates the tension between proximity to industry and independence from it.

Mentioned in this episode:

- Frontier AI Auditing: Toward Rigorous Third-Party Assessment of Safety and Security Practices at Leading AI Companies, AVERI, 2026
- Risk Management for Mitigating Benchmark Failure Modes: BenchRisk, NeurIPS 2025
- Why I'm Leaving OpenAI and What I'm Doing Next, Miles Brundage, Substack, October 2024