Scaling Laws
All Episodes

Why Data Governance Is the Key to AI Biosecurity, with Jassi Pannu and Doni Bloomfield
49:56|Alan Rozenshtein, research director at Lawfare, spoke with Jassi Pannu, assistant professor at the Johns Hopkins Bloomberg School of Public Health and senior scholar at the Johns Hopkins Center for Health Security, and Doni Bloomfield, associate professor of law at Fordham Law School, about their proposed framework for governing biological data to reduce AI-enabled biosecurity risks. The conversation covered the origins of the proposal in the 50th anniversary of the 1975 Asilomar conference on recombinant DNA; the distinction between general-purpose AI models and biology-specific foundation models like genomic language models; the biosecurity threats posed by AI, including uplift of novice actors and raising the ceiling of expert capabilities; the proposed biosecurity data levels (BDL 0-4) framework and how it draws on precedents from biosafety levels and genetic privacy regulation; the challenge of capabilities-based rather than pathogen-based data classification; the institutional and regulatory mechanisms for enforcement, including the role of NIH grant conditions and a proposed mandatory federal regime; international collaboration and the importance of U.S. leadership given that most high-tier data is generated domestically; the relationship between the proposal and open-source biological AI development; and the offense-defense imbalance in biosecurity and the case for mandatory gene synthesis screening.
Mentioned in this episode:
Jassi Pannu and Doni Bloomfield et al., "Biological data governance in an age of AI," Science (2026)
Jassi Pannu, Doni Bloomfield, et al., "Dual-use capabilities of concern of biological AI models," PLOS Computational Biology (2025)
Dario Amodei, "The Adolescence of Technology" (2026)
The Genesis Mission Executive Order (November 2025)
Rapid Response Pod: Trump's New AI Framework with Helen Toner & Dean Ball
25:23|On Friday, March 20, the Trump Administration announced a National Policy Framework for AI. White House officials have stressed that they want Congress to act on the framework's recommendations within the year. What this all means for AI policy is an open question that warrants calling in two of the smartest folks in the business: Helen Toner, Interim Executive Director at Georgetown's Center for Security and Emerging Technology (CSET), and Dean Ball, a senior fellow at the Foundation for American Innovation. This rapid response episode cuts to the chase as everyone makes sense of this important development in the national AI policy conversation.
Is AI a Death Sentence for Civic Institutions?, with Jessica Silbey and Woodrow Hartzog
53:11|Alan Rozenshtein, research director at Lawfare, spoke with Woodrow Hartzog, the Andrew R. Randall Professor of Law at Boston University School of Law, and Jessica Silbey, Professor of Law and Honorable Frank R. Kenison Distinguished Scholar in Law at Boston University School of Law, about their new paper "How AI Destroys Institutions," which argues that AI systems threaten to erode the civic institutions that organize democratic society. The conversation covered the sociological concept of institutions and why they differ from organizations; the idea of technological affordances from science and technology studies; how AI undermines human expertise through both accuracy and inaccuracy; the cognitive offloading problem and whether AI-driven skill atrophy differs from past technological transitions; whether AI-generated decisions can satisfy the legitimacy requirements of the rule of law; the role of reason-giving, contestation, and political accountability in legal institutions; the tension between the paper's sweeping diagnosis and its more incremental prescriptions; and the case for bespoke, institution-specific AI tools over general-purpose deployment.
Can AI Enable Human Agency?, with Tomicah Tillemann
46:13|Tomicah Tillemann, President at Project Liberty Institute, joins the show. Tomicah offers a unique perspective on regulating emerging technology given his time as a venture capitalist and head of policy at Andreessen Horowitz and Haun Ventures. His contemporary focus is on identifying “policy solutions that enable human agency and human flourishing in an AI-powered world.” It’s a tall order that he breaks down with Kevin Frazier, a Senior Fellow at the Abundance Institute, Adjunct Research Fellow at the Cato Institute, and a Senior Editor at Lawfare.
Live from Ashby: Taking a Long View on AI Governance with Austin Carson and Caleb Watney
58:24|Kevin Frazier hangs out with Caleb Watney of the Institute for Progress and Austin Carson of SeedAI at the Ashby Workshops to discuss the long-run policy foundations needed for the AI Age. Rather than focusing on near-term regulation, the conversation explores how AI challenges existing assumptions about state capacity, research funding, talent pipelines, and institutional design. Caleb and Austin unpack concepts like meta-science, public compute infrastructure, immigration policy, and congressional expertise—and explain why these “boring” policy areas may matter more for AI outcomes than headline-grabbing rules. The episode also examines how AI policy discourse has evolved in Washington, what lessons policymakers should draw from efforts like the National AI Research Resource, and why many AI governance failures may ultimately be failures of institutions rather than intent.
Scaling Laws x AI Summer: Who Controls the Machine God?
57:40|Alan Rozenshtein, associate professor of law at the University of Minnesota and research director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and senior editor at Lawfare, were joined by Dean Ball, senior fellow at the Foundation for American Innovation and author of the Hyperdimensional newsletter, and Timothy B. Lee, author of the Understanding AI newsletter, for a joint crossover episode of the Scaling Laws and AI Summer podcasts about the escalating dispute between Anthropic and the Pentagon over AI usage restrictions in military contracts. The conversation covered the timeline of the Anthropic-Pentagon dispute and Secretary Hegseth's supply chain risk designation; the legal basis for the designation under 10 U.S.C. § 3252 and whether it was intended to apply to domestic companies; the role of personality and politics in the dispute; OpenAI's competing Pentagon contract and debate over whether its terms actually match Anthropic's red lines; public opinion polling showing bipartisan concern about AI mass surveillance and autonomous weapons; the broader question of what the government-AI industry relationship should look like; the prospect of partial or full nationalization of AI capabilities; and whether frontier AI models are actually decisive for military applications.
In Defense of Optimism with Packy McCormick
46:06|Packy McCormick, founder of Not Boring and Not Boring Capital, joins Kevin Frazier, Director of the AI Innovation and Law Program at the University of Texas School of Law and a Senior Fellow at the Abundance Institute, to discuss the power of narratives in tech, the intersection of investing and policy, and what it means to build frameworks for the future in an age of rapid technological change.
The Pentagon Goes to War With Anthropic
46:02|An impasse is coming to a head, and the resolution is unknown. The Department of Defense has made clear that Anthropic has until 5:01pm ET today, February 27th, 2026, to permit its use of Claude for any lawful purpose. CEO Dario Amodei doubled down on his insistence that Anthropic tools should not be used for mass domestic surveillance or the operation of lethal autonomous weapons. The Pentagon's spokesman agrees that such usage would indeed be unlawful, and yet the two parties cannot come to terms. If the DOD is to be taken at its word, the likely result is that Anthropic will be labeled a supply chain risk—an unprecedented decision with huge business ramifications. Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, joins Kevin Frazier, Senior Fellow at the Abundance Institute and a Senior Editor at Lawfare, to break this all down. You can also read more on this weighty issue via Alan's two recent Lawfare pieces here and here.
Can AI Make AI Regulation Cheaper?, with Cullen O'Keefe and Kevin Frazier
51:43|Alan Rozenshtein, research director at Lawfare, spoke with Cullen O'Keefe, research director at the Institute for Law & AI, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas at Austin School of Law and senior editor at Lawfare, about their paper, "Automated Compliance and the Regulation of AI" (and associated Lawfare article), which argues that AI systems can automate many regulatory compliance tasks, loosening the trade-off between safety and innovation in AI policy. The conversation covered the disproportionate burden of compliance costs on startups versus large firms; the limitations of compute thresholds as a proxy for targeting AI regulation; how AI can automate tasks like transparency reporting, model evaluations, and incident disclosure; the Goodhart's Law objection to automated compliance; the paper's proposal for "automatability triggers" that condition regulation on the availability of cheap compliance tools; analogies to sunrise clauses in other areas of law; incentive problems in developing compliance-automating AI; the speculative future of automated compliance meeting automated governance; and how co-authoring the paper shifted each author's views on the AI regulation debate.