Scaling Laws
A Member of Meta’s Oversight Board Discusses the Board’s New Decision
When Facebook whistleblower Frances Haugen shared a trove of internal company documents with the Wall Street Journal in 2021, some of the most dramatic revelations concerned the company's use of a so-called "cross-check" system that, according to the Journal, essentially exempted certain high-profile users from the platform's usual rules. After the Journal published its report, Facebook—which has since changed its name to Meta—asked the platform's independent Oversight Board to weigh in on the program. And now, a year later, the Board has finally released its opinion.
On this episode of Arbiters of Truth, our series on the online information ecosystem, Lawfare senior editors Alan Rozenshtein and Quinta Jurecic sat down with Suzanne Nossel, a member of the Oversight Board and the CEO of PEN America. She talked us through the Board’s findings, its criticisms of cross-check, and its recommendations for Meta going forward.

Live from Ashby: Taking a Long View on AI Governance with Austin Carson and Caleb Watney
58:24|Kevin Frazier hangs out with Caleb Watney of the Institute for Progress and Austin Carson of SeedAI at the Ashby Workshops to discuss the long-run policy foundations needed for the AI Age. Rather than focusing on near-term regulation, the conversation explores how AI challenges existing assumptions about state capacity, research funding, talent pipelines, and institutional design. Caleb and Austin unpack concepts like meta-science, public compute infrastructure, immigration policy, and congressional expertise, and explain why these "boring" policy areas may matter more for AI outcomes than headline-grabbing rules. The episode also examines how AI policy discourse has evolved in Washington, what lessons policymakers should draw from efforts like the National AI Research Resource, and why many AI governance failures may ultimately be failures of institutions rather than intent.
Scaling Laws x AI Summer: Who Controls the Machine God?
57:40|Alan Rozenshtein, associate professor of law at the University of Minnesota and research director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and senior editor at Lawfare, were joined by Dean Ball, senior fellow at the Foundation for American Innovation and author of the Hyperdimensional newsletter, and Timothy B. Lee, author of the Understanding AI newsletter, for a joint crossover episode of the Scaling Laws and AI Summer podcasts about the escalating dispute between Anthropic and the Pentagon over AI usage restrictions in military contracts. The conversation covered the timeline of the Anthropic-Pentagon dispute and Secretary Hegseth's supply chain risk designation; the legal basis for the designation under 10 U.S.C. § 3252 and whether it was intended to apply to domestic companies; the role of personality and politics in the dispute; OpenAI's competing Pentagon contract and debate over whether its terms actually match Anthropic's red lines; public opinion polling showing bipartisan concern about AI mass surveillance and autonomous weapons; the broader question of what the government-AI industry relationship should look like; the prospect of partial or full nationalization of AI capabilities; and whether frontier AI models are actually decisive for military applications.
In Defense of Optimism with Packy McCormick
46:06|Packy McCormick, founder of Not Boring and Not Boring Capital, joins Kevin Frazier, Director of the AI Innovation and Law Program at the University of Texas School of Law and a Senior Fellow at the Abundance Institute, to discuss the power of narratives in tech, the intersection of investing and policy, and what it means to build frameworks for the future in an age of rapid technological change.
The Pentagon Goes to War With Anthropic
46:02|An impasse is coming to a head. The resolution is unknown. The Department of Defense has made clear that Anthropic has until 5:01pm ET today, February 27th, 2026, to permit its use of Claude for any lawful purpose. CEO Dario Amodei doubled down on his insistence that Anthropic tools should not be used for mass domestic surveillance or the operation of lethal autonomous weapons. The Pentagon's spokesman agrees that such usage would indeed be unlawful, and yet the two parties cannot come to terms. If the DOD is to be taken at its word, the likely result is that Anthropic will be labeled a supply chain risk, an unprecedented decision with huge business ramifications. Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, joins Kevin Frazier, Senior Fellow at the Abundance Institute and a Senior Editor at Lawfare, to break this all down. You can also read more on this weighty issue via Alan's two recent Lawfare pieces here and here.
Can AI Make AI Regulation Cheaper?, with Cullen O'Keefe and Kevin Frazier
51:43|Alan Rozenshtein, research director at Lawfare, spoke with Cullen O'Keefe, research director at the Institute for Law & AI, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas at Austin School of Law and senior editor at Lawfare, about their paper, "Automated Compliance and the Regulation of AI" (and associated Lawfare article), which argues that AI systems can automate many regulatory compliance tasks, loosening the trade-off between safety and innovation in AI policy. The conversation covered the disproportionate burden of compliance costs on startups versus large firms; the limitations of compute thresholds as a proxy for targeting AI regulation; how AI can automate tasks like transparency reporting, model evaluations, and incident disclosure; the Goodhart's Law objection to automated compliance; the paper's proposal for "automatability triggers" that condition regulation on the availability of cheap compliance tools; analogies to sunrise clauses in other areas of law; incentive problems in developing compliance-automating AI; the speculative future of automated compliance meeting automated governance; and how co-authoring the paper shifted each author's views on the AI regulation debate.
Claude's Constitution, with Amanda Askell
47:25|Alan Rozenshtein, research director at Lawfare, and Kevin Frazier, senior editor at Lawfare, spoke with Amanda Askell, head of personality alignment at Anthropic, about Claude's Constitution: a 20,000-word document that describes the values, character, and ethical framework of Anthropic's flagship AI model and plays a direct role in its training. The conversation covered how the constitution is used during supervised learning and reinforcement learning to shape Claude's behavior; analogies to constitutional law, including fidelity to text, the potential for a body of "case law," and the principal hierarchy of Anthropic, operators, and users; the decision to ground the constitution in virtue ethics and practical judgment rather than rigid rules; the document's treatment of Claude's potential moral patienthood and the question of AI personhood; whether the constitution's values are too Western and culturally specific; the tension between Anthropic's commercial incentives and its stated mission; and whether the constitutional approach can generalize to specialized domains like cybersecurity and military applications.
Live from Ashby: Adaptive AI Governance with Gillian Hadfield and Andrew Freedman
54:35|Kevin Frazier sits down with Andrew Freedman of Fathom and Gillian Hadfield, AI governance scholar, at the Ashby Workshops to examine innovative models for AI regulation.

They discuss:
- Why traditional regulation struggles with rapid AI innovation.
- The concept of Regulatory Markets and how it aligns with the unique governance challenges posed by AI.
- Critiques of hybrid governance: concerns about a "race to the bottom," the limits of soft law on catastrophic risks, and how liability frameworks interact with governance.
- What success looks like for the Ashby Workshops and the future of adaptive AI policy design.

Whether you're a policy wonk, technologist, or governance skeptic, this episode bridges ideas and practice in a time of rapid technological change.
The Persuasion Machine: David Rand on How LLMs Can Reshape Political Beliefs
58:05|Alan Rozenshtein, research director at Lawfare, and Renee DiResta, associate research professor at Georgetown University's McCourt School of Public Policy and contributing editor at Lawfare, spoke with David Rand, professor of information science, marketing, and psychology at Cornell University. The conversation covered how inattention to accuracy drives misinformation sharing and the effectiveness of accuracy nudges; how AI chatbots can durably reduce conspiracy beliefs through evidence-based dialogue; research showing that conversational AI can shift voters' candidate preferences, with effect sizes several times larger than traditional political ads; the finding that AI persuasion works through presenting factual claims, but that the claims need not be true to be effective; partisan asymmetries in misinformation sharing; the threat of AI-powered bot swarms on social media; the political stakes of training data and system prompts; and the policy case for transparency requirements.

Additional reading:
- "Durably Reducing Conspiracy Beliefs Through Dialogues with AI," Science (2024)
- "Persuading Voters Using Human-Artificial Intelligence Dialogues," Nature (2025)
- "The Levers of Political Persuasion with Conversational Artificial Intelligence," Science (2025)
- "How Malicious AI Swarms Can Threaten Democracy," Science (2026)
Alan and Kevin Join the Cognitive Revolution
01:31:07|Nathan Labenz, host of the Cognitive Revolution, sat down with Alan and Kevin to talk about the intersection of AI and the law. The trio explores everything from how AI may address the shortage of attorneys in rural communities to the feasibility and desirability of the so-called "Right to Compute." Learn more about the Cognitive Revolution here. It's our second favorite AI podcast!