
Scaling Laws
The Crisis Facing Efforts to Counter Election Disinformation
Over the course of the last two presidential elections, efforts by social media platforms and independent researchers to prevent falsehoods about election integrity from spreading have become increasingly central to civic health. But the warning signs are flashing as we head into 2024, and platforms are arguably in a worse position to counter falsehoods today than they were in 2020.
How could this be? On this episode of Arbiters of Truth, our series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic sat down with Dean Jackson, who previously joined the Lawfare Podcast to discuss his work as a staffer on the Jan. 6 committee. He worked with the Center for Democracy and Technology to put out a new report on the challenges facing efforts to prevent the spread of election disinformation. They talked through the political, legal, and economic pressures that are making this work increasingly difficult—and what it means for 2024.
More episodes

Claude's Constitution, with Amanda Askell
47:25 | Alan Rozenshtein, research director at Lawfare, and Kevin Frazier, senior editor at Lawfare, spoke with Amanda Askell, head of personality alignment at Anthropic, about Claude's Constitution: a 20,000-word document that describes the values, character, and ethical framework of Anthropic's flagship AI model and plays a direct role in its training. The conversation covered how the constitution is used during supervised learning and reinforcement learning to shape Claude's behavior; analogies to constitutional law, including fidelity to text, the potential for a body of "case law," and the principal hierarchy of Anthropic, operators, and users; the decision to ground the constitution in virtue ethics and practical judgment rather than rigid rules; the document's treatment of Claude's potential moral patienthood and the question of AI personhood; whether the constitution's values are too Western and culturally specific; the tension between Anthropic's commercial incentives and its stated mission; and whether the constitutional approach can generalize to specialized domains like cybersecurity and military applications.
Live from Ashby: Adaptive AI Governance with Gillian Hadfield and Andrew Freedman
54:35 | Kevin Frazier sits down with Andrew Freedman of Fathom and Gillian Hadfield, AI governance scholar, at the Ashby Workshops to examine innovative models for AI regulation. They discuss:
- Why traditional regulation struggles with rapid AI innovation.
- The concept of Regulatory Markets and how it aligns with the unique governance challenges posed by AI.
- Critiques of hybrid governance: concerns about a “race to the bottom,” the limits of soft law on catastrophic risks, and how liability frameworks interact with governance.
- What success looks like for Ashby Workshops and the future of adaptive AI policy design.
Whether you’re a policy wonk, technologist, or governance skeptic, this episode bridges ideas and practice in a time of rapid technological change.
The Persuasion Machine: David Rand on How LLMs Can Reshape Political Beliefs
58:05 | Alan Rozenshtein, research director at Lawfare, and Renee DiResta, associate research professor at Georgetown University's McCourt School of Public Policy and contributing editor at Lawfare, spoke with David Rand, professor of information science, marketing, and psychology at Cornell University. The conversation covered how inattention to accuracy drives misinformation sharing and the effectiveness of accuracy nudges; how AI chatbots can durably reduce conspiracy beliefs through evidence-based dialogue; research showing that conversational AI can shift voters' candidate preferences, with effect sizes several times larger than traditional political ads; the finding that AI persuasion works through presenting factual claims, but that the claims need not be true to be effective; partisan asymmetries in misinformation sharing; the threat of AI-powered bot swarms on social media; the political stakes of training data and system prompts; and the policy case for transparency requirements. Additional reading:
- "Durably Reducing Conspiracy Beliefs Through Dialogues with AI" - Science (2024)
- "Persuading Voters Using Human-Artificial Intelligence Dialogues" - Nature (2025)
- "The Levers of Political Persuasion with Conversational Artificial Intelligence" - Science (2025)
- "How Malicious AI Swarms Can Threaten Democracy" - Science (2026)
Alan and Kevin join the Cognitive Revolution.
01:31:07 | Nathan Labenz, host of the Cognitive Revolution, sat down with Alan and Kevin to talk about the intersection of AI and the law. The trio explores everything from how AI may address the shortage of attorneys in rural communities to the feasibility and desirability of the so-called "Right to Compute." Learn more about the Cognitive Revolution here. It's our second favorite AI podcast!
Is this your last "job"? The AI Economy with AEI's Brent Orrell
51:03 | Most folks agree that AI is going to drastically change our economy, the nature of work, and the labor market. What's unclear is when those changes will take place and how Americans can best navigate the transition. Brent Orrell, senior fellow at the American Enterprise Institute, joins Kevin Frazier, a Senior Fellow at the Abundance Institute, the AI Innovation and Law Fellow at the University of Texas School of Law, and a Senior Editor at Lawfare, to help tackle these and other weighty questions. Orrell has been studying the future of work since before it was cool. His two cents are very much worth a nickel in this important conversation. Send us your feedback (scalinglaws@lawfaremedia.org) and leave us a review!
Rapid Response Pod on The Implications of Claude's New Constitution
55:44 | Jakub Kraus, a Tarbell Fellow at Lawfare, spoke with Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Research Director at Lawfare, and Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law, a Senior Fellow at the Abundance Institute, and a Senior Editor at Lawfare, about Anthropic's newly released "constitution" for its AI model, Claude. The conversation covered the lengthy document's principles and underlying philosophical views, what these reveal about Anthropic's approach to AI development, how market forces are shaping the AI industry, and the weighty question of whether an AI model might ever be a conscious or morally relevant being. Mentioned in this episode:
- Kevin Frazier, "Interpreting Claude's Constitution," Lawfare
- Alan Rozenshtein, "The Moral Education of an Alien Mind," Lawfare
The Honorable AI? Shlomo Klapper Talks Judicial Use of AI
42:41 | Shlomo Klapper, founder of Learned Hand, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law, a Senior Fellow at the Abundance Institute, and a Senior Editor at Lawfare, to discuss the rise of judicial AI, the challenges of scaling technology inside courts, and the implications for legitimacy, due process, and access to justice.
How AI Can Transform Local Criminal Justice, with Francis Shen
51:48 | Alan Rozenshtein, research director at Lawfare, spoke with Francis Shen, Professor of Law at the University of Minnesota, director of the Shen Neurolaw Lab, and candidate for Hennepin County Attorney. The conversation covered the intersection of neuroscience, AI, and criminal justice; how AI tools can improve criminal investigations and clearance rates; the role of AI in adjudication and plea negotiations; precision sentencing and individualized justice; the ethical concerns around AI bias, fairness, and surveillance; the practical challenges of implementing AI systems in local government; building institutional capacity and public trust; and the future of the prosecutor's office in an AI-augmented justice system.
Release Schedules and Iterative Deployment with OpenAI's Ziad Reslan
51:08 | Ziad Reslan, a member of OpenAI’s Product Policy Staff and a Senior Fellow with the Schmidt Program on Artificial Intelligence, Emerging Technologies, and National Power at Yale University, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to talk about iterative deployment, the lab’s approach to testing and deploying its models. It’s a complex and, at times, controversial approach. Ziad provides the rationale behind iterative deployment and tackles some questions about whether the strategy has always worked as intended.