
Scaling Laws

Israel’s 'Cyber Unit' and Extra-legal Content Take-downs


Odds are, you haven’t heard of the Israeli government’s “Cyber Unit,” but it’s worth paying attention to whether or not you live in Israel or the Palestinian territories. It’s an entity that, among other things, reaches out to major online platforms like Facebook and Twitter with requests to remove content. It’s one of a number of such agencies around the globe, known as Internet Referral Units. Earlier in April, the Israeli Supreme Court gave a green light to the unit’s activities, rejecting a legal challenge that charged the unit with infringing on constitutional rights.

This week on Arbiters of Truth, the Lawfare Podcast’s miniseries on our online information ecosystem, Evelyn Douek and Quinta Jurecic talked to Fady Khoury and Rabea Eghbariah, who were part of the legal team that challenged the Cyber Unit’s work on behalf of Adalah, the Legal Center for Arab and Minority Rights in Israel. Why do they, and many other human rights activists, find Internet Referral Units so troubling, and why do governments like them so much? Why did the Israeli Supreme Court reject Fady and Rabea’s challenge to the unit’s activities? And what does the Court’s decision say about the developing relationship between countries’ legal systems and platform content moderation systems?

More episodes


  • Claude's Constitution, with Amanda Askell

    47:25
  • Live from Ashby: Adaptive AI Governance with Gillian Hadfield and Andrew Freedman

    54:35
    Kevin Frazier sits down with Andrew Freedman of Fathom and Gillian Hadfield, AI governance scholar, at the Ashby Workshops to examine innovative models for AI regulation. They discuss: why traditional regulation struggles with rapid AI innovation; the concept of Regulatory Markets and how it aligns with the unique governance challenges posed by AI; critiques of hybrid governance, including concerns about a “race to the bottom,” the limits of soft law on catastrophic risks, and how liability frameworks interact with governance; and what success looks like for the Ashby Workshops and the future of adaptive AI policy design. Whether you’re a policy wonk, technologist, or governance skeptic, this episode bridges ideas and practice in a time of rapid technological change.
  • The Persuasion Machine: David Rand on How LLMs Can Reshape Political Beliefs

    58:05
    Alan Rozenshtein, research director at Lawfare, and Renee DiResta, associate research professor at Georgetown University's McCourt School of Public Policy and contributing editor at Lawfare, spoke with David Rand, professor of information science, marketing, and psychology at Cornell University. The conversation covered how inattention to accuracy drives misinformation sharing and the effectiveness of accuracy nudges; how AI chatbots can durably reduce conspiracy beliefs through evidence-based dialogue; research showing that conversational AI can shift voters' candidate preferences, with effect sizes several times larger than those of traditional political ads; the finding that AI persuasion works through presenting factual claims, but that the claims need not be true to be effective; partisan asymmetries in misinformation sharing; the threat of AI-powered bot swarms on social media; the political stakes of training data and system prompts; and the policy case for transparency requirements. Additional reading: "Durably Reducing Conspiracy Beliefs Through Dialogues with AI," Science (2024); "Persuading Voters Using Human-Artificial Intelligence Dialogues," Nature (2025); "The Levers of Political Persuasion with Conversational Artificial Intelligence," Science (2025); "How Malicious AI Swarms Can Threaten Democracy," Science (2026).
  • Alan and Kevin join the Cognitive Revolution.

    01:31:07
    Nathan Labenz, host of the Cognitive Revolution, sat down with Alan and Kevin to talk about the intersection of AI and the law. The trio explores everything from how AI may address the shortage of attorneys in rural communities to the feasibility and desirability of the so-called "Right to Compute." Learn more about the Cognitive Revolution here. It's our second favorite AI podcast!
  • Is this your last "job"? The AI Economy With AEI's Brent Orrell

    51:03
    Most folks agree that AI is going to drastically change our economy, the nature of work, and the labor market. What's unclear is when those changes will take place and how Americans can best navigate the transition. Brent Orrell, senior fellow at the American Enterprise Institute, joins Kevin Frazier, a Senior Fellow at the Abundance Institute, the AI Innovation and Law Fellow at the University of Texas School of Law, and a Senior Editor at Lawfare, to help tackle these and other weighty questions. Orrell has been studying the future of work since before it was cool. His two cents are very much worth a nickel in this important conversation. Send us your feedback (scalinglaws@lawfaremedia.org) and leave us a review!
  • Rapid Response Pod on The Implications of Claude's New Constitution

    55:44
    Jakub Kraus, a Tarbell Fellow at Lawfare, spoke with Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Research Director at Lawfare, and Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law, a Senior Fellow at the Abundance Institute, and a Senior Editor at Lawfare, about Anthropic's newly released "constitution" for its AI model, Claude. The conversation covered the lengthy document's principles and underlying philosophical views, what these reveal about Anthropic's approach to AI development, how market forces are shaping the AI industry, and the weighty question of whether an AI model might ever be a conscious or morally relevant being. Mentioned in this episode: Kevin Frazier, "Interpreting Claude's Constitution," Lawfare; Alan Rozenshtein, "The Moral Education of an Alien Mind," Lawfare.
  • The Honorable AI? Shlomo Klapper Talks Judicial Use of AI

    42:41
    Shlomo Klapper, founder of Learned Hand, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law, a Senior Fellow at the Abundance Institute, and a Senior Editor at Lawfare, to discuss the rise of judicial AI, the challenges of scaling technology inside courts, and the implications for legitimacy, due process, and access to justice.
  • How AI Can Transform Local Criminal Justice, with Francis Shen

    51:48
    Alan Rozenshtein, research director at Lawfare, spoke with Francis Shen, Professor of Law at the University of Minnesota, director of the Shen Neurolaw Lab, and candidate for Hennepin County Attorney. The conversation covered the intersection of neuroscience, AI, and criminal justice; how AI tools can improve criminal investigations and clearance rates; the role of AI in adjudication and plea negotiations; precision sentencing and individualized justice; the ethical concerns around AI bias, fairness, and surveillance; the practical challenges of implementing AI systems in local government; building institutional capacity and public trust; and the future of the prosecutor's office in an AI-augmented justice system.
  • Release Schedules and Iterative Deployment with OpenAI's Ziad Reslan

    51:08
    Ziad Reslan, a member of OpenAI’s Product Policy Staff and a Senior Fellow with the Schmidt Program on Artificial Intelligence, Emerging Technologies, and National Power at Yale University, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to talk about iterative deployment, the lab’s approach to testing and deploying its models. It’s a complex and, at times, controversial approach. Ziad provides the rationale behind iterative deployment and tackles some questions about whether the strategy has always worked as intended.