Scaling Laws

Ravi Iyer on How to Improve Technology Through Design

On the latest episode of Arbiters of Truth, Lawfare's series on the information ecosystem, Quinta Jurecic and Alan Rozenshtein spoke with Ravi Iyer, the Managing Director of the Psychology of Technology Institute at the University of Southern California's Neely Center.

Earlier in his career, Ravi held a number of positions at Meta, where he worked to make Facebook's algorithm deliver actual value to users, not just "engagement." Quinta and Alan spoke with Ravi about why he thinks content moderation is a dead end and why thinking about the design of technology is the way forward to make sure that technology serves us, and not the other way around.

More episodes

  • Eliminating Barriers to AI Adoption with Clarion AI's Bennett Borden

    50:15|
    Bennett Borden, Founder and CEO of Clarion AI Partners, joins Kevin Frazier, the AI Innovation and Law Fellow at UT and a Senior Fellow at the Abundance Institute, to discuss AI adoption as well as the future of the law and legal practice. The two explore Bennett's unique background, Clarion's interdisciplinary approach to AI, and the importance of AI adoption. They also cover innovative work underway at major AI labs to align model use with user expectations.
  • Facts & Myths About AI's Energy Usage with Gavin McCormick

    49:31|
    In this episode of Scaling Laws, we explore how the "black box" of global greenhouse gas emissions is being cracked open by artificial intelligence and satellite imagery. Kevin Frazier, Director of the AI Innovation and Law Program at the University of Texas School of Law and a Senior Fellow at the Abundance Institute, talks with Gavin McCormick, the founder of Climate TRACE, a global coalition that has revolutionized the process of identifying and quantifying emissions. For decades, climate policy has relied on self-reported data from nations and corporations, a system prone to gaps and "greenwashing." McCormick's work leverages machine learning to monitor every major source of emissions on Earth in near real time. We discuss the legal implications of "radical transparency," how AI-driven data can be used to enforce regulations and measure claims, and the myths and facts of AI's environmental consequences. To get in touch with us, email scalinglaws@lawfaremedia.org. Logan Le-Jeffries, a member of the AI Wranglers student program at the University of Texas School of Law, provided research assistance for this episode.
  • AI as Abnormal Technology? Scott Sullivan Analyzes AI in the Military Domain

    45:28|
    Scott Sullivan, professor of law at the U.S. Military Academy at West Point and a leading contributor to the Manual on International Law Applicable to Artificial Intelligence in Warfare, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to examine whether AI should be understood as a "normal" or "abnormal" technology. Drawing on his recent article, Sullivan argues that while AI may diffuse slowly and unevenly in civilian contexts, military AI operates under fundamentally different conditions, where strategic competition rewards speed, costs are often externalized, and meaningful oversight is limited by secrecy and epistemic uncertainty. The conversation explores how these dynamics challenge prevailing AI governance frameworks, what current military deployments reveal about the trajectory of AI adoption, and whether existing legal and policy tools are equipped to manage a domain where the pace of technological integration may outstrip the institutions designed to constrain it.
  • Lawfare Daily: Talking About Sam Altman with Ronan Farrow and Andrew Marantz

    49:37|
    Senior Editor Kate Klonick interviews reporters Ronan Farrow and Andrew Marantz on their recent article in the New Yorker, titled "Sam Altman May Control Our Future—Can He Be Trusted?" In their 16,000-word piece, Farrow and Marantz create a cohesive narrative, with receipts, around Sam Altman, the products he's building at OpenAI, and how he's selling them not just to investors and the public, but also to regulators and world leaders. Klonick unpacks three key areas discussed in the piece: potential concerns of fraud, ongoing trust and safety and alignment issues at OpenAI, and the national security concerns that the article exposes in the "country plan" and Altman's entanglements in the Gulf. The discussion ends with a basic question: Are any of these legal issues enough to stop or correct the course of OpenAI, with its estimated $1T IPO in the coming weeks?
  • Why AI Needs Independent Auditors, with Miles Brundage

    53:06|
    Alan Rozenshtein, Research Director at Lawfare, spoke with Miles Brundage, founding executive director of the AI Verification and Evaluation Research Institute (AVERI) and former senior advisor for AGI readiness at OpenAI, about the state of AI safety and accountability and AVERI's vision for independent third-party auditing of frontier AI companies. The conversation covered the weaknesses of current AI regulations, including California's SB 53 and New York's RAISE Act; why Brundage left OpenAI to build an independent nonprofit; AVERI's case for shifting the unit of analysis from individual AI models to the organizations that build them; the "Volkswagen problem" of deception-proofing safety evaluations; a framework of AI Assurance Levels ranging from baseline transparency to treaty-grade verification; the limitations of safety benchmarks and the BenchRisk project's findings; market-based mechanisms for driving audit adoption, including insurance, procurement, and investor pressure; and how AVERI navigates the tension between proximity to industry and independence from it.
    Mentioned in this episode:
    "Frontier AI Auditing: Toward Rigorous Third-Party Assessment of Safety and Security Practices at Leading AI Companies," AVERI, 2026
    "Risk Management for Mitigating Benchmark Failure Modes: BenchRisk," NeurIPS 2025
    "Why I'm Leaving OpenAI and What I'm Doing Next," Miles Brundage, Substack, October 2024
  • Productivity Boom? Labor Shock? Google's Chief Economist on AI

    50:50|
    Fabien Curto Millet, Chief Economist at Google, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Fellow at the Abundance Institute, and Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, to discuss the potential of AI to catalyze a productivity boom while also addressing labor market instability. The three dive into likely changes in AI capabilities as well as ongoing reasons for slow organizational adoption of AI. Finally, they close with a brief discussion of potential policy approaches. 
  • Abundance & AI? Nicholas Bagley Explains

    43:50|
    Nicholas Bagley, Professor of Law at Michigan Law, joins Kevin Frazier, Director of the AI Innovation and Law Program at the University of Texas School of Law and a Senior Fellow at the Abundance Institute, for a live recording of the podcast in Ann Arbor. Thanks to Graham Hardig and Brinson Elliott for organizing a great event. Professors Bagley and Frazier start by analyzing a recent debate over housing policy before diving into the weeds of the Abundance Agenda, its nexus with AI policy, and what this all means for the future of legal education and governance.
  • How To Use, Govern, And Lead On AI? Rep. Begich Points The Path Forward

    46:07|
    Representative Nick Begich, Alaska's at-large member of Congress, joins Kevin Frazier, Director of the AI Innovation and Law Program at the University of Texas School of Law and a Senior Fellow at the Abundance Institute, to discuss the current state of AI policy on the Hill. As one of the few members of Congress with a background in tech, Rep. Begich offers a unique perspective on this evolving regulatory question. The two also assess how Alaska may become a leader in developing AI infrastructure. Finally, Rep. Begich shares how he and his staff leverage AI to improve their own operations.
  • Should AI Laws Be Subject To A Higher Standard? The Right to Compute with Kendall Cotton

    39:31|
    Kendall Cotton, Founder and CEO of Montana's Frontier Institute, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss Montana's groundbreaking Right to Compute Act and how Montana hopes to protect access to AI and related technologies. They cover the history and reach of the Act and why other states may want to follow Montana's lead.