Latest episode
Matt Perault, Ramya Krishnan, and Alan Rozenshtein Talk About the TikTok Divestment and Ban Bill
50:32|Last week the House of Representatives overwhelmingly passed a bill that would require ByteDance, the Chinese company that owns the popular social media app TikTok, to divest its ownership in the platform or face TikTok being banned in the United States. Although prospects for the bill in the Senate remain uncertain, President Biden has said he will sign the bill if it comes to his desk, making this the most serious attempt yet to ban the controversial social media app.

Today's podcast is the latest in a series of conversations we've had about TikTok. Matt Perault, the Director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, led a conversation with Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Senior Editor at Lawfare, and Ramya Krishnan, a Senior Staff Attorney at the Knight First Amendment Institute at Columbia University. They talked about the First Amendment implications of a TikTok ban, whether it's a good idea as a policy matter, and how we should think about foreign ownership of platforms more generally.

Disclaimer: Matt's center receives funding from foundations and tech companies, including funding from TikTok.
More episodes
Jawboning at the Supreme Court
51:38|Today, we’re bringing you an episode of Arbiters of Truth, our series on the information ecosystem.

On March 18, the Supreme Court heard oral arguments in Murthy v. Missouri, concerning the potential First Amendment implications of government outreach to social media platforms—what’s sometimes known as jawboning. The case arrived at the Supreme Court with a somewhat shaky evidentiary record, but the legal questions raised by government requests or demands to remove online content are real. To make sense of it all, Lawfare Senior Editor Quinta Jurecic and Matt Perault, the Director of the Center on Technology Policy at UNC-Chapel Hill, called up Alex Abdo, the Litigation Director of the Knight First Amendment Institute at Columbia University. While the law is unsettled, the Supreme Court seemed skeptical of the plaintiffs’ claims of government censorship. But what is the best way to determine which contacts and government requests are and aren't permissible?

If you’re interested in more, you can read the Knight Institute’s amicus brief in Murthy here and Knight’s series on jawboning—including Perault’s reflections—here.

How Are the TikTok Bans Holding Up in Court?
49:27|In May 2023, Montana passed a new law that would ban the use of TikTok within the state starting on January 1, 2024. But as of today, TikTok is still legal in the state of Montana—thanks to a preliminary injunction issued by a federal district judge, who found that the Montana law likely violated the First Amendment. In Texas, meanwhile, another federal judge recently upheld a more limited ban against the use of TikTok on state-owned devices. What should we make of these rulings, and how should we understand the legal status of efforts to ban TikTok?

We’ve discussed the question of TikTok bans and the First Amendment before on the Lawfare Podcast, when Lawfare Senior Editor Alan Rozenshtein and Matt Perault, Director of the Center on Technology Policy at UNC-Chapel Hill, sat down with Ramya Krishnan, a staff attorney at the Knight First Amendment Institute at Columbia University, and Mary-Rose Papandrea, the Samuel Ashe Distinguished Professor of Constitutional Law at the University of North Carolina School of Law. In light of the Montana and Texas rulings, Matt and Lawfare Senior Editor Quinta Jurecic decided to bring the gang back together and talk with Ramya and Mary-Rose about where the TikTok bans stand, on this episode of Arbiters of Truth, our series on the information ecosystem.

Jeff Horwitz on Broken Code and Reporting on Facebook
53:58|In 2021, the Wall Street Journal published a monster scoop: a series of articles about Facebook’s inner workings, which showed that employees within the famously secretive company had raised alarms about potential harms caused by Facebook’s products. Now, Jeff Horwitz, the reporter behind that scoop, has a new book out, titled “Broken Code,” which dives even deeper into the documents he uncovered from within the company. He’s one of the most rigorous reporters covering Facebook, now known as Meta.

On this episode of Arbiters of Truth, our series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic sat down with Jeff along with Matt Perault, the Director of the Center on Technology Policy at UNC-Chapel Hill—and also someone with close knowledge of Meta from his own time working at the company. They discussed Jeff’s reporting and debated what his findings tell us about how Meta functions as a company and how best to understand its responsibilities for harms traced back to its products.

Will Generative AI Reshape Elections?
49:03|Unless you’ve been living under a rock, you’ve probably heard a great deal over the last year about generative AI and how it’s going to reshape various aspects of our society. That includes elections. With one year until the 2024 U.S. presidential election, we thought it would be a good time to step back and take a look at how generative AI might and might not make a difference when it comes to the political landscape. Luckily, Matt Perault and Scott Babwah Brennen of the UNC Center on Technology Policy have a new report out on just that subject, examining generative AI and political ads.

On this episode of Arbiters of Truth, our series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic and Lawfare’s Fellow in Technology Policy and Law Eugenia Lostri sat down with Matt and Scott to talk through the potential risks and benefits of generative AI when it comes to political advertising. Which concerns are overstated, and which are worth closer attention as we move toward 2024? How should policymakers respond to new uses of this technology in the context of elections?

The Crisis Facing Efforts to Counter Election Disinformation
57:00|Over the course of the last two presidential elections, efforts by social media platforms and independent researchers to prevent falsehoods about election integrity from spreading have become increasingly central to civic health. But the warning signs are flashing as we head into 2024, and platforms are arguably in a worse position to counter falsehoods today than they were in 2020. How could this be?

On this episode of Arbiters of Truth, our series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic sat down with Dean Jackson, who previously joined the Lawfare Podcast to discuss his work as a staffer on the Jan. 6 committee. He worked with the Center for Democracy and Technology to put out a new report on the challenges facing efforts to prevent the spread of election disinformation. They talked through the political, legal, and economic pressures that are making this work increasingly difficult—and what it means for 2024.

Talking AI with Data and Society’s Janet Haven
46:22|Today, we’re bringing you an episode of Arbiters of Truth, our series on the information ecosystem. And we’re discussing the hot topic of the moment: artificial intelligence. There are a lot of less-than-informed takes out there about AI and whether it’s going to kill us all—so we’re glad to be able to share an interview that hopefully cuts through some of that noise.

Janet Haven is the Executive Director of the nonprofit Data and Society and a member of the National Artificial Intelligence Advisory Committee, which provides guidance to the White House on AI issues. Lawfare Senior Editor Quinta Jurecic sat down alongside Matt Perault, Director of the Center on Technology Policy at UNC-Chapel Hill, to talk through their questions about AI governance with Janet. They discussed how she evaluates the dangers and promises of artificial intelligence, how to weigh concerns about the possible future existential risks AI poses to society against its more immediate potential downsides in our everyday lives, and what kind of regulation she’d like to see in this space. If you’re interested in reading further, Janet mentions this paper from Data and Society on “Democratizing AI” in the course of the conversation.