{"version":"1.0","type":"rich","provider_name":"Acast","provider_url":"https://acast.com","height":250,"width":700,"html":"<iframe src=\"https://embed.acast.com/$/60baafd7d3cdd0001b29d9ee/6553de80af09e000129a4981?\" frameBorder=\"0\" width=\"700\" height=\"250\"></iframe>","title":"Deepfakes and Human Subjects Protection with Aimee Nishimura","description":"<p>The use of deepfakes—media manipulated or generated with a form of artificial intelligence known as deep learning—is on the rise. In 2022, the U.S. military took a nearly unprecedented step by declaring its interest in deepfake technology for offensive purposes. But the Defense Department’s exploration of this technology poses privacy and ethics risks, especially with respect to human subjects research.</p><p>To unpack all of this and more, <em>Lawfare</em> Associate Editor Katherine Pompilio sat down with Aimee Nishimura, a Cyber Student Fellow at the Strauss Center for International Security and Law at UT Austin. Aimee recently published a&nbsp;<a href=\"https://www.lawfaremedia.org/article/human-subjects-protection-in-the-era-of-deepfakes\" rel=\"noopener noreferrer\" target=\"_blank\">piece on <em>Lawfare</em></a>, titled “Human Subjects Protection in the Era of Deepfakes.” They discussed the significant dangers posed by deepfakes, how the Defense Department can support the protection of human subjects in its research on the technology, and more.</p>","author_name":"The Lawfare Institute"}