{"version":"1.0","type":"rich","provider_name":"Acast","provider_url":"https://acast.com","height":250,"width":700,"html":"<iframe src=\"https://embed.acast.com/$/8cf4cec7-5a0f-49c5-8ec9-36941b5c6b6e/69691e412f4375874a59d95b?\" frameBorder=\"0\" width=\"700\" height=\"250\"></iframe>","title":"Grok, deepfakes and who should police AI","thumbnail_width":200,"thumbnail_height":200,"thumbnail_url":"https://open-images.acast.com/shows/61ba0b311a8cbef1d93cf121/1768496348613-d88f3e0d-aa75-459c-96c8-22567cb07d66.jpeg?height=200","description":"<p>What happens when AI gets it wrong? After a backlash over the misuse of Elon Musk’s AI tool Grok, new restrictions have been imposed on editing images of real people. Is this a sign that AI regulation is lagging, and who should be in charge – governments or Silicon Valley? This week, Danny and Katie are joined by AI computer scientist Kate Devlin from King’s College London to discuss why this moment could be a turning point for global AI rules.</p><p><strong>Image: </strong>Getty</p>","author_name":"The Sunday Times"}