{"version":"1.0","type":"rich","provider_name":"Acast","provider_url":"https://acast.com","height":250,"width":700,"html":"<iframe src=\"https://embed.acast.com/$/64ece51b02b9ed00119b7f58/654eb9a446e5c90011926e34?\" frameBorder=\"0\" width=\"700\" height=\"250\"></iframe>","title":"AI Safety Summit: Cracking the code of risk mitigation 🔴 AINTS 012","thumbnail_width":200,"thumbnail_height":200,"thumbnail_url":"https://open-images.acast.com/shows/64ece51b02b9ed00119b7f58/1699653515792-817896709034c671e581ea76a952f868.jpeg?height=200","description":"<p>This week, Tristan and Tasia unpack the highlights and key outcomes from the UK's recent AI Safety Summit, where global leaders, tech giants, and AI researchers convened to forge the Bletchley Declaration. We dive into the debate over open-source AI, the commitments from major tech companies to preemptively test AI products, and the challenges of aligning international AI regulations, particularly around frontier AI. Join us as we discuss whether these collaborative efforts to secure a safe future with AI will actually be effective, given various competing geopolitical priorities and relentless technological advancements. Also: another Microsoft AI whoopsie-doopsie. 😬</p><p><br></p><p>FOLLOW</p><ul><li><a href=\"https://www.ainamedthisshow.com/links\" rel=\"noopener noreferrer\" target=\"_blank\"><em>AI Named This Show</em></a></li><li><a href=\"https://www.ainamedthisshow.com/about\" rel=\"noopener noreferrer\" target=\"_blank\">Tristan &amp; Tasia</a></li><li><a href=\"https://www.ainamedthisshow.com/podcast\" rel=\"noopener noreferrer\" target=\"_blank\"><em>AI Named This Show</em>&nbsp;podcast</a></li></ul><p>FOLLOW-UP</p><ul><li><a href=\"https://ow.ly/I0LY50Q6AQG\" rel=\"noopener noreferrer\" target=\"_blank\">Watermarks aren’t the silver bullet for AI misinformation</a></li><li><a href=\"https://ow.ly/Uf8g50Q6AQA\" rel=\"noopener noreferrer\" target=\"_blank\">NIST Opens Application Phase for U.S. AI Safety Institute Consortium</a></li><li><a href=\"https://ow.ly/SYIo50Q6AQm\" rel=\"noopener noreferrer\" target=\"_blank\">Geoffrey Hinton: \"I suspect that Andrew Ng and Yann LeCun have missed the main reason why the big companies want regulations.\"</a></li></ul><p>AI NEWS</p><ul><li><a href=\"https://ow.ly/vnUh50Q6AQb\" rel=\"noopener noreferrer\" target=\"_blank\">Microsoft AI inserted a distasteful poll into a news report about a woman’s death</a></li></ul><p>AI SAFETY SUMMIT</p><ul><li><a href=\"https://ow.ly/W2tm50Q6ARV\" rel=\"noopener noreferrer\" target=\"_blank\">World leaders are gathering at the U.K.’s AI Summit. Doom is on the agenda.</a></li><li><a href=\"https://ow.ly/839f50Q6AO2\" rel=\"noopener noreferrer\" target=\"_blank\">Dire warnings dominate world’s first AI Safety Summit</a></li><li><a href=\"https://ow.ly/9HG650Q6AQ0\" rel=\"noopener noreferrer\" target=\"_blank\">AI Safety Summit | AISS 2023</a></li><li><a href=\"https://ow.ly/4nFj50Q6AOh\" rel=\"noopener noreferrer\" target=\"_blank\">AI Safety Summit: introduction</a></li><li><a href=\"https://ow.ly/bUsi50Q6AOo\" rel=\"noopener noreferrer\" target=\"_blank\">Britain publishes 'Bletchley Declaration' on AI safety</a></li><li><a href=\"https://ow.ly/tHWc50Q6ANv\" rel=\"noopener noreferrer\" target=\"_blank\">The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023</a></li><li><a href=\"https://ow.ly/xLrS50Q6ANV\" rel=\"noopener noreferrer\" target=\"_blank\">Analysis: AI summit a start but global agreement a distant hope</a></li><li><a href=\"https://ow.ly/yyNz50Q6AOt\" rel=\"noopener noreferrer\" target=\"_blank\">Why China’s Involvement in the U.K. AI Safety Summit Was So Significant</a></li><li><a href=\"https://ow.ly/aMNH50Q6ANI\" rel=\"noopener noreferrer\" target=\"_blank\">Five takeaways from UK’s AI safety summit at Bletchley Park</a></li><li><a href=\"https://ow.ly/wyAK50Q6APf\" rel=\"noopener noreferrer\" target=\"_blank\">AI Safety Summit by Kevin Kallaugher (KAL)</a> of <a href=\"https://www.kaltoons.com/\" rel=\"noopener noreferrer\" target=\"_blank\">Kaltoons</a></li><li><a href=\"https://ow.ly/eMaa50Q6AOI\" rel=\"noopener noreferrer\" target=\"_blank\">Bletchley Park: Five facts about the UK's AI Safety Summit venue</a></li></ul><p>AI SAFETY PRIMERS</p><ul><li><a href=\"https://ow.ly/a5b650Q6AS8\" rel=\"noopener noreferrer\" target=\"_blank\">What is AI Safety?</a></li><li><a href=\"https://ow.ly/H0Hm50Q6ASc\" rel=\"noopener noreferrer\" target=\"_blank\">Key Concepts in AI Safety: An Overview</a></li><li><a href=\"https://ow.ly/Tkjg50Q6ASg\" rel=\"noopener noreferrer\" target=\"_blank\">A Primer on AI Safety</a></li></ul>","author_name":"Tasia Custode, Tristan Jutras"}