{"version":"1.0","type":"rich","provider_name":"Acast","provider_url":"https://acast.com","height":250,"width":700,"html":"<iframe src=\"https://embed.acast.com/$/683f0798c966cde736234a29/6841ad6928f64e4b9926e6c0?\" frameBorder=\"0\" width=\"700\" height=\"250\"></iframe>","title":"How AI Can Accelerate Science & Its Own Adoption with Niklas Lundblad","description":"<p>In this episode of the Existential Hope Podcast, Niklas Berild Lundblad, a philosopher, researcher, and former policy lead at Google DeepMind, Google, and Stripe, explores the interplay between progress, complexity, and the transformative potential of artificial intelligence.</p><p>Niklas discusses why asking the right questions is crucial for navigating our future, especially as AI challenges our self-perception and introduces new forms of complexity. He examines the \"soft narcissism\" in AI development, the distinction between AI and AGI, and why we should view current AI not as a mirror but as a strange, exotic artifact whose full capabilities we are still underestimating.</p><p>In this conversation, we explore:</p><ul><li>The critical relationship between progress and complexity, and why managing this dynamic is essential for societal growth (including the \"Red Queen effect\").</li><li>Why current AI developments feel different from past tech hypes.</li><li>The potential for AI to revolutionize scientific discovery.</li><li>How AI could accelerate its own diffusion.</li><li>The need for curious regulators, mechanisms for change, the challenges of agentic AI, and how cultural biases might affect our approaches to regulation.</li><li>The Solow Paradox and the Gartner Hype Cycle as frameworks for understanding technology adoption.</li></ul>","author_name":"Foresight Institute"}