{"version":"1.0","type":"rich","provider_name":"Acast","provider_url":"https://acast.com","height":250,"width":700,"html":"<iframe src=\"https://embed.acast.com/$/683f0798c966cde736234a29/6995b529e1d8773119a5eeeb?\" frameBorder=\"0\" width=\"700\" height=\"250\"></iframe>","title":"How your personal moral compass helps you build a better world | SJ Beard","description":"<p>To make the future go well, we might not need a perfect model for its end state, or an abstract philosophical theory to guide us. Can your own sense of “the right thing to do” actually help make the world better?</p><p>In this episode we talk with SJ Beard, researcher at the Centre for the Study of Existential Risk, and author of the book “Existential Hope”.</p><p>Some of the topics we discuss:</p><ul><li>How to shift our focus from \"preventing the end of the world\" to actively building a future worth living.</li><li>Why aiming for a “happy ever after” state of the world might be dangerous, and why improving the world one generation at a time is less likely to backfire.</li><li>Relying on our own sense of “the right thing to do” as a practical guide to make the world better.</li><li>Why decisions about AI and global risk need input from a broad mix of people and their real-world experiences, not just experts at the top.</li><li>Why building AI with compassion and curiosity about human values may be safer than giving it a rigid list of rules to follow.</li></ul><p><br></p><p>Timestamps:</p><p>[01:31] SJ’s background in philosophy and existential risk</p><p>[02:02] Why write a book on existential hope?</p><p>[04:43] Defining existential hope, and its relationship with existential risks and existential anxiety</p><p>[11:09] Human agency without the guilt</p><p>[13:59] Why there are no truly \"natural\" disasters</p><p>[16:49] Why we shouldn’t try to build a perfect utopia</p><p>[19:05] Protopia: is iterative improvement enough?</p><p>[22:19] Defining progress: what does it mean to \"get better\"?</p><p>[26:13] Protopia vs. viatopia: setting goals and achieving a great future</p><p>[29:48] Existential safety as a collective project</p><p>[35:06] Using participatory tools to make global decisions</p><p>[36:32] Making existential hope reasonably demanding</p><p>[40:06] Can we achieve systemic change in a tech-focused world?</p><p>[46:00] Concrete socio-technical projects for AI safety</p><p>[49:02] Aligning AI by building its character</p><p>[51:45] The importance of history in building a good future</p><p>[54:24] Key 17th-century ideas that are shaping modern society</p><p>[58:20] Cultivating \"humanity as a virtue\"</p><p>[01:04:37] Lessons from nuclear near-misses: the example of Petrov</p><p>[01:09:20] The trade-offs of a humanistic, bottom-up approach to decision-making</p><p>[01:12:16] Literacy vs. orality: how ideas become simplified</p><p>[01:16:45] Meme culture and the transmission of deep context</p><p>[01:18:48] How writing the book changed SJ’s mind</p><p>[01:21:38] SJ Beard’s vision for existential hope</p>","author_name":"Foresight Institute"}