
The Existential Hope Podcast

How the whole world can exceed Swiss living standards by 2100 (backed by data)

What would the world look like if the poorest country were as rich as Switzerland is today? It turns out we could actually see that happen by 2100, with economic growth similar to what we have experienced over the past 20 years.

In this episode, we talk with Marc Canal, Senior Fellow at the McKinsey Global Institute and co-author of the book A Century of Plenty. We unpack what a hundred years of data tell us about human progress, and map out the steps toward an ambitious scenario we could realize by the end of the century.

We discuss:

  • How much the world has actually changed since 1925: from one in five children dying before age five in Spain, to life expectancy growing by 40 years globally.
  • What it would take to make today’s Swiss living standards the world’s floor by 2100 (while richer countries grow far beyond it), from energy efficiency to birth rates and geopolitics.
  • How data shows economic growth is actually good for the climate and for human happiness.
  • Why achieving a prosperous world currently depends more on our collective belief that progress is possible than on resource constraints.
  • How you can thrive in an AI world, where 57% of work hours can be automated, by leaning into the “messy” jobs.


Timestamps:

0:00 - Cold open

1:54 - Why the McKinsey Global Institute wrote “A Century of Plenty” 

5:20 - What was the world like in 1925? 

10:04 - The most surprising stats from 100 years of progress

16:03 - Defining the “empowerment line” vs. the poverty line

19:30 - Projecting 2100: can we make Switzerland the global “floor”?

22:26 - The 5 conditions for achieving a world of plenty

26:14 - Can we grow the economy without sacrificing the environment?

28:23 - Economic growth vs. climate change: mitigation and adaptation 

34:05 - What are the biggest challenges to the “progress machine”? 

36:30 - The demographic crisis, and solving falling fertility rates

45:20 - Will AI speed up human innovation?

48:21 - Geopolitics: is the world really de-globalizing? 

52:30 - The crisis of hope: why are we so pessimistic?

56:26 - How different nations reach the frontier of progress

58:49 - Building a new culture of growth

1:01:09 - Does economic progress actually make us happier?

1:05:39 - How you can help make a century of plenty probable

On the Existential Hope Podcast, hosts Allison Duettmann and Beatrice Erkers of the Foresight Institute invite scientists, founders, and philosophers for in-depth conversations on positive, high-tech futures.


Full transcript, listed resources, and more: https://www.existentialhope.com/podcasts

