The Foresight Institute Podcast

Casey Handmer | Why Mars is Hard

Season 1

Casey Handmer is the founder of Terraform Industries, a company dedicated to gigascale atmospheric hydrocarbon synthesis. The synthetic fuels it produces promise to be a cheaper and cleaner alternative to those pumped from traditional oil wells.


An astrophysicist, innovator, and Mars aficionado, Casey guides us through the intricate landscape of Martian exploration. This presentation explores the scientific, technological, and logistical hurdles that make Mars such a challenging destination, with Casey offering an insider’s perspective on the journey toward making human life sustainable on Mars.


Dive deeper into the session: Full Summary


About Foresight Institute

Foresight Institute is a research organization and non-profit that supports the beneficial development of high-impact technologies. Since our founding in 1987 on a vision of guiding powerful technologies, we have continued to evolve into a many-armed organization that focuses on several fields of science and technology that are too ambitious for legacy institutions to support.


Allison Duettmann

The President and CEO of Foresight Institute, Allison Duettmann directs the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Programs, alongside Fellowships, Prizes, and Tech Trees. She has also been pivotal in co-initiating the Longevity Prize, pioneering initiatives like Existentialhope.com, and contributing to notable works like "Superintelligence: Coordination & Strategy" and "Gaming the Future".


Get Involved with Foresight:


Follow Us: Twitter | Facebook | LinkedIn


Note: Explore every word spoken on this podcast through Fathom.fm, an innovative podcast search engine.


More episodes

  • Siméon Campos | Governing AI for Good

    52:15 | Season 2
    Speaker: Siméon Campos is the president and founder of SaferAI, an organization developing the infrastructure for general-purpose AI auditing and risk management. He has worked on large language models for the past two years and is highly committed to making AI safer.

    Session Summary: “I think safe AGI can both prevent a catastrophe and offer a very promising pathway into a eucatastrophe.” This week we are dropping a special episode of the Existential Hope podcast, where we sit down with Siméon Campos, president and founder of SaferAI and a Foresight Institute fellow in the Existential Hope track. Siméon shares his experience working on AI governance, discusses the current state and future of large language models, and explores crucial measures needed to guide AI for the greater good.

    Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts
  • Dean Woodley Ball | Humanity’s Next Leap: Thoughts on the Frontiers of Neural Technology

    49:56
    Speaker: Dean Woodley Ball is a Research Fellow at George Mason University’s Mercatus Center and author of the Substack Hyperdimensional. His work focuses on artificial intelligence, emerging technologies, and the future of governance. Previously, he was Senior Program Manager for the Hoover Institution’s State and Local Governance Initiative.

    Key Highlights: Drawing on the neuroscience and machine learning literatures, this talk examines how technologies such as virtual reality, large language models, AI agents, neurostimulation, and neuromonitoring may converge in the coming decade into the first widespread consumer neural technology, addressing technical feasibility, public policy, and broader societal implications. “In terms of the challenge, I think the big one for me is probably building the datasets we’ll need for the foundational AI models undergirding all of this.”
  • Dana Watt | A Neuroscientist's Guide to Starting a Company

    52:09 | Season 1
    Speaker: Dr. Dana Watt is an investment associate at Ascension Ventures, an investment firm specializing in healthcare technology. She previously co-founded and served as CSO of Pro-Arc Diagnostics, a personalized medicine company based in St. Louis.

    Key Highlights: Watt discusses her career journey and offers insights into venture capital investing in neuroscience and neurotech companies. She explains her role as a VC, which involves making profitable investments, underwriting risk, and structuring deals. She highlights key attributes of venture-backable companies, such as exceptional teams, large addressable markets, defensibility, and differentiation, and discusses challenges and biases in neuroscience investing, including the complexity of brain science, hardware difficulties, long clinical timelines, and subtle readouts.
  • 28. Existential Hope Podcast: James Pethokoukis | Conservatism Meets Futurism

    51:16 | Season 2, Ep. 28
    Speaker: James Pethokoukis is a senior fellow and the DeWitt Wallace Chair at the American Enterprise Institute, where he analyzes US economic policy, writes and edits the AEIdeas blog, and hosts AEI’s Political Economy podcast. He is also a contributor to CNBC and writes the Faster, Please! newsletter on Substack. He is the author of The Conservative Futurist: How to Create the Sci-Fi World We Were Promised (Center Street, 2023) and has written for many publications, including the Atlantic, Commentary, the Financial Times, Investor’s Business Daily, National Review, the New York Post, the New York Times, USA Today, and the Week.

    Session Summary: In this episode, James joins us to discuss his book, The Conservative Futurist, and his perspectives on technology and economic growth. James explores his background, the spectrum of ‘upwing’ (pro-progress) versus ‘downwing’ (anti-progress), and the role of technology in solving global challenges. He explains his reasoning for being pro-progress and pro-growth, and highlights the importance of positive storytelling and education in building a more advanced and prosperous world.

    Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts
  • Existential Hope: The Flourishing Foundation at the Transformative AI Hackathon

    58:43 | Season 2
    The Flourishing Foundation: In February 2024, we partnered with the Future of Life Institute on a hackathon to design institutions that can guide and govern the development of AI. The winner was the Flourishing Foundation, a team focused on our relationship with AI and other emerging technologies. They challenge innovators to envision and build life-centered products, services, and systems; specifically, to enable TAI-enabled consumer technologies that promote human well-being by developing new norms, processes, and community-driven ecosystems. At their core, they explore the question: “Can AI make us happier?”

    Connect: https://www.flourishing.foundation/

    Read about the hackathon: https://foresight.org/2024-xhope-hackathon/
  • 27. Existential Hope Podcast: Roman Yampolskiy | The Case for Narrow AI

    47:08 | Season 2, Ep. 27
    Speaker: Dr. Roman Yampolskiy holds a PhD from the Department of Computer Science and Engineering at the University at Buffalo, where he was a recipient of a four-year National Science Foundation IGERT (Integrative Graduate Education and Research Traineeship) fellowship. His main areas of interest are behavioral biometrics, digital forensics, pattern recognition, genetic algorithms, neural networks, artificial intelligence, and games, and he is the author of over 100 publications, including multiple journal articles and books.

    Session Summary: We discuss everything AI safety with Dr. Roman Yampolskiy. As AI technologies advance at a breakneck pace, the conversation highlights the pressing need to balance innovation with rigorous safety measures. Contrary to many other voices in the safety space, Yampolskiy argues for the necessity of keeping AI as narrow, task-oriented systems: “I’m arguing that it’s impossible to indefinitely control superintelligent systems.” Nonetheless, he is optimistic about the future capabilities of narrow AI, from politics to longevity and health.

    Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts
  • Existential Hope Worldbuilding: 1st Place | Cities of Orare

    54:29 | Season 2
    This episode features an interview with the 1st place winners of our 2045 Worldbuilding challenge!

    Why Worldbuilding? We consider worldbuilding an essential tool for creating inspiring visions of the future that can help drive real-world change. Worldbuilding helps us explore crucial ‘what if’ questions by constructing detailed scenarios that prompt us to ask: what actionable steps can we take now to realize these desirable outcomes?

    Cities of Orare, our 1st place winners: Cities of Orare imagines a future where AI-powered prediction markets called Orare amplify collective intelligence, enhancing liberal democracy, economic distribution, and policy-making. Its adoption across Africa and globally has fostered decentralized governance, democratized decision-making, and spurred significant health and economic advancements.

    Read more about the 2045 world of Cities of Orare: https://www.existentialhope.com/worlds/beyond-collective-intelligence-cities-of-orare

    Access the Worldbuilding Course: https://www.existentialhope.com/existential-hope-worldbuilding
  • Christian Schroeder de Witt | Secret Collusion Among Generative AI Agents: Toward Multi-Agent Security

    49:22 | Season 1
    Speaker: Christian is a researcher in foundational AI, information security, and AI safety, with a current focus on the limits of undetectability. He is a pioneer in the field of Multi-Agent Security (masec.ai), which aims to overcome the safety and security issues inherent in contemporary approaches to multi-agent AI. His recent work includes a breakthrough result on the 25+ year-old problem of perfectly secure steganography (jointly with Sam Sokota), which was featured by Scientific American, Quanta Magazine, and Bruce Schneier’s Security Blog.

    Key Highlights: How do we design autonomous systems and environments in which undetectable actions cannot cause unacceptable damage? Christian argues that the ability of advanced AI agents to use perfect stealth will soon be AI safety’s biggest concern. In this talk, he focuses on steganographic collusion among generative AI agents.
  • Existential Hope Worldbuilding: 2nd Place | Rising Choir

    51:36 | Season 2
    This episode features an interview with the 2nd place winners of our 2045 Worldbuilding challenge!

    Why Worldbuilding? We consider worldbuilding an essential tool for creating inspiring visions of the future that can help drive real-world change. Worldbuilding helps us explore crucial ‘what if’ questions by constructing detailed scenarios that prompt us to ask: what actionable steps can we take now to realize these desirable outcomes?

    Rising Choir, our 2nd place winners: Rising Choir envisions a 2045 where advanced AI and robotics are seamlessly integrated into everyday life, enhancing productivity and personal care. The V.O.I.C.E. system revolutionizes communication and democratic participation, fostering a sense of inclusion across all levels of society. Energy abundance, driven by solar and battery advancements, addresses climate change, while the presence of humanoid robots in every household marks a new era of economic output and personal convenience.

    Read more about the 2045 world of Rising Choir: https://www.existentialhope.com/worlds/rising-choir-a-symphony-of-clashing-voices

    Access the Worldbuilding Course: https://www.existentialhope.com/existential-hope-worldbuilding