

The Foresight Institute Podcast

Jason Crawford, Roots of Progress | Sustainable Technological Futures | Xhope Worldbuilding Course

Jason Crawford is the founder of The Roots of Progress, where he writes and speaks about the history of technology and the philosophy of progress. Previously, he spent 18 years as a software engineer, engineering manager, and startup founder.


Worldbuilding Course

This session is a part of THE WORLDBUILDING CHALLENGE: CO-CREATING THE WORLD OF 2045. In this virtual and interactive course, we engage with the most pressing global challenges of our age—climate change, the risks of AI, and the complex ethical questions arising in the wake of new technologies. Our aim is to sharpen participants’ awareness and equip them to apply their skills to these significant and urgent issues.


Existential Hope

Existential Hope was created to collect positive and possible scenarios for the future so that we can have more people commit to creating a brighter future, and to begin mapping out the main developments and challenges that need to be navigated to reach it. Existential Hope is a Foresight Institute project.


Hosted by Allison Duettmann and Beatrice Erkers


Follow Us: Twitter | Facebook | LinkedIn | Existential Hope Instagram


Explore every word spoken on this podcast through Fathom.fm.

More episodes


  • Existential Hope Worldbuilding: 1st place | Cities of Orare

    54:29
    This episode features an interview with the 1st-place winners of our 2045 Worldbuilding Challenge!

    Why Worldbuilding?
    We consider worldbuilding an essential tool for creating inspiring visions of the future that can help drive real-world change. Worldbuilding helps us explore crucial 'what if' questions by constructing detailed scenarios that prompt us to ask: what actionable steps can we take now to realize these desirable outcomes?

    Cities of Orare – our 1st-place winners
    Cities of Orare imagines a future where AI-powered prediction markets called Orare amplify collective intelligence, enhancing liberal democracy, economic distribution, and policy-making. Their adoption across Africa and globally has fostered decentralized governance, democratized decision-making, and spurred significant health and economic advancements.

    Read more about the 2045 world of Cities of Orare: https://www.existentialhope.com/worlds/beyond-collective-intelligence-cities-of-orare
    Access the Worldbuilding Course: https://www.existentialhope.com/existential-hope-worldbuilding
  • Christian Schroeder de Witt | Secret Collusion Among Generative AI Agents: Toward Multi-Agent Security

    49:22
    Christian is a researcher in foundational AI, information security, and AI safety, with a current focus on the limits of undetectability. He is a pioneer in the field of Multi-Agent Security (masec.ai), which aims to overcome the safety and security issues inherent in contemporary approaches to multi-agent AI. His recent work includes a breakthrough result on the 25+ year-old problem of perfectly secure steganography (jointly with Sam Sokota), which was featured by Scientific American, Quanta Magazine, and Bruce Schneier's security blog.

    Key Highlights
    How do we design autonomous systems and environments in which undetectable actions cannot cause unacceptable damage? Christian argues that the ability of advanced AI agents to use perfect stealth will soon be AI safety's biggest concern. In this talk, he focuses on steganographic collusion among generative AI agents.

    About Foresight Institute
    Foresight Institute is a non-profit that supports the beneficial development of high-impact technologies. Since our founding in 1987 on a vision of guiding powerful technologies, we have evolved into a many-armed organization that focuses on several fields of science and technology too ambitious for legacy institutions to support.

    Allison Duettmann
    The President and CEO of Foresight Institute, Allison Duettmann directs the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Programs, alongside Fellowships, Prizes, and Tech Trees. She has also been pivotal in co-initiating the Longevity Prize, pioneering initiatives like Existentialhope.com, and contributing to notable works like "Superintelligence: Coordination & Strategy" and "Gaming the Future".

    Get Involved with Foresight:
    Apply: Virtual Salons & in-person Workshops
    Donate: Support Our Work – if you enjoy what we do, please consider donating, as we are entirely funded by your donations!
  • Existential Hope Worldbuilding: 2nd Place | Rising Choir

    51:36
    This episode features an interview with the 2nd-place winners of our 2045 Worldbuilding Challenge!

    Rising Choir – our 2nd-place winners
    Rising Choir envisions a 2045 where advanced AI and robotics are seamlessly integrated into everyday life, enhancing productivity and personal care. The V.O.I.C.E. system revolutionizes communication and democratic participation, fostering a sense of inclusion across all levels of society. Energy abundance, driven by solar and battery advancements, addresses the challenges of climate change, while the presence of humanoid robots in every household marks a new era of economic output and personal convenience.

    Read more about the 2045 world of Rising Choir: https://www.existentialhope.com/worlds/rising-choir-a-symphony-of-clashing-voices
    Access the Worldbuilding Course: https://www.existentialhope.com/existential-hope-worldbuilding
  • Evelyne Yehudit Bischof | Healthy Longevity Medicine in the Clinical Practice

    53:13
    Evelyne Yehudit Bischof, MD, MPH, FEFIM, FMH is Associate Professor at the Shanghai University of Medicine & Health Sciences, Chief Associate Physician of Internal Medicine and Oncology at the University Hospital Renji of Shanghai Jiao Tong University School of Medicine, and Emergency Medicine Physician at the Shanghai East International Medical Center. She is a specialist in Internal Medicine with a research focus on artificial intelligence and digital health.

    Key Highlights
    Longevity medicine is an AI- and data-driven field, evolving from precision medicine, lifestyle medicine, and geroscience, that aims to extend patients' healthy lifespan. Using biomarkers of aging, aging clocks, and continuous data monitoring, longevity physicians can bring a patient's health from "within norms" to "optimal" or even peak performance.
  • Existential Hope Worldbuilding: 3rd Place | FloraTech

    54:31
    This episode features an interview with the 3rd-place winners of our 2045 Worldbuilding Challenge!

    FloraTech – our 3rd-place winners
    In the world of 2045, a network of bounded AI agents, imbued with robust ethical constraints and specialized capabilities, has become the backbone of a thriving, harmonious global society. These AI collaborators have unlocked unprecedented possibilities for localized, sustainable production of goods and services, empowering communities to meet their needs through advanced manufacturing technologies and smart resource allocation.

    Read more about the 2045 world of FloraTech: https://www.existentialhope.com/worlds/floratech-2045-co-evolving-with-technology-for-collective-flourishing
    Access the Worldbuilding Course: https://www.existentialhope.com/existential-hope-worldbuilding
  • 26. Existential Hope: Stuart Buck | What is Good and Bad Science

    48:31
    Speaker
    Stuart Buck is the Executive Director of the Good Science Project and a Senior Advisor at the Social Science Research Council. Formerly, he was Vice President of Research at Arnold Ventures. His efforts to improve research transparency and reproducibility have been featured in Wired, The New York Times, The Atlantic, Slate, The Economist, and more. He has advised DARPA, IARPA (the intelligence community's research arm), the Department of Veterans Affairs, and the White House Social and Behavioral Sciences Team on rigorous research processes, and has published in top journals (such as Science and BMJ) on how to make research more accurate.

    Session Summary
    Working in the field of metascience, Stuart cares deeply about who gets funding and how, the bureaucracy engulfing researchers everywhere, how we can fund more innovative science, ensuring results are reproducible and true, and much more. Among other things, he has funded renowned work showing that scientific research is often irreproducible, including the Reproducibility Projects in Psychology and Cancer Biology.

    Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts
  • Tom Burns, Cornell University | At the Interface of AI Safety and Neuroscience

    51:33
    Speaker
    Tom Burns graduated with First Class Honours in his Bachelor of Science from Monash University in 2013, exploring the mathematical features of human perception of non-linguistic sounds. Shifting to philosophy, he completed a Master of Bioethics in 2014, analyzing the ethics of euthanasia legislation in Australia, followed by a World Health Organization Bioethics Fellowship in 2015, contributing to the Ebola epidemic response. In 2023, as a Fall Semester Postdoc at Brown University's ICERM, he contributed to the 'Math + Neuroscience' program. Recently affiliated with Timaeus, an AI safety organization, Tom has continued his research at Cornell University's new SciAI Center since March 2024.

    Session Summary
    Neuroscience is a burgeoning field with many opportunities for novel research directions. Due to experimental and physical limitations, however, theoretical progress relies on imperfect and incomplete information about the system. Artificial neural networks, for which perfect and complete information is possible, therefore offer those trained in the neurosciences an opportunity to study intelligence at a level of granularity unattainable in biological systems, while remaining relevant to them. Additionally, applying neuroscience methods, concepts, and theory to AI systems offers a relatively under-explored avenue to make headway on the daunting challenges posed by AI safety, both for present-day risks, such as enshrining biases and spreading misinformation, and for future risks, including on existential scales.

    In this talk, Tom presents two emerging examples of interactions between neuroscience and AI safety. In the direction of ideas from neuroscience being useful for AI safety, he demonstrates how associative memory has become a tool for interpretability of Transformer-based models. In the opposite direction, he discusses how statistical learning theory and the developmental interpretability research program can help explain neuroscience phenomena such as perceptual invariance and representational drift.

    Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts
  • Robert Lempert, RAND | On the How & Why of Worldbuilding | Xhope Worldbuilding Course

    26:29
    Robert Lempert (Ph.D., Applied Physics, Harvard University) is a senior physical scientist at RAND. He is the director of the Frederick S. Pardee Center for Longer Range Global Policy and the Future Human Condition, a principal researcher at RAND, and a professor of policy analysis at the Pardee RAND Graduate School. His research focuses on risk management and decision-making under conditions of deep uncertainty. Lempert's work aims to advance the state of the art for organizations managing risk amid today's fast-paced, transformative, and surprising change, and to help organizations adopt these approaches so that proper stewardship of the future becomes more commonly practised.