Latest episode

18. Episode 18 - Why AI needs a new kind of supercomputer network
37:38 | Ep. 18

Training frontier models isn’t as simple as adding more GPUs—one small problem and the whole coordinated dance falls apart. OpenAI’s Mark Handley and Greg Steinbrecher discuss how a new supercomputer network design, used to train some of the company’s latest models, keeps the whole system moving in lockstep, even with record numbers of GPUs. They break down Multipath Reliable Connection, a new protocol OpenAI developed with AMD, Broadcom, Intel, Microsoft, and Nvidia, and why they’re making it available for the whole industry to use.

Chapters
00:00 Intro
00:39 Greg and Mark's paths to OpenAI
04:34 Why training AI stresses networks differently
10:05 Bottlenecks, failures, and the cost of waiting
15:19 How Multipath Reliable Connection works
18:59 A protocol to route around failures
25:05 Why OpenAI is making MRC an open standard
35:09 Could AI compute move to space?
More episodes

17. Episode 17 - What happens now that AI is good at math?
43:28 | Ep. 17

Math is one of the clearest ways to see how far AI has come in a short span. OpenAI researchers Sébastien Bubeck and Ernest Ryu join host Andrew Mayne to explain what changed and what it could mean for the future of research. They reflect on how Ernest used ChatGPT to help solve a 42-year-old open problem, the difference between deep literature search and original mathematical discovery, and what changes when AI can work over longer timelines.

Chapters
01:27 The surprising progress of AI’s math capabilities
03:01 Solving an open problem with ChatGPT
06:57 How models went from basic math to research level
11:32 Why math matters for AGI
14:26 AI and the Erdős problems
21:26 Building an automated researcher
28:19 The role of humans as models improve
33:52 Verifying proofs with AI
36:00 The risk of shallow understanding
41:19 Advice for learning math with ChatGPT
16. Episode 16 - Building AI for Life Sciences
44:25 | Ep. 16

What does it take to build AI systems that can actually help scientists? Research lead Joy Jiao and product lead Yunyun Wang discuss how OpenAI is developing models for life sciences and what responsible deployment means in a field with real biosecurity stakes. They explore how AI is already improving research workflows and where it could lead in drug discovery and more autonomous labs — including why a future with less pipetting sounds pretty good to most scientists.

Chapters
0:39 Introducing the Life Sciences model series
3:47 Joy’s path into life sciences
5:00 Autonomous lab with Ginkgo Bioworks
7:27 Yunyun’s path into life sciences
8:12 OpenAI’s life sciences work
9:48 Biorisk, access, and safeguards
15:43 What models can do in the lab
17:51 Building scientific infrastructure
20:14 Why compute matters for science
24:54 Where are we in 6-12 months?
29:51 Scientific adoption and skepticism
33:17 Advice for students and researchers
40:27 Where are we in 10 years?
15. Episode 15 - Inside the Model Spec
37:26 | Ep. 15

The more AI can do, the more we need to ask what it should and shouldn’t do. In this episode, OpenAI researcher Jason Wolfe joins host Andrew Mayne to talk about the Model Spec, the public framework that defines intended model behavior. They discuss how the Model Spec works in practice, including how the chain of command handles conflicts between instructions, and how OpenAI evolves it based on feedback, real-world use, and new model capabilities.

Chapters
00:00 Introduction
01:10 What is the Model Spec?
03:55 How does the Model Spec work in practice?
06:26 Transparency: Where to read the Model Spec & give feedback
07:51 How did the Model Spec originate?
10:02 How does the spec translate into model behavior?
11:26 What is the hierarchy / chain of command?
13:35 Handling edge cases like Santa Claus
17:41 How does the Model Spec evolve over time?
19:59 What happens when models disagree with the spec?
22:05 How do smaller models follow the spec?
23:16 Is chain-of-thought useful for alignment?
24:16 Model Spec vs Anthropic’s Constitution
26:28 What surprised you most?
26:56 How do you define the scope of the spec?
27:44 What is the future of the Model Spec?
31:16 How should developers think about the spec?
34:44 Asimov’s laws vs Model Spec
37:16 Could AI write a Human Spec?
14. Episode 14 - Building AI for better healthcare
30:54 | Ep. 14

Healthcare systems around the world are under strain, and both patients and clinicians are feeling the impact. OpenAI's Head of Health Dr. Nate Gross and Karan Singhal, who leads Health AI Research, discuss how AI can help address the biggest challenges. They cover how OpenAI is training models to handle sensitive health questions in collaboration with physicians, and how that foundation is unlocking a new generation of tools for patients, clinicians, and healthcare systems.

Chapters
00:00:38 – Origins of Nate and Karan’s interest in AI and healthcare
00:05:01 – Strategy for building AI tools for clinicians
00:06:57 – How AI models are trained for health use cases
00:10:15 – How OpenAI is able to score well on health evals
00:14:21 – Key challenges deploying AI in healthcare
00:21:05 – Collaboration with hospitals and healthcare systems
00:23:05 – Practical everyday uses of AI health assistants
00:26:43 – Biggest “wow” moment during development
00:28:46 – Feedback from clinicians and early users
13. Episode 13 - The Thinking Behind Ads in ChatGPT
25:34 | Ep. 13

How should advertising work in an AI product? Asad Awan, one of the ad leads at OpenAI, walks through how the company is approaching this decision and why it’s testing ads in ChatGPT at all. He explains how ads are built to stay separate from the model response, keep conversations with ChatGPT private from advertisers, and give people control over their experience.

Chapters
00:00:29 — Mission and principles
00:04:01 — Separation between ads and answers
00:07:31 — Who will see ads
00:08:52 — Internal input and decision-making process
00:11:06 — Controls and how ads will work
00:15:53 — Guardrails for sensitive conversations
00:17:33 — Skepticism about ads
00:20:26 — Helping small businesses
00:24:13 — Future of ads
12. Episode 12 - State of the AI Industry
49:41 | Ep. 12

OpenAI CFO Sarah Friar and Khosla Ventures founder Vinod Khosla argue the greatest challenges in AI right now are keeping up with demand and making sure more people get the benefit. They unpack what's driving big investments in compute and why this moment is different from other technology cycles — with meaningful advances in health, agents, and robotics still ahead.

Chapters
00:00:00 — What’s the AI story of 2026?
00:07:28 — AI in healthcare
00:12:01 — Scaling compute to match revenue
00:18:05 — Difference between now and dot-com bubble
00:27:41 — Ads in ChatGPT
00:30:05 — Will consumers have more than one AI subscription?
00:36:41 — Winning in enterprise
00:39:44 — How can startups succeed?
00:44:05 — Robotics and beyond