
Disseminate
Pat Helland | Scalable OLTP in the Cloud: What’s the BIG DEAL? | #50
In this episode, we dive into the world of scalable OLTP (OnLine Transaction Processing) systems with Pat Helland. A seasoned expert in the field, Pat shares his insights on the critical role of isolation semantics in the scalability of OLTP systems — the "BIG DEAL" of the title. Examining the interface between OLTP databases and applications, particularly through the lens of RCSI (Read Committed Snapshot Isolation) SQL databases, Pat discusses the limits that current database architectures and application patterns place on scalability.
Through a thought experiment, Pat explores the asymptotic limits to scale for OLTP systems, challenging the status quo and envisioning a reimagined approach to building both databases and applications that enables scalability while adhering to established RCSI semantics. By showing how today's popular databases and common application patterns may unnecessarily hinder scalability, Pat hopes to spark discussion within the database community and open up new opportunities for advancing OLTP systems. Join us for this conversation with Pat Helland, where every insight shared could catalyze significant change in OLTP scalability.
Papers mentioned during the episode:
- Scalable OLTP in the Cloud: What’s the BIG DEAL?
- Autonomous Computing
- Decoupled Transactions
- Don't Get Stuck in the "Con" Game
- The Best Place to Build a Subway
- Building on Quicksand
- Side effects, front and center
- Immutability changes everything
- Is Scalable OLTP in the Cloud a solved problem?

29. Mateusz Gienieczko | AnyBlox: A Framework for Self-Decoding Datasets | #69
01:02:28 || Season 6, Ep. 29
In this episode of Disseminate: The Computer Science Research Podcast, host Dr. Jack Waudby is joined by Mateusz Gienieczko, PhD researcher at TU Munich and co-author of the VLDB Best Paper Award-winning paper AnyBlox.
They dive deep into a fundamental problem in modern data systems: why cutting-edge data encodings and file formats rarely make it from research into real-world systems — and how AnyBlox proposes a radical solution.
Mateusz explains the core idea of self-decoding data, where datasets ship with their own portable, sandboxed decoders, allowing any database system to read any encoding safely and efficiently. Built on WebAssembly, AnyBlox bridges the long-standing gap between database research and practice without sacrificing performance, portability, or security.
This episode is essential listening for database researchers, data engineers, system builders, and industry practitioners interested in the future of data formats, analytics performance, and making research matter in practice.
Links:
- Paper: https://www.vldb.org/pvldb/vol18/p4017-gienieczko.pdf
- GitHub: https://github.com/AnyBlox
- Mat's Homepage: https://v0ldek.com/
28. Xiangyao Yu | Disaggregation: A New Architecture for Cloud Databases | #68
42:12 || Season 6, Ep. 28
In this episode of Disseminate: The Computer Science Research Podcast, host Jack Waudby sits down with Xiangyao Yu (UW–Madison), one of the leading voices shaping the next generation of cloud-native databases.
We dive deep into disaggregation — the architectural shift transforming how modern data systems are built. Xiangyao breaks down:
- Why traditional shared-nothing databases struggle in cloud environments
- How separating compute and storage unlocks elasticity, scalability, and cost efficiency
- The evolution of disaggregated systems, from Aurora and Snowflake through to advanced pushdown processing and new modular services
- His team's research on reinventing core protocols like 2-phase commit for cloud-native environments
- Real-time analytics, HTAP challenges, and the Hermes architecture
- Where disaggregation goes next — indexing, query optimizers, materialized views, multi-cloud architectures, and more
Whether you're a database engineer, researcher, or a practitioner building scalable cloud systems, this episode gives a clear, accessible look into the architecture that's rapidly becoming the default for modern data platforms.
Links:
- Xiangyao Yu's Homepage
- Disaggregation: A New Architecture for Cloud Databases [VLDB'25]
27. Navid Eslami | Diva: Dynamic Range Filter for Var-Length Keys and Queries | #67
46:50 || Season 6, Ep. 27
In this episode of Disseminate: The Computer Science Research Podcast, Jack sits down with Navid Eslami, PhD researcher at the University of Toronto, to discuss his award-winning paper "DIVA: Dynamic Range Filter for Variable Length Keys and Queries", which earned Best Research Paper at VLDB.
Navid breaks down how range filters extend the power of traditional filters for modern databases and storage systems, enabling faster queries, better scalability, and theoretical guarantees. We dive into:
- How DIVA overcomes the limitations of existing range filters
- What makes it the "holy grail" of filtering for dynamic data
- Real-world integration in WiredTiger (the MongoDB storage engine)
- Future challenges in data distribution smoothing and hybrid filtering
Whether you're a database engineer, systems researcher, or student exploring data structures, this episode reveals how cutting-edge research can transform how we query, filter, and scale modern data systems.
Links:
- Diva: Dynamic Range Filter for Var-Length Keys and Queries [VLDB'25]
- Diva on GitHub
- Navid's LinkedIn
15. Adaptive Factorization in DuckDB with Paul Groß
51:15 || Season 7, Ep. 15
In this episode of the DuckDB in Research series, host Jack Waudby sits down with Paul Groß, PhD student at CWI Amsterdam, to explore his work on adaptive factorization and worst-case optimal joins - techniques that push the boundaries of analytical query performance.
Paul shares insights from his CIDR'25 paper "Adaptive Factorization Using Linear Chained Hash Tables", revealing how decades of database theory meet modern, practical system design in DuckDB. From hash table internals to adaptive query planning, this episode uncovers how research innovations are becoming part of real-world systems.
Whether you're a database researcher, engineer, or curious student, you'll come away with a deeper understanding of query optimization and the realities of systems engineering.
Links:
- Adaptive Factorization Using Linear-Chained Hash Tables
14. Parachute: Rethinking Query Execution and Bidirectional Information Flow in DuckDB - with Mihail Stoian
36:34 || Season 7, Ep. 14
In this episode of the DuckDB in Research series, host Jack Waudby sits down with Mihail Stoian, PhD student at the Data Systems Lab, University of Technology Nuremberg, to unpack the cutting-edge ideas behind Parachute, a new approach to robust query processing and bidirectional information passing in modern analytical databases.
We explore how Parachute bridges theory and practice, combining concepts from instance-optimal algorithms and semi-join filtering to boost performance in DuckDB, the in-process analytical SQL engine that's reshaping how research meets real-world data systems.
Mihail discusses:
- How Parachute extends semi-join filtering for two-way information flow
- The challenges of implementing research ideas inside DuckDB
- Practical performance gains on TPC-H and CEB workloads
- The future of adaptive query processing and research-driven system design
Whether you're a database researcher, systems engineer, or curious practitioner, this deep dive reveals how academic innovation continues to shape modern data infrastructure.
Links:
- Parachute: Single-Pass Bi-Directional Information Passing VLDB 2025 Paper
- Mihail's homepage
- Parachute's GitHub repo
13. Anarchy in the Database: Abigale Kim on DuckDB and DBMS Extensibility
46:24 || Season 7, Ep. 13
In this episode of the DuckDB in Research series, host Jack Waudby talks with Abigale Kim, PhD student at the University of Wisconsin–Madison and author of the VLDB 2025 paper "Anarchy in the Database: A Survey and Evaluation of DBMS Extensibility". They explore how database extensibility is reshaping modern data systems — and why DuckDB is emerging as the gold standard for safe, flexible, and high-performance extensions. Abigale shares the inside story of her research, the surprises uncovered when testing Postgres and DuckDB extensions, and what's next for extensibility and composable database design.
This episode is perfect for researchers, practitioners, and students interested in databases, systems design, and the interplay between academia and industry innovation.
Highlights:
- What "extensibility" really means in a DBMS
- How DuckDB compares to Postgres, MySQL, and Redis
- The rise of GPU-accelerated DuckDB extensions
- Why bridging research and engineering matters for the future of databases
Links:
- Anarchy in the Database: A Survey and Evaluation of Database Management System Extensibility VLDB 2025
- Rethinking Analytical Processing in the GPU Era
You can find Abigale at:
- X
- Bluesky
- Personal site
12. Recursive CTEs, Trampolines, and Teaching Databases with DuckDB - with Prof. Torsten Grust
51:05 || Season 7, Ep. 12
In this episode of the DuckDB in Research series, host Dr Jack Waudby talks with Professor Torsten Grust from the University of Tübingen. Torsten is one of the pioneers behind DuckDB's implementation of recursive CTEs.
In the episode they unpack:
- The power of recursive CTEs and how they turn SQL into a full-fledged programming language.
- The story behind adding recursion to DuckDB, including the USING KEY feature and the trampoline and TTL extensions emerging from Torsten's lab.
- How these ideas are transforming research, teaching, and even DuckDB's internal architecture.
- Why DuckDB makes databases exciting again — from classroom to cutting-edge systems research.
If you're into data systems, query processing, or bridging research and practice, this episode is for you.
Links:
- USING KEY in Recursive CTEs
- How DuckDB is USING KEY to Unlock Recursive Query Performance
- Trampoline-Style Queries for SQL
- U Tübingen Advent of Code
- A Fix for the Fixation on Fixpoints
- One WITH RECURSIVE is Worth Many GOTOs
- Torsten's homepage
- Torsten's X
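The episode's claim that recursive CTEs turn SQL into a full-fledged programming language can be sketched with a small, self-contained example. To keep it dependency-free, this sketch uses Python's built-in sqlite3 module rather than DuckDB (an assumption for illustration only — DuckDB's Python API accepts essentially the same `WITH RECURSIVE` syntax, minus extensions like `USING KEY`):

```python
import sqlite3

# A recursive CTE that computes Fibonacci numbers entirely in SQL.
# The working table starts from the base case and each iteration of the
# recursive step derives the next row from the previous one -- in effect,
# a loop expressed declaratively in SQL.
QUERY = """
WITH RECURSIVE fib(n, a, b) AS (
    SELECT 1, 0, 1                      -- base case: n = 1
    UNION ALL
    SELECT n + 1, b, a + b FROM fib     -- recursive step
    WHERE n < 10                        -- termination condition
)
SELECT a FROM fib;
"""

conn = sqlite3.connect(":memory:")
rows = [a for (a,) in conn.execute(QUERY)]
print(rows)  # -> [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
conn.close()
```

Iteration here is bounded by the `WHERE n < 10` predicate; without such a termination condition, the recursion would run forever — which is exactly the fixpoint behavior the episode's "A Fix for the Fixation on Fixpoints" link discusses.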
11. DuckDB in Research S2 Coming Soon!
02:06 || Season 7, Ep. 11
Hey folks! The DuckDB in Research series is back for S2!
In this season we chat with:
- Torsten Grust: Recursive CTEs
- Abigale Kim: Anarchy in the Database
- Mihail Stoian: Parachute: Single-Pass Bi-Directional Information Passing
- Paul Groß: Adaptive Factorization Using Linear-Chained Hash Tables
Whether you're a researcher, engineer, or just curious about the intersection of databases and innovation, we are sure you will love this series.
26. Rohan Padhye & Ao Li | Fray: An Efficient General-Purpose Concurrency Testing Platform for the JVM | #66
58:45 || Season 6, Ep. 26
In this episode of Disseminate: The Computer Science Research Podcast, guest host Bogdan Stoica sits down with Ao Li and Rohan Padhye (Carnegie Mellon University) to discuss their OOPSLA 2025 paper "Fray: An Efficient General-Purpose Concurrency Testing Platform for the JVM".
We dive into:
- Why concurrency bugs remain so hard to catch -- even in "well-tested" Java projects.
- The design of Fray, a new concurrency testing platform that outperforms prior tools like JPF and rr.
- Real-world bugs discovered in Apache Kafka, Lucene, and Google Guava.
- The gap between academic research and industrial practice, and how Fray bridges it.
- What's next for concurrency testing: debugging tools, distributed systems, and beyond.
If you're a Java developer, systems researcher, or just curious about how to make software more reliable, this conversation is packed with insights on the future of software testing.
Links & Resources:
- The Fray paper (OOPSLA 2025)
- Fray on GitHub
- Ao Li's research
- Rohan Padhye's research