Disseminate

George Theodorakis | Scabbard: Single-Node Fault-Tolerant Stream Processing | #12

Season 2, Ep. 2
Summary (VLDB abstract):

Single-node multi-core stream processing engines (SPEs) can process hundreds of millions of tuples per second. Yet making them fault-tolerant with exactly-once semantics while retaining this performance is an open challenge: due to the limited I/O bandwidth of a single node, it becomes infeasible to persist all stream data and operator state during execution. Instead, single-node SPEs rely on upstream distributed systems, such as Apache Kafka, to recover stream data after failure, necessitating complex cluster-based deployments. This lack of built-in fault-tolerance features has hindered the adoption of single-node SPEs. We describe Scabbard, the first single-node SPE that supports exactly-once fault-tolerance semantics despite limited local I/O bandwidth. Scabbard achieves this by integrating persistence operations with the query workload. Within the operator graph, Scabbard determines when to persist streams based on the selectivity of operators: by persisting streams after operators that discard data, it can substantially reduce the required I/O bandwidth. As part of the operator graph, Scabbard supports parallel persistence operations and uses markers to decide when to discard persisted data. The persisted data volume is further reduced using workload-specific compression: Scabbard monitors stream statistics and dynamically generates computationally efficient compression operators. Our experiments show that Scabbard can execute stream queries that process over 200 million tuples per second while recovering from failures with sub-second latencies.
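To make the selectivity idea concrete, here is a minimal sketch (not Scabbard's actual code, and all function names are hypothetical) of how an engine might pick the point in a linear operator pipeline after which to persist the stream: persisting after highly selective operators, which discard most of their input, minimises the volume that must be written to disk.

```python
def persisted_volume(input_rate, selectivities, persist_after):
    """Tuples/sec that must be persisted if the stream is logged
    after operator index `persist_after` (-1 = persist the raw input)."""
    rate = input_rate
    for s in selectivities[: persist_after + 1]:
        rate *= s  # each operator passes on a fraction `s` of its input
    return rate

def best_persistence_point(input_rate, selectivities):
    """Pick the operator after which persisting minimises I/O volume."""
    candidates = range(-1, len(selectivities))
    return min(candidates,
               key=lambda i: persisted_volume(input_rate, selectivities, i))

# Example pipeline: a filter keeping 10% of tuples, an aggregation
# halving the rate, then a join that doubles its output. The cheapest
# place to persist is after the second operator (index 1).
sels = [0.1, 0.5, 2.0]
print(best_persistence_point(1_000_000, sels))
```

The real system works on an operator graph rather than a chain, persists in parallel, and also factors in dynamically generated compression, but the cost model above captures the core intuition from the abstract.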


Questions:
  • Can you start off by explaining what stream processing is and its common use cases?
  • How did you end up researching in this area? 
  • What is Scabbard? 
  • Can you explain the differences between single-node and distributed SPEs? 
  • What are the advantages of single-node SPEs? 
  • What are the pitfalls that have limited the adoption of single-node SPEs?
  • What were your design goals when developing Scabbard?
  • What is the key idea underpinning Scabbard?
  • In the paper you state there are three main contributions in Scabbard. Can you talk us through each one?
  • How did you implement Scabbard? Can you give an overview of the architecture?
  • What was your approach to evaluating Scabbard? What were the questions you were trying to answer?
  • What did you compare Scabbard against? What was the experimental setup?
  • What were the key results?
  • Are there any situations when Scabbard’s performance is sub-optimal? What are the limitations? 
  • Is Scabbard publicly available?  
  • As a software developer how do I interact with Scabbard?
  • What are the most interesting and perhaps unexpected lessons that you have learned while working on Scabbard?
  • Progress in research is non-linear, from the conception of the idea for Scabbard to the publication, were there things you tried that failed? 
  • What do you have planned for future research with Scabbard?
  • Can you tell the listeners about your other research?  
  • How do you approach idea generation and selecting projects? 
  • What do you think is the biggest challenge in your research area now? 
  • What’s the one key thing you want listeners to take away from your research?


Links:
