
Disseminate: The Computer Science Research Podcast

George Theodorakis | Scabbard: Single-Node Fault-Tolerant Stream Processing | #12

Season 2, Ep. 2
Summary (VLDB abstract):

Single-node multi-core stream processing engines (SPEs) can process hundreds of millions of tuples per second. Yet making them fault-tolerant with exactly-once semantics while retaining this performance is an open challenge: due to the limited I/O bandwidth of a single node, it becomes infeasible to persist all stream data and operator state during execution. Instead, single-node SPEs rely on upstream distributed systems, such as Apache Kafka, to recover stream data after failure, necessitating complex cluster-based deployments. This lack of built-in fault-tolerance features has hindered the adoption of single-node SPEs. We describe Scabbard, the first single-node SPE that supports exactly-once fault-tolerance semantics despite limited local I/O bandwidth. Scabbard achieves this by integrating persistence operations with the query workload. Within the operator graph, Scabbard determines when to persist streams based on the selectivity of operators: by persisting streams after operators that discard data, it can substantially reduce the required I/O bandwidth. As part of the operator graph, Scabbard supports parallel persistence operations and uses markers to decide when to discard persisted data. The persisted data volume is further reduced using workload-specific compression: Scabbard monitors stream statistics and dynamically generates computationally efficient compression operators. Our experiments show that Scabbard can execute stream queries that process over 200 million tuples per second while recovering from failures with sub-second latencies.
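The abstract's key idea, persisting streams after operators that discard data, can be illustrated with a small sketch. This is not Scabbard's actual implementation (Scabbard is a real SPE with parallel persistence, markers, and adaptive compression); the pipeline, rates, and selectivities below are invented for illustration. Given per-operator selectivities, the sketch picks the point in a linear pipeline where persisting the stream costs the least I/O bandwidth.

```python
# Illustrative toy only, not Scabbard's algorithm: pick the cheapest point
# to persist a stream in a linear operator pipeline, based on selectivity.

def best_persistence_point(input_rate, selectivities):
    """Return (index, rate) where persisting minimizes I/O.

    input_rate: tuples/second entering the pipeline.
    selectivities: fraction of its input each operator emits (e.g. a
    filter that keeps 5% of tuples has selectivity 0.05).
    Index -1 means persisting the raw input stream is cheapest.
    """
    best_idx, best_rate = -1, input_rate
    rate = input_rate
    for i, sel in enumerate(selectivities):
        rate *= sel  # data volume flowing out of operator i
        if rate < best_rate:
            best_idx, best_rate = i, rate
    return best_idx, best_rate

# Hypothetical pipeline: filter (keeps 5%) -> map (1.0) -> aggregate (10%)
idx, rate = best_persistence_point(200_000_000, [0.05, 1.0, 0.1])
# Persisting after the aggregate (index 2) needs the least bandwidth.
```

Persisting after the highly selective operators means only ~1M tuples/s must hit disk instead of the raw 200M tuples/s input, which is the intuition behind how Scabbard fits within a single node's I/O budget.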

  • Can you start off by explaining what stream processing is and its common use cases?  
  • How did you end up researching in this area? 
  • What is Scabbard? 
  • Can you explain the differences between single-node and distributed SPEs? 
  • What are the advantages of single-node SPEs? 
  • What are the pitfalls that have limited the adoption of single-node SPEs?
  • What were your design goals when developing Scabbard?
  • What is the key idea underpinning Scabbard?
  • In the paper you state that Scabbard makes three main contributions; can you talk us through each one?
  • How did you implement Scabbard? Can you give an overview of the architecture?
  • What was your approach to evaluating Scabbard? What were the questions you were trying to answer?
  • What did you compare Scabbard against? What was the experimental set up?
  • What were the key results?
  • Are there any situations when Scabbard’s performance is sub-optimal? What are the limitations? 
  • Is Scabbard publicly available?  
  • As a software developer how do I interact with Scabbard?
  • What are the most interesting and perhaps unexpected lessons that you have learned while working on Scabbard?
  • Progress in research is non-linear, from the conception of the idea for Scabbard to the publication, were there things you tried that failed? 
  • What do you have planned for future research with Scabbard?
  • Can you tell the listeners about your other research?  
  • How do you approach idea generation and selecting projects? 
  • What do you think is the biggest challenge in your research area now? 
  • What’s the one key thing you want listeners to take away from your research?


More episodes


  • 10. Mohamed Alzayat | Groundhog: Efficient Request Isolation in FaaS | #40

    Summary: Security is a core responsibility for Function-as-a-Service (FaaS) providers. The prevailing approach has each function execute in its own container to isolate concurrent executions of different functions. However, successive invocations of the same function commonly reuse the runtime state of a previous invocation in order to avoid container cold-start delays when invoking a function. Although efficient, this container reuse has security implications for functions that are invoked on behalf of differently privileged users or administrative domains: bugs in a function’s implementation, third-party library, or the language runtime may leak private data from one invocation of the function to subsequent invocations of the same function. In this episode, Mohamed Alzayat tells us about Groundhog, which isolates sequential invocations of a function by efficiently reverting to a clean state, free from any private data, after each invocation. Tune in to learn more about how Groundhog works and how it improves security in FaaS! Links: Mohamed's homepage, Groundhog EuroSys'23 paper, Groundhog codebase
  • 9. Cuong Nguyen | Detock: High Performance Multi-region Transactions at Scale | #39

    Summary: In this episode, Cuong Nguyen tells us about Detock, a geographically replicated database system. Tune in to learn about its specialised concurrency control and deadlock resolution protocols, which enable processing strictly-serializable multi-region transactions with near-zero performance degradation at extremely high conflict rates and improve latency by up to a factor of 5. Links: SIGMOD Paper, Detock Github Repo, Cuong's Homepage
  • 8. Bogdan Stoica | WAFFLE: Exposing Memory Ordering Bugs Efficiently with Active Delay Injection | #38

    Summary: Concurrency bugs are difficult to detect, reproduce, and diagnose, as they manifest under rare timing conditions. Recently, active delay injection has proven efficient for exposing one such type of bug — thread-safety violations — with low overhead, high coverage, and minimal code analysis. However, how to efficiently apply active delay injection to broader classes of concurrency bugs is still an open question. In this episode, Bogdan Stoica tells us how he answered this question by focusing on MemOrder bugs — a type of concurrency bug caused by incorrect timing between a memory access to a particular object and the object’s initialization or deallocation. Tune in to learn about Waffle, a delay injection tool that tailors key design points to better match the nature of MemOrder bugs. Links: EuroSys'23 Paper, Bogdan's Homepage, Waffle's GitHub Repo
  • 7. Roger Waleffe | MariusGNN: Resource-Efficient Out-of-Core Training of Graph Neural Networks | #37

    Summary: In this episode, Roger Waleffe talks about Graph Neural Networks (GNNs) for large-scale graphs. Specifically, he reveals all about MariusGNN, the first system that utilises the entire storage hierarchy (including disk) for GNN training. Tune in to find out how MariusGNN works and just how fast it goes (and how much more cost-efficient it is!). Links: Marius Project, Roger's Homepage, Roger's Twitter, EuroSys'23 Paper, Support the podcast through Buy Me a Coffee
  • 6. Madelon Hulsebos | GitTables: A Large-Scale Corpus of Relational Tables | #36

    Summary: The success of deep learning has sparked interest in improving relational table tasks, like data preparation and search, with table representation models trained on large table corpora. Existing table corpora primarily contain tables extracted from HTML pages, limiting the capability to represent offline database tables. To train and evaluate high-capacity models for applications beyond the Web, we need resources with tables that resemble relational database tables. In this episode, Madelon Hulsebos tells us all about such a resource! Tune in to learn more about GitTables! Links: Madelon's website, GitTables homepage, SIGMOD'23 paper, Buy Me A Coffee
  • 5. Tarikul Islam Papon | ACEing the Bufferpool Management Paradigm for Modern Storage Devices | #35

    Summary: Compared to hard disk drives (HDDs), solid-state drives (SSDs) have two fundamentally different properties: (i) read/write asymmetry (writes are slower than reads) and (ii) access concurrency (multiple I/Os can be executed in parallel to saturate the device bandwidth). However, database operators are often designed without considering storage asymmetry and concurrency, resulting in device underutilization. In this episode, Tarikul Islam Papon tells us about his work on a new Asymmetry & Concurrency aware bufferpool management scheme (ACE) that batches writes based on device concurrency and performs them in parallel to amortize the asymmetric write cost. Tune in to learn more! Links: ICDE'23 Paper, Papon's Homepage, Papon's LinkedIn, Buy me a coffee
  • 4. Jian Zhang | VIPER: A Fast Snapshot Isolation Checker | #34

    Summary: Snapshot isolation is supported by most commercial databases and is widely used by applications. However, checking whether a database ensures snapshot isolation for a given set of transactions is either slow or gives up soundness. In this episode, Jian Zhang tells us about VIPER, an SI checker that is sound, complete, and fast. Tune in to learn more! Links: Paper, GitHub repo, Jian's homepage
  • 3. Ahmed Sayed | REFL: Resource Efficient Federated Learning | #33

    Summary: Federated Learning (FL) enables distributed training by learners using local data, thereby enhancing privacy and reducing communication. However, it presents numerous challenges relating to the heterogeneity of the data distribution, device capabilities, and participant availability as deployments scale, which can impact both model convergence and bias. Existing FL schemes use random participant selection to improve fairness; however, this can result in inefficient use of resources and lower-quality training. In this episode, Ahmed Sayed talks about how he and his colleagues address the question of resource efficiency in FL. He talks about the benefits of intelligent participant selection and the incorporation of updates from straggling participants. Tune in to learn more! Links: EuroSys'23 Paper, Ahmed's LinkedIn, Ahmed's Homepage, Ahmed's Twitter, REFL Github
  • 2. Subhadeep Sarkar | Log-structured Merge Trees | #32

    Summary: Log-structured merge (LSM) trees have emerged as one of the most commonly used storage-based data structures in modern data systems, as they offer high throughput for writes and good utilization of storage space. In this episode, Subhadeep Sarkar presents the fundamental principles of the LSM paradigm. He tells us about recent research on improving write performance and the various optimization techniques and hybrid designs adopted by LSM engines to accelerate reads. Tune in to find out more! Links: Personal website, ICDE'23 tutorial, LinkedIn