
Disseminate

Hani Al-Sayeh | Juggler: Autonomous Cost Optimization and Performance Prediction of Big Data Applications | #6

Season 1, Ep. 6
Summary:

Distributed in-memory processing frameworks accelerate iterative workloads by caching suitable datasets in memory rather than recomputing them in each iteration. Selecting the right datasets to cache, and allocating a cluster configuration suited to caching them, are both crucial to achieving optimal performance. In practice, both are tedious, time-consuming tasks that end users often neglect, as they are typically unaware of workload semantics, the sizes of intermediate data, and cluster specifications. To address these problems, Hani and his colleagues developed Juggler, an end-to-end framework that autonomously selects appropriate datasets for caching and recommends a correspondingly suitable cluster configuration to end users, with the aim of achieving optimal execution time and cost.
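Juggler targets distributed frameworks such as Apache Spark, but the core trade-off it automates can be sketched in plain Python (this is an illustrative toy, not Juggler's implementation or Spark's API): an expensive intermediate dataset is either recomputed on every iteration or computed once and held in memory.

```python
# Toy sketch of the cache-vs-recompute trade-off in an iterative workload.
# "expensive_transform" stands in for a costly job stage whose output may
# be worth caching; the counter tracks how often it actually runs.

def expensive_transform(data, counter):
    counter["computations"] += 1          # record each (re)computation
    return [x * x for x in data]          # stand-in for a costly stage

def run_iterations(data, iterations, cache_intermediate, counter):
    cached = None
    total = 0
    for _ in range(iterations):
        if cache_intermediate:
            if cached is None:            # compute once, keep in memory
                cached = expensive_transform(data, counter)
            intermediate = cached
        else:                             # recompute every iteration
            intermediate = expensive_transform(data, counter)
        total += sum(intermediate)
    return total

counter = {"computations": 0}
run_iterations(range(1000), 10, cache_intermediate=False, counter=counter)
print(counter["computations"])            # prints 10: recomputed each time

counter = {"computations": 0}
run_iterations(range(1000), 10, cache_intermediate=True, counter=counter)
print(counter["computations"])            # prints 1: computed once, cached
```

Caching saves the nine redundant computations at the cost of holding the intermediate result in memory, which is exactly why choosing *which* datasets to cache, and provisioning a cluster with enough memory to hold them, matters.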


Questions:

1:02 - Can you introduce your work and describe the current workflow for developing big data applications in the cloud?

2:49 - What is the challenge (maybe hidden challenge) facing application developers in this workflow? What harms performance?

5:36 - How does Juggler solve this problem?

11:55 - As an end user, how do I interact with Juggler?

14:07 - Can you talk us through your evaluation of Juggler? What were the key insights?

16:30 - What other tools are similar to Juggler? How do they compare?

18:17 - What are the limitations of Juggler?

21:57 - Who will find Juggler the most useful? Who is it for?

24:05 - Is Juggler publicly available?

24:23 - What is the most interesting (maybe unexpected) lesson you learned while working on this topic?

27:50 - What is next for Juggler? What do you have planned for future research?

28:49 - What attracted you to this research area? 

29:45 - What do you think is the biggest challenge now in this area?


