Disseminate

  • 15. Adaptive Factorization in DuckDB with Paul Groß

    51:15||Season 7, Ep. 15
    In this episode of the DuckDB in Research series, host Jack Waudby sits down with Paul Groß, PhD student at CWI Amsterdam, to explore his work on adaptive factorization and worst-case optimal joins - techniques that push the boundaries of analytical query performance. Paul shares insights from his CIDR'25 paper “Adaptive Factorization Using Linear-Chained Hash Tables”, revealing how decades of database theory meet modern, practical system design in DuckDB. From hash table internals to adaptive query planning, this episode uncovers how research innovations are becoming part of real-world systems. Whether you’re a database researcher, engineer, or curious student, you’ll come away with a deeper understanding of query optimization and the realities of systems engineering.
    Links:
    - Adaptive Factorization Using Linear-Chained Hash Tables
  • 14. Parachute: Rethinking Query Execution and Bidirectional Information Flow in DuckDB - with Mihail Stoian

    36:34||Season 7, Ep. 14
    In this episode of the DuckDB in Research series, host Jack Waudby sits down with Mihail Stoian, PhD student at the Data Systems Lab, University of Technology Nuremberg, to unpack the cutting-edge ideas behind Parachute, a new approach to robust query processing and bidirectional information passing in modern analytical databases. We explore how Parachute bridges theory and practice, combining concepts from instance-optimal algorithms and semi-join filtering to boost performance in DuckDB, the in-process analytical SQL engine that’s reshaping how research meets real-world data systems.
    Mihail discusses:
    - How Parachute extends semi-join filtering for two-way information flow
    - The challenges of implementing research ideas inside DuckDB
    - Practical performance gains on TPC-H and CEB workloads
    - The future of adaptive query processing and research-driven system design
    Whether you're a database researcher, systems engineer, or curious practitioner, this deep dive reveals how academic innovation continues to shape modern data infrastructure.
    Links:
    - Parachute: Single-Pass Bi-Directional Information Passing (VLDB 2025 paper)
    - Mihail's homepage
    - Parachute's GitHub repo
  • 13. Anarchy in the Database: Abigale Kim on DuckDB and DBMS Extensibility

    46:24||Season 7, Ep. 13
    In this episode of the DuckDB in Research series, host Jack Waudby talks with Abigale Kim, PhD student at the University of Wisconsin–Madison and author of the VLDB 2025 paper “Anarchy in the Database: A Survey and Evaluation of DBMS Extensibility”. They explore how database extensibility is reshaping modern data systems — and why DuckDB is emerging as the gold standard for safe, flexible, and high-performance extensions. Abigale shares the inside story of her research, the surprises uncovered when testing Postgres and DuckDB extensions, and what’s next for extensibility and composable database design. This episode is perfect for researchers, practitioners, and students interested in databases, systems design, and the interplay between academia and industry innovation.
    Highlights:
    - What “extensibility” really means in a DBMS
    - How DuckDB compares to Postgres, MySQL, and Redis
    - The rise of GPU-accelerated DuckDB extensions
    - Why bridging research and engineering matters for the future of databases
    Links:
    - Anarchy in the Database: A Survey and Evaluation of Database Management System Extensibility (VLDB 2025)
    - Rethinking Analytical Processing in the GPU Era
    You can find Abigale at:
    - X
    - Bluesky
    - Personal site
  • 12. Recursive CTEs, Trampolines, and Teaching Databases with DuckDB - with Prof. Torsten Grust

    51:05||Season 7, Ep. 12
    In this episode of the DuckDB in Research series, host Dr Jack Waudby talks with Professor Torsten Grust from the University of Tübingen. Torsten is one of the pioneers behind DuckDB’s implementation of recursive CTEs.
    In the episode they unpack:
    - The power of recursive CTEs and how they turn SQL into a full-fledged programming language.
    - The story behind adding recursion to DuckDB, including the USING KEY feature and the trampoline and TTL extensions emerging from Torsten’s lab.
    - How these ideas are transforming research, teaching, and even DuckDB’s internal architecture.
    - Why DuckDB makes databases exciting again — from classroom to cutting-edge systems research.
    If you’re into data systems, query processing, or bridging research and practice, this episode is for you.
    Links:
    - USING KEY in Recursive CTEs
    - How DuckDB is USING KEY to Unlock Recursive Query Performance
    - Trampoline-Style Queries for SQL
    - U Tübingen Advent of Code
    - A Fix for the Fixation on Fixpoints
    - One WITH RECURSIVE is Worth Many GOTOs
    - Torsten's homepage
    - Torsten's X
  • 11. DuckDB in Research S2 Coming Soon!

    02:06||Season 7, Ep. 11
    Hey folks! The DuckDB in Research series is back for S2! In this season we chat with:
    - Torsten Grust: Recursive CTEs
    - Abigale Kim: Anarchy in the Database
    - Mihail Stoian: Parachute: Single-Pass Bi-Directional Information Passing
    - Paul Groß: Adaptive Factorization Using Linear-Chained Hash Tables
    Whether you're a researcher, engineer, or just curious about the intersection of databases and innovation, we are sure you will love this series.
  • 26. Rohan Padhye & Ao Li | Fray: An Efficient General-Purpose Concurrency Testing Platform for the JVM | #66

    58:45||Season 6, Ep. 26
    In this episode of Disseminate: The Computer Science Research Podcast, guest host Bogdan Stoica sits down with Ao Li and Rohan Padhye (Carnegie Mellon University) to discuss their OOPSLA 2025 paper "Fray: An Efficient General-Purpose Concurrency Testing Platform for the JVM".
    We dive into:
    - Why concurrency bugs remain so hard to catch -- even in "well-tested" Java projects.
    - The design of Fray, a new concurrency testing platform that outperforms prior tools like JPF and rr.
    - Real-world bugs discovered in Apache Kafka, Lucene, and Google Guava.
    - The gap between academic research and industrial practice, and how Fray bridges it.
    - What’s next for concurrency testing: debugging tools, distributed systems, and beyond.
    If you’re a Java developer, systems researcher, or just curious about how to make software more reliable, this conversation is packed with insights on the future of software testing.
    Links & Resources:
    - The Fray paper (OOPSLA 2025)
    - Fray on GitHub
    - Ao Li’s research
    - Rohan Padhye’s research
    Don’t forget to like, subscribe, and hit the 🔔 to stay updated on the latest episodes about cutting-edge computer science research.
  • 25. Shrey Tiwari | It's About Time: A Study of Date and Time Bugs in Python Software | #65

    01:05:29||Season 6, Ep. 25
    In this episode, Bogdan Stoica, Postdoctoral Research Associate in the SysNet group at the University of Illinois Urbana-Champaign (UIUC), steps in to guest host. Bogdan sits down with Shrey Tiwari, a PhD student in the Software and Societal Systems Department at Carnegie Mellon University and a member of the PASTA Lab, advised by Prof. Rohan Padhye. Together, they dive into Shrey’s award-winning research on date and time bugs in open-source Python software, exploring why these issues are so deceptively tricky and how they continue to affect the systems we rely on every day.
    The conversation traces Shrey’s journey from industry to research, including formative experiences at Citrix and Microsoft Research, and how those shaped his passion for software reliability. Shrey and Bogdan discuss the surprising complexity of date and time handling, the methodology behind Shrey’s empirical study, and the practical lessons developers can take away to build more robust systems. Along the way, they highlight broader questions about testing, bug detection, and the future role of AI in ensuring software correctness. This episode is a must-listen for anyone interested in debugging, reliability, and the hidden challenges that underpin modern software.
    Links:
    - It’s About Time: An Empirical Study of Date and Time Bugs in Open-Source Python Software 🏆 ACM SIGSOFT Distinguished Paper Award
    - Shrey's homepage
  • 24. Lessons Learned from Five Years of Artifact Evaluations at EuroSys | #64

    43:48||Season 6, Ep. 24
    In this episode we are joined by Thaleia Doudali, Miguel Matos, and Anjo Vahldiek-Oberwagner to delve into five years of experience managing artifact evaluation at the EuroSys conference. They explain the goals and mechanics of artifact evaluation, a voluntary process that encourages reproducibility and reusability in computer systems research by assessing the supporting code, data, and documentation of accepted papers. The conversation outlines the three-tiered badge system, the multi-phase review process, and the importance of open-source practices. The guests present data showing increasing participation, sustained artifact availability, and varying levels of community engagement, underscoring the growing relevance of artifacts in validating and extending research.
    The discussion also highlights recurring challenges such as tight timelines between paper acceptance and camera-ready deadlines, disparities in expectations between main program and artifact committees, difficulties with specialized hardware requirements, and lack of institutional continuity among evaluators. To address these, the guests propose early artifact preparation, stronger integration across committees, formalization of evaluation guidelines, and possibly making artifact submission mandatory. They advocate for broader standardization across CS subfields and suggest introducing a “Test of Time” award for artifacts. Looking to the future, they envision a more scalable, consistent, and impactful artifact evaluation process—but caution that continued growth in paper volume will demand innovation to maintain quality and reviewer sustainability.
    Links:
    - Lessons Learned from Five Years of Artifact Evaluations at EuroSys [DOI]
    - Thaleia's homepage
    - Anjo's homepage
    - Miguel's homepage
  • 23. Dominik Winterer | Validating SMT Solvers for Correctness and Performance via Grammar-based Enumeration | #63

    43:38||Season 6, Ep. 23
    In this episode of the Disseminate podcast, Dominik Winterer discusses his research on SMT (Satisfiability Modulo Theories) solvers and his recent OOPSLA paper titled "Validating SMT Solvers for Correctness and Performance via Grammar-Based Enumeration". Dominik shares his academic journey from the University of Freiburg to ETH Zurich, and now to a lectureship at the University of Manchester. He introduces ET, a tool he developed for exhaustive grammar-based testing of SMT solvers. Unlike traditional fuzzers that use random input generation, ET systematically enumerates small, syntactically valid inputs using context-free grammars to expose bugs more effectively. This approach simplifies bug triage and has revealed over 100 bugs—many of them soundness- and performance-related—with a striking number having already been fixed. Dominik emphasizes the tool’s surprising ability to identify deep bugs using minimal input and to track solver evolution over time, highlighting ET's potential for integration into CI pipelines.
    The conversation then expands into broader reflections on formal methods and the future of software reliability. Dominik advocates for a new discipline—Formal Methods Engineering—to bridge the gap between software engineering and formal verification tools. He stresses the importance of building trustworthy verification tools, since the reliability of software increasingly depends on them. Dominik also discusses adapting ET to other domains, such as JavaScript engines, and suggests that grammar-based enumeration can be applied widely to any system with a context-free grammar. Addressing the rise of AI, he envisions validation portfolios that integrate formal methods into LLM-based tooling, offering certified assessments of model outputs. He closes with a call for the community to embrace pragmatic, systematic, and scalable approaches to formal methods to ensure these tools can live up to their promises in real-world development settings.
    Links:
    - Dominik's homepage
    - Validating SMT Solvers for Correctness and Performance via Grammar-Based Enumeration