Disseminate

Kevin Gaffney | SQLite: Past, Present, and Future | #11

Season 2, Ep. 1
Summary:

In this episode, Kevin Gaffney tells us about SQLite, the most widely deployed database engine in existence. SQLite is found in nearly every smartphone, computer, web browser, television, and automobile. Several factors are likely responsible for its ubiquity, including its in-process design, standalone codebase, extensive test suite, and cross-platform file format. While it supports complex analytical queries, SQLite is primarily designed for fast online transaction processing (OLTP), employing row-oriented execution and a B-tree storage format. However, fueled by the rise of edge computing and data science, there is a growing need for efficient in-process online analytical processing (OLAP). DuckDB, a database engine nicknamed “the SQLite for analytics”, has recently emerged to meet this demand. While DuckDB has shown strong performance on OLAP benchmarks, it is unclear how SQLite compares. Listen to the podcast to find out more about Kevin's work on identifying key bottlenecks in OLAP workloads and the optimizations he has helped develop.
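To make "in-process" concrete, here is a minimal sketch of embedded use, assuming Python's standard-library sqlite3 module (the file name and schema are illustrative, not from the episode). The engine runs inside the application process itself; there is no separate database server to install, configure, or connect to.

    # SQLite runs in-process: importing the module is all the "setup" there is.
    import sqlite3

    # Opening a connection creates the database file if it does not exist;
    # ":memory:" instead of a file name keeps the whole database in RAM.
    conn = sqlite3.connect("example.db")
    conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")

    # Row-oriented, OLTP-style access: insert a row, commit, read it back.
    conn.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))
    conn.commit()
    for row in conn.execute("SELECT id, name FROM users"):
        print(row)  # e.g. (1, 'Ada')

    conn.close()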


Questions:
  • How did you end up researching databases? 
  • Can you describe what SQLite is? 
  • Can you give the listener an overview of SQLite’s architecture? 
  • How does SQLite provide ACID guarantees? 
  • How have hardware and workloads changed over SQLite’s lifetime? 
  • What challenges do these changes pose for SQLite?
  • In your paper, you subject SQLite to an extensive performance evaluation. What questions were you trying to answer? 
  • What was the experimental setup? Which benchmarks did you use?
  • How realistic are these workloads? How closely do they map to user studies? 
  • What were the key results in your OLTP experiments?
  • You mentioned that delete performance was poor in the user study. Did you observe why in the OLTP experiment?
  • Can you talk us through your OLAP experiment?
  • What were the key analytical data processing bottlenecks you found in SQLite?
  • What were your optimizations? How did they perform? 
  • What are the reasons for SQLite using dynamic programming?
  • Are your optimizations available in SQLite today? 
  • What were the findings in your blob I/O experiment? 
  • Progress in research is non-linear. From the conception of the idea for your paper to its publication, were there things you tried that failed? 
  • What do you have planned for future research? 
  • How do you think SQLite will evolve over the coming years? 
  • Can you tell the listeners about your other research?
  • What do you think is the biggest challenge in your research area now? 
  • What’s the one key thing you want listeners to take away from your research?


Links:
SQLite: Past, Present, and Future [VLDB'22]

More episodes

  • 22. Haralampos Gavriilidis | Fast and Scalable Data Transfer across Data Systems | #62

    56:46 | Season 6, Ep. 22
    In this episode of Disseminate, we welcome Harry Gavriilidis back to the podcast to explore his latest research on fast and scalable data transfer across systems, soon to be presented at SIGMOD 2025. Building on his work with XDB, Harry introduces XDBC, a novel data transfer framework designed to balance performance and generalizability. We dive into the challenges of moving data across heterogeneous environments, ranging from cloud systems to IoT devices, and critique the limitations of current generic methods like JDBC and specialized point-to-point connectors. Harry walks us through the architecture of XDBC, which modularizes the data transfer pipeline into configurable stages like reading, serialization, compression, and networking. The episode highlights how this architecture adapts to varying performance constraints and introduces a cost-based optimizer to automate tuning for different environments. We also touch on future directions, including dynamic reconfiguration, fault tolerance, and learning-based optimizations. If you're interested in systems, performance engineering, or database interoperability, this episode is a must-listen.
  • 6. Haralampos Gavriilidis | SheetReader: Efficient spreadsheet parsing

    40:53 | Season 10, Ep. 6
    In this episode of the DuckDB in Research series, Harry Gavriilidis (PhD student at TU Berlin) joins us to discuss SheetReader, a high-performance spreadsheet parser that dramatically outpaces traditional tools in both speed and memory efficiency. By taking advantage of the standardized structure of spreadsheet files and bypassing generic XML parsers, SheetReader delivers fast and lightweight parsing, even on large files. Now available as a DuckDB extension, it enables users to query spreadsheets directly with SQL and integrate them seamlessly into broader analytical workflows. Harry shares insights into the development process, performance benchmarks, and the surprisingly complex world of spreadsheet parsing. He also discusses community feedback, feature requests (like detecting multiple tables or parsing colored rows), and future plans, including tighter integration with DuckDB and support for Arrow. The conversation wraps up with a look at Harry's broader research on composable database systems and data interoperability, highlighting how tools like DuckDB are reshaping modern data analysis.
  • 5. Arjen P. de Vries | faiss: An extension for vector data & search

    46:14 | Season 10, Ep. 5
    In this episode of the DuckDB in Research series, we’re joined by Arjen de Vries, Professor of Data Science at Radboud University. Arjen dives into his team’s development of a DuckDB extension for FAISS, a library originally developed at Facebook for efficient similarity search and vector operations. We explore the growing importance of embeddings and dense retrieval in modern information retrieval systems, and how DuckDB’s zero-copy architecture and tight integration with the Python ecosystem make it a compelling choice for managing large-scale vector data. Arjen shares insights into the technical challenges and architectural decisions behind the extension, comparisons with DuckDB’s native VSS (vector search) solution, and the broader vision of integrating vector search more deeply into relational databases. Along the way, we also touch on DuckDB's extension ecosystem, its potential for future research, and why tools like this are reshaping how we build and query modern AI-enabled systems.
  • 4. David Justen | POLAR: Adaptive and non-invasive join order selection via plans of least resistance

    51:08 | Season 10, Ep. 4
    In this episode, we sit down with David Justen to discuss his work on POLAR: Adaptive and Non-invasive Join Order Selection via Plans of Least Resistance, which was implemented in DuckDB. David shares his journey in the database space, insights into performance optimization, and the challenges of working with modern analytical workloads. We dive into the intricacies of query compilation, vectorized execution, and how DuckDB is shaping the future of in-memory databases. Tune in for a deep dive into database internals, industry trends, and what’s next for high-performance data processing!
    Links: VLDB 2024 Paper, David's Homepage
  • 3. Daniël ten Wolde | DuckPGQ: A graph extension supporting SQL/PGQ

    48:38 | Season 10, Ep. 3
    In this episode, we sit down with Daniël ten Wolde, a PhD researcher at CWI’s Database Architectures Group, to explore DuckPGQ, an extension to DuckDB that brings powerful graph querying capabilities to relational databases. Daniël shares his journey into database research, the motivations behind DuckPGQ, and how it simplifies working with graph data. We also dive into the technical challenges of implementing SQL Property Graph Queries (SQL/PGQ) in DuckDB, discuss performance benchmarks, and explore the future of DuckPGQ in graph analytics and machine learning. Tune in to learn how this cutting-edge extension is bridging the gap between research and industry!
    Links: DuckPGQ homepage, Community extension, Daniël's homepage
  • 2. Till Döhmen | DuckDQ: A Python library for data quality checks in ML pipelines

    58:12 | Season 10, Ep. 2
    In this episode, we kick off our DuckDB in Research series with Till Döhmen, a software engineer at MotherDuck, where he leads AI efforts. Till shares insights into DuckDQ, a Python library designed for efficient data quality validation in machine learning pipelines, leveraging DuckDB’s high-performance querying capabilities. We discuss the challenges of ensuring data integrity in ML workflows, the inefficiencies of existing solutions, and how DuckDQ provides a lightweight, drop-in replacement that seamlessly integrates with scikit-learn. Till also reflects on his research journey, the impact of DuckDB’s optimizations, and the future potential of data quality tooling. Plus, we explore how AI tools like ChatGPT are reshaping research and productivity. Tune in for a deep dive into the intersection of databases, machine learning, and data validation!
    Resources: GitHub, Paper, Slides, Till's Homepage, datasketches extension (released by a DuckDB community member two weeks after we recorded!)
  • 1. Disseminate x DuckDB Coming Soon...

    02:40 | Season 10, Ep. 1
    Hey folks! We have been collaborating with everyone's favourite in-process SQL OLAP database management system, DuckDB, to bring you a new podcast series: the DuckDB in Research series! At Disseminate, our mission is to bridge the gap between research and industry by exploring research that has a real-world impact. DuckDB embodies this synergy: decades of research underpin its design, and now it’s making waves in the research community as a platform for others to build on, and this is what the series will focus on! Join us as we kick off the series with:
    📌 Daniël ten Wolde – DuckPGQ, a graph workload extension for DuckDB supporting SQL/PGQ
    📌 David Justen – POLAR: Adaptive, non-invasive join order selection
    📌 Till Döhmen – DuckDQ: A Python library for data quality checks in ML pipelines
    📌 Arjen de Vries – FAISS extension for vector similarity search in DuckDB
    📌 Harry Gavriilidis – SheetReader: Efficient spreadsheet parsing
    Whether you're a researcher, an engineer, or just curious about the intersection of databases and innovation, we are sure you will love this series. Subscribe now and stay tuned for our first episode! 🚀
  • 10. High Impact in Databases with... Anastasia Ailamaki

    46:17 | Season 7, Ep. 10
    In this High Impact in Databases episode, we talk to Anastasia Ailamaki. Anastasia is a Professor of Computer and Communication Sciences at the École Polytechnique Fédérale de Lausanne (EPFL). Tune in to hear Anastasia's story! The podcast is proudly sponsored by Pometry, the developers behind Raphtory, the open-source temporal graph analytics engine for Python and Rust. You can find Anastasia on: Homepage, Google Scholar, LinkedIn
  • 21. Anastasiia Kozar | Fault Tolerance Placement in the Internet of Things | #61

    49:02 | Season 6, Ep. 21
    In this episode, we chat with Anastasiia Kozar about her research on fault tolerance in resource-constrained environments. As IoT applications leverage sensors, edge devices, and cloud infrastructure, ensuring system reliability at the edge poses unique challenges. Unlike the cloud, edge devices operate without persistent backups or high-availability standards, leading to increased vulnerability to failures. Anastasiia explains how traditional methods fall short, as they fail to align resource allocation with fault-tolerance needs, often resulting in system underperformance. To address this, Anastasiia introduces a novel resource-aware approach that combines operator placement and fault tolerance into a unified process. By optimizing where and how data is backed up, her solution significantly improves system reliability, especially for low-end edge devices with limited resources. The result? Up to a tenfold increase in throughput compared to existing methods. Tune in to learn more!
    Links: Fault Tolerance Placement in the Internet of Things [SIGMOD'24], The NebulaStream Platform: Data and Application Management for the Internet of Things [CIDR'20], nebula.stream