

tl;dr: We observed that Dynamic Programming is the common base of both database query optimization and reinforcement learning. Based on this, we designed a deep reinforcement learning algorithm for database query optimization we call DQ. We show that DQ is highly effective and more generally adaptable than any of the prior approaches in the database literature. We feel this is a particularly good example of AI and Database research coming together: both because of the shared algorithmic kernel, and because of the pragmatic need to resort to effective data-driven heuristics in practice. A preprint of the paper, Learning to Optimize Join Queries With Deep Reinforcement Learning, is on arXiv; a more technical blog post is available as well.

The Database and AI communities have a long, long history of flirtation, cross-pollination, and occasional mutual dissing. The points of overlap go back to joint roots in querying (Information Retrieval and Database Queries), representations of information (Data Models and Knowledge Representation), logic-based programming (Prolog and Datalog, Production Systems and Active Databases) and applied statistics at scale (parallel query processing and machine learning). Since the beginning, the database community has prided itself on fostering both intellectual depth and the kind of applied relevance reflected in the multi-billion-dollar industries it has generated. Since the rise of Web search, the AI community has steadily moved into the realm of practicality as well. We are in another heady cross-pollination phase today, as the fields borrow and steal from each other at will.

On this note, there was excitement last year about a paper by Kraska and colleagues showing that B-tree-like indexes could be “learned” by a neural net. The core of the paper was a re-observation (see Antoshenkov’s Ranked B-Trees or Section 8.4 of the New Jersey Data Reduction Report) that a B-tree is essentially a hierarchy of distributions. From there, the novelty in the Kraska paper was to learn those distributions with a neural net, rather than the traditional algorithms and heuristics for tree construction. Neural nets are a data-driven heuristic for problems that defy algorithmic solutions. For the 1-dimensional indexing case of that paper, the classical algorithms are both asymptotically optimal and practically effective. Which leads to the question: what database systems problems merit exploration with new AI techniques?
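To make the "hierarchy of distributions" framing concrete, here is a toy sketch; this is my illustration, not the architecture from the Kraska et al. paper. The idea: treat the sorted keys' positions as an empirical CDF, fit a model to it, and correct the model's prediction with an error-bounded local search. The linear fit below is a deliberately crude stand-in for a neural net.

```python
import numpy as np

# Toy "learned index": approximate the key -> position mapping (the empirical
# CDF) with a model, then correct locally. Illustration only, not Kraska et al.
keys = np.sort(np.random.uniform(0.0, 1.0, size=100_000))
positions = np.arange(len(keys))

# Any regressor would do; a single linear fit stands in for a neural net here.
slope, intercept = np.polyfit(keys, positions, deg=1)
max_err = int(np.ceil(np.max(np.abs(slope * keys + intercept - positions)))) + 1

def lookup(key):
    # Predict a position, then binary-search within the known error bound:
    # the correction work that lower levels of a B-tree would otherwise do.
    guess = int(np.clip(slope * key + intercept, 0, len(keys) - 1))
    lo, hi = max(0, guess - max_err), min(len(keys), guess + max_err + 1)
    return lo + np.searchsorted(keys[lo:hi], key)

assert keys[lookup(keys[42])] == keys[42]
```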

The answer is straightforward: Query Optimization. SQL query optimization is a classical and practical piece of artificially intelligent magic: it takes a high-level user request and automatically synthesizes custom software for executing it at scale, replacing the need for COBOL (or MapReduce) programmers. The core algorithm for query optimization searches the space of potential programs via Dynamic Programming, a classic algorithmic kernel for optimization. Meanwhile, heuristic but statistically-grounded methods are used to estimate costs and rewards of various choices.
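For readers who haven't stared at an optimizer lately, here is a minimal sketch of that Dynamic Programming kernel, in the spirit of the System R algorithm: build the best plan for every subset of relations out of the best plans for its sub-subsets. The cost function is a made-up stand-in for a real cost model, and unlike System R this sketch enumerates all bushy plans with no left-deep or cross-product pruning.

```python
from itertools import combinations

def best_join_order(relations, join_cost):
    """Selinger-style bottom-up DP over subsets of relations.

    `join_cost(left_plan, right_plan)` is a stand-in for a real cost model
    (cardinality estimates, access paths, interesting orders, ...).
    Returns (cost, plan), where plan is a nested tuple of relation names.
    """
    best = {frozenset([r]): (0, r) for r in relations}  # base case: single tables
    for size in range(2, len(relations) + 1):
        for subset in map(frozenset, combinations(relations, size)):
            for left_size in range(1, size):
                for left in map(frozenset, combinations(subset, left_size)):
                    right = subset - left
                    lc, lp = best[left]
                    rc, rp = best[right]
                    cost = lc + rc + join_cost(lp, rp)
                    if subset not in best or cost < best[subset][0]:
                        best[subset] = (cost, (lp, rp))
    return best[frozenset(relations)]

# Toy usage with a made-up cost function.
cost, plan = best_join_order(["A", "B", "C", "D"],
                             lambda l, r: len(str(l)) * len(str(r)))
print(cost, plan)
```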

Dynamic Programming is also an important algorithm for AI search strategies. In particular, it forms the kernel of Reinforcement Learning (RL), the approach used successfully in recent famous AI game-playing milestones (Go, Atari games, etc.).
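One way to see the shared kernel, in my notation rather than anything from the paper: the optimizer's recurrence over relation sets and Bellman's optimality equation have the same shape, with an optimal value defined in terms of optimal values of subproblems (the optimizer minimizes cost where RL maximizes reward; discounting and stochastic transitions are omitted here).

```latex
% Join-ordering recurrence: optimal cost of joining a relation set S
\mathrm{cost}^*(S) = \min_{\emptyset \subset L \subset S}
  \Big[\, \mathrm{cost}^*(L) + \mathrm{cost}^*(S \setminus L)
        + \mathrm{joincost}(L,\, S \setminus L) \,\Big]

% Bellman optimality (deterministic, undiscounted) for state s, action a,
% with transition function T and reward r
V^*(s) = \max_{a} \Big[\, r(s, a) + V^*\big(T(s, a)\big) \,\Big]
```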

In recent work, we have been exploiting this algorithmic connection, mapping the classical System R optimization algorithm into an RL framework. The big benefit here is in developing asymptotically efficient data-driven heuristics for searching the exponential plan space. By contrast, the System R query optimizer prunes the plan space with ad-hoc heuristics that ignore the state of the database (“left-deep” trees only, postpone cross-products); moreover, after that pruning the remaining space is still exponential. Optimizers for “big” queries need to prune more aggressively, and typically do so via other data-blind heuristics. (Neumann and Radke surveyed and extended this space recently in the context of the Hyper system used inside Tableau; a highly recommended read!) Our novelty is to employ Deep RL as our pruning mechanism, in the context of Bellman’s original Dynamic Programming approach, which reduces the search from an exponential space to a polynomial one. Unlike previous polynomial-time heuristics (e.g. the KBZ approach that inspired me back in graduate school), our “assumptions” are based on data and feedback, the hallmarks of learning algorithms. As a result, we are able to regularly outperform the classical heuristics and show adaptivity across a wide range of scenarios where prior heuristics only worked in their “sweet spots”: again, a hallmark of learning.
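As a rough sketch of the idea (my simplification, not DQ's actual featurization, network, or training loop): a learned Q-function lets the optimizer commit to joins greedily instead of memoizing every relation subset, so only polynomially many (state, action) pairs are ever scored. The `q_model` and `featurize` arguments below are hypothetical placeholders for the learned pieces.

```python
def greedy_join_order(relations, q_model, featurize):
    """State: the current set of partial plans. Action: join two of them.

    A full Selinger-style DP memoizes exponentially many relation subsets;
    a learned Q-function lets us commit greedily, evaluating only a
    polynomial number of candidate joins.
    """
    plans = {(r,): r for r in relations}  # each relation starts as its own plan
    while len(plans) > 1:
        candidates = [(l, r) for l in plans for r in plans if l != r]
        # Choose the join the model predicts to have the lowest long-term cost.
        left, right = min(candidates,
                          key=lambda pair: q_model.predict(featurize(plans, pair)))
        plans[left + right] = (plans.pop(left), plans.pop(right))
    return next(iter(plans.values()))

# Toy usage with stand-ins for the learned pieces.
class DummyModel:
    def predict(self, features):  # a trained network would go here
        return features

plan = greedy_join_order(["A", "B", "C", "D"], DummyModel(),
                         lambda plans, pair: len(pair[0]) + len(pair[1]))
print(plan)
```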

To my mind, this is a particularly well-suited meeting point of Database and AI research: substantive cross-pollination of algorithmic roots, and an application of learning to a problem that requires data-driven, adaptive heuristics.

This work was led by the inimitable Sanjay Krishnan and Zongheng Yang, with assistance from Ken Goldberg, Ion Stoica and myself.

Credit to Mehul Shah for articulating the connection between query optimization and AI when he taught CS186 at Berkeley.


Over at the RISElab blog, we have a post on the latest updates to the Anna KVS. Anna started out as the fastest KVS we are aware of (by orders of magnitude!), with the widest choice of consistency models. It achieved that in part by using beaucoup de resources to replicate entire databases in memory.

Thanks to recent work in the lab, Anna is not only incredibly fast, it’s incredibly efficient and elastic too: an autoscaling, multi-tier, selectively-replicating cloud service. All that adaptivity means that Anna ramps down resource consumption for cold things, and ramps up consumption for hot things. You get all the multicore Anna performance you want, but you don’t pay for what you don’t need.

Just to throw out some numbers, we measured Anna providing 355x the performance of DynamoDB for the dollar. No, I don’t think that is because AWS is earning a 355x margin on DynamoDB! The issue is that Anna is now orders of magnitude more efficient than competing systems, in addition to being orders of magnitude faster.

Chenggang Wu and Vikram Sreekanti were the driving forces behind this most recent work. They have more to say in their blog post, in the research paper on arXiv, and in the code on GitHub.

(PS: the Anna v0 paper at ICDE was honored with a Best of Conference selection for publication in longer form in IEEE TKDE. Congrats to Chenggang Wu on that!)