
Category Archives: academia

This is an image of a t-shirt from the Berkeley Database Research Group circa 1994. According to Paul Aoki, “The turtle was adopted as a mascot by the INGRES group in the ’70s because ‘it’s slow but it gets there.’ It was retained as the POSTGRES mascot for sentimental reasons.”

The ACM began commissioning a series of reminiscence books on Turing Award winners. Thanks to hard work by editor Michael Brodie, the first one is Mike Stonebraker’s book, which just came out. I was asked to write the chapter on Postgres. I was one of the large and distinguished crew of grad students on the Postgres project, so this was fun.

ACM in its wisdom decided that these books would be published in a relatively traditional fashion—i.e. you have to pay for them. The publisher, Morgan & Claypool, has this tip for students and ACM members:

Please note that the Bitly link goes to a landing page where Students, ACM Members, and Institutions who have access to the ACM Digital Library can follow a link and look to see if they can get the book through their ACM membership. Students who are ACM members can purchase for a very low rate…the retail bookstore is more for corporate customers.

(Update 12/1/2019: the bitly link is no longer live. I have replaced it with a link to the ACM Digital Library page for the book. I was able to download the full book for free with my ACM membership; hopefully this is true for others as well.)

I insisted that my article be available for free. So they’ve decided that each book will have a “sample chapter”, and for Mike’s it’s this Postgres chapter. The ACM-formatted version is available free at the link above, but I have a standalone PDF up on arXiv and LaTeX source on GitHub.

If you see any errors or you think I should say more about something you know about, please submit a PR on GitHub! I’ll update the arXiv version periodically.

Image credit: Nick Youngson (http://www.nyphotographic.com/); original image: http://www.picpedia.org/highway-signs/c/calm.html

For folks who care about what’s possible in distributed computing: Peter Alvaro and I wrote an introduction to the CALM Theorem and subsequent work that is now up on arXiv. The CALM Theorem formally characterizes the class of programs that can achieve distributed consistency without the use of coordination.

I spent a good fraction of my academic life in the last decade working on a deeper understanding of how to program the cloud and other large-scale distributed systems. I was enormously lucky to collaborate with and learn from amazing friends over this period in the BOOM project, and see our work picked up and extended by new friends and colleagues.

Our research was motivated by simple questions, chief among them this:

Q: “What is the hardest thing about distributed systems?”
A: “Coordination and consistency.”

Protocols like Two-Phase Commit, Paxos and their myriad offspring are celebrated for being tricky, and as such form the backbone of academic classes on distributed computing. But trickiness is not a hallmark of good software design. In practice, coordination is the source of much of the complexity and inefficiency of distributed systems, and it is avoided when possible by good engineers.

So we moved to a more fundamental question:

Q: When can we correctly avoid coordination, and when are we absolutely required to use it?
A (circa 2010): Unknown.

Surprisingly, this computability question was one that the pioneers of distributed systems never answered, at least not in any sense of algorithms or program semantics. The discussion in the literature was cast in terms of “memory models” or “storage consistency” guarantees so low down the stack as to be irrelevant and unhelpful to most application designers.

In a keynote talk at PODS 2010, I proposed an answer to this open question. I conjectured—based on my team’s experience with streaming queries and declarative networking—that coordination was needed if and only if you had a computational task that could not be expressed with monotonic logic. I called this idea CALM: Consistency as Logical Monotonicity. Not long thereafter a formalization and proof of the CALM Theorem was provided by Ameloot, Neven and Van den Bussche over at Hasselt University in Belgium. Related work ensued across both sides of the Atlantic on additional theoretical results and practical uses of the idea for program analysis.
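
To make the distinction concrete, here’s a toy sketch in Python (my illustration for this post, not from the talk or the formal papers). Transitive closure is the classic monotone computation: new facts only ever add to the answer, so replicas can ingest input in any order and converge on the same result.

    from functools import reduce

    def ingest(reach, edge):
        # Monotone step: add an edge, then grow the closure to a fixpoint.
        reach = reach | {edge}
        while True:
            new = {(a, d) for (a, b) in reach for (c, d) in reach if b == c}
            if new <= reach:
                return reach
            reach |= new

    # Same edges, delivered in different orders: identical final answer.
    a = reduce(ingest, [(1, 2), (2, 3)], set())
    b = reduce(ingest, [(2, 3), (1, 2)], set())
    assert a == b == {(1, 2), (2, 3), (1, 3)}

By contrast, a non-monotone question like “is node 3 unreachable from node 1?” can flip from yes to no as more edges arrive; answering it safely requires knowing that no more input is coming, and that is exactly where coordination sneaks in.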

I sense that this body of work deserves more attention today, when distributed computing is becoming the norm rather than the exception. CALM provides a formal basis for a myriad of conversations over the last 15 years regarding what is possible to get correct with “eventual consistency”, “noSQL”, “commutativity”, “ACID 2.0”, “CRDTs” and other pragmatics. It provides the nuanced answer to screeds about “beating the CAP Theorem”. It also lays the groundwork for what we did later with the Bloom language: provide a programming model where the really hard issues of distributed programming are first-order concerns of the language and its syntax.

To bring these issues to a wider audience, I sat down with the inimitable Peter Alvaro to write up what we hope is an approachable but sufficiently meaty intro to the CALM Theorem, its implications, and the many open questions remaining. It took a while for this to get to the top of our stacks, but the paper is now up on arXiv.

We’re spinning up a new generation of work on cloud programming here at Berkeley’s RISELab that builds on these lessons. Watch this space!

There’s fast and there’s fast. This post is about Anna, a key/value database design from our team at Berkeley that’s got phenomenal speed and buttery smooth scaling, with an unprecedented range of consistency guarantees. Details are in our upcoming ICDE18 paper on Anna.

Conventional wisdom (or at least Jeff Dean wisdom) says that you have to redesign your system every time you scale by 10x. As researchers, we asked the counter-cultural question:

What would it take to build a key-value store that would excel across many orders of magnitude of scale, from a single multicore box to the global cloud?

Turns out this kind of curiosity can lead to a system with pretty interesting practical implications.
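
For a feel of how a design like this can dodge coordination, here’s a toy sketch of the general lattice idea (my sketch, not Anna’s actual code): build state out of lattices, types whose merge function is commutative, associative, and idempotent, so replicas can apply updates in any order, any number of times, and still agree.

    class LWWRegister:
        # Last-writer-wins register, one of the simplest lattices:
        # merge keeps whichever write carries the higher timestamp.
        def __init__(self, ts, val):
            self.ts, self.val = ts, val
        def merge(self, other):
            return self if self.ts >= other.ts else other

    w1, w2 = LWWRegister(1, "x"), LWWRegister(2, "y")
    assert w1.merge(w2).val == w2.merge(w1).val == "y"  # order doesn't matter
    assert w1.merge(w2).merge(w2).val == "y"            # duplicates don't either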

Read More »

As mentioned in my previous post, Peter Alvaro turned in his PhD thesis a month back, and is now in full swing as a professor at UC Santa Cruz. In the midst of that nifty academic accomplishment, he succeeded in taking the last chapter of his thesis from our BOOM project out of the ivory tower and into use at Netflix. Peter’s collaborators at Netflix recently blogged about the use of his ideas alongside their (very impressive) testing infrastructure, famously known as Chaos Monkey, a.k.a. the Netflix Simian Army.  This also generated some press, vaguely inappropriate headline and all.  Their work raised the flag on five potential failure scenarios in a single request interface.  That’s nice.  Even nicer is what would have happened without the ideas from Peter’s research:

Brute force exploration of this space would take 2^100 iterations (roughly 1 with 30 zeros following), whereas our approach was able to explore it in ~200 experiments.

It always feels good when your good ideas carve 28 zeroes of performance off the end of standard practice.
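
For flavor, here’s a drastically simplified sketch of the lineage-driven idea (mine, not Molly’s actual algorithm, which reasons over lineage with a SAT solver; run_system and lineage_of are hypothetical hooks). Instead of enumerating all 2^n fault sets, each new experiment targets something the last successful run provably depended on.

    def find_counterexample(run_system, lineage_of, budget=200):
        frontier = [frozenset()]            # fault sets still worth trying
        tried = set()
        while frontier and budget > 0:
            faults = frontier.pop()
            if faults in tried:
                continue
            tried.add(faults)
            budget -= 1
            outcome = run_system(faults)    # inject these faults and re-run
            if not outcome.correct:
                return faults               # found a real bug
            # The run still succeeded; its lineage names the events it
            # relied on, so only faulting one of those can change things.
            for event in lineage_of(outcome):
                frontier.append(faults | {event})
        return None                         # no counterexample within budget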

I encourage you to read the Netflix blog post, as well as Peter’s paper on the research prototype, Molly.  Or you can watch his RICON 2015 keynote on the subject.

A major source of frustration in distributed programming is that contemporary software tools—think compilers and debuggers—have little to say about the really tricky bugs that distributed systems developers face.  Sure, compilers can find type and memory errors, and debuggers can single-step you through sequential code snippets. But how do they help with distributed systems issues?  In some sense, they don’t help at all with the stuff that matters—things like:

  • Concurrency: Does your code have bugs due to race conditions?  Don’t forget that a distributed system is a parallel system!
  • Consistency: Are there potential consistency errors in your program due to replicated state? Can you get undesirable non-deterministic outcomes based on network delays?  What about the potential for the awful “split-brain” scenario where the state of multiple machines gets irrevocably out of sync?  (A toy illustration follows this list.)
  • Coordination performance: Do you have performance issues due to overly-aggressive coordination or locking? Can you avoid expensive coordination without incurring bugs like the ones above?
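
To make the consistency bullet concrete, here’s a tiny illustration (mine, not from any particular system) of replicas diverging just because the network delivered the same two updates in different orders:

    from functools import reduce

    def set_to_5(x):
        return 5        # a blind overwrite
    def increment(x):
        return x + 1    # a read-modify-write

    def apply_all(updates, initial=0):
        return reduce(lambda state, f: f(state), updates, initial)

    replica_a = apply_all([set_to_5, increment])  # one delivery order -> 6
    replica_b = apply_all([increment, set_to_5])  # the other order    -> 5
    assert replica_a != replica_b  # same updates, different order: divergence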

These questions are especially tricky if you use services or libraries, where you don’t necessarily know how state and communication are managed.  What code can you trust, and what about that code do you need to know to trust it?

Peter Alvaro has been doing groundbreaking work in this space, and recently started lifting the veil on his results.  This is a big deal. Read More »

Bill Marczak, right, in NY Times

Bill Marczak, a PhD student in my group, does interesting research on algebraic programming languages, which I hope to describe in more detail here soon.

But Bill has recently received significant attention for work he did in his spare time—a dramatically successful cyber-espionage effort to expose government misuse of commercial surveillance software in Bahrain, the nation where Bill attended high school. The story picked up major press coverage in venues including Bloomberg and the New York Times, which also ran a more detailed article in their Bits Blog.

I’m always happy to see the press pick up on my students’ work, but this one is special.

Matt Welsh of Google—formerly of Harvard, Berkeley and Cornell—is a deservedly well-read blogger in the computing community.  He’s also somebody I’ve admired since his early days in grad school as a smart, authentic person.

Matt’s been working through his transition from Harvard Professor to Googler in public over the last year or so, and it’s been interesting to watch what he says, and the discussion it provokes.  His latest post was a little more acid than usual though, with respect to the value of academic computer science.  My response got pretty long, and in the end I figured it’d be better to toss it up in my own space.

Matt:

Rather than run down work you don’t like—including maybe your own prior work, as assessed on one of your dark days—think about the academic work over the last 50 years that you admire the hell out of. I know you could name a few heroes. I bet a bunch of your blog’s readers could get together and name a whole lot more. Now imagine the university system hadn’t been around and reasonably well-funded at the time, because it was considered “inefficient when it comes to producing real products that shape the world”.   It’s sad to consider.

Here’s another thing you and your readers should consider: Forget efficiency. At least, forget it on the timescale you measure in your current job. Instead, aspire to do work that is as groundbreaking and important as the best work in the history of the field. And at the same time, inspire generations of brilliant students to do work that is even better—better than your very best. That’s what great universities are for, Matt. Remember? Sure you do. And yes—it’s goddamn audacious. As well it should be.

Read More »

When the folks at ACM SIGMOD asked me to be a guest blogger this month, I figured I should highlight the most community-facing work I’m involved with.  So I wrote up a discussion of MADlib, and the fact that this open-source in-database analytics library is now open to community contributions. (A bunch of us recently wrote a paper on the design and use of MADlib, which made my writing job a bit easier.) I’m optimistic about MADlib closing a gap between algorithm researchers and working data scientists, using familiar SQL as a vector for adoption on both fronts.
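
For a taste of what “familiar SQL as a vector for adoption” means in practice, here’s a hedged sketch of invoking a MADlib routine from Python (the table and column names are made up, and you should check the MADlib docs for the exact function signature in your version):

    import psycopg2  # assumes a PostgreSQL database with MADlib installed

    conn = psycopg2.connect("dbname=analytics")
    cur = conn.cursor()

    # Train a linear regression entirely inside the database: the data
    # never leaves the DBMS, and the interface is just a SQL function call.
    cur.execute("""
        SELECT madlib.linregr_train(
            'houses',        -- source table (hypothetical)
            'houses_model',  -- output table for fitted coefficients
            'price',         -- dependent variable
            'ARRAY[1, sqft, bedrooms]'  -- independent variables
        );
    """)
    conn.commit()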

Read More »

If you follow this blog, you know that my BOOM group has spent a lot of time in the past couple years formalizing eventual consistency (EC) for distributed programs, via the CALM theorem and practical tools for analyzing Bloom programs.

In recent months, my student Peter Bailis and his teammate Shivaram Venkataraman took a different tack on the whole EC analysis problem which they call PBS: Probabilistically Bounded Staleness. The results are interesting, and extremely relevant to current practice.  (See, for example, the very nice blog post by folks at DataStax).

Many people today deal with EC in the specific context of replica consistency, particularly in distributed NoSQL-style Key-Value Stores (KVSs). It is typical to configure these stores with so-called “partial” quorum replication, to get a comfortable mix of low latency with reasonable availability. The term “partial” signifies that you are not guaranteed consistency of writes by these configurations — at best they guarantee a form of eventual consistency of final writes, but readers may well read stale data along the way. Lots of people are deploying these configurations in the field, but there’s little information on how often the approach messes up, and how badly.

Jumping off from earlier theoretical work on probabilistic quorum systems, Peter and Shivaram answered two natural questions about how these systems should perform in current practice:

  1. How many versions ago?  On expectation, if you do a read in a partial-quorum KVS, how many versions behind are you? Peter and Shivaram answer this one definitively, via a closed-form mathematical analysis.  (A back-of-envelope version of this calculation appears after the list.)
  2. How stale on the (wall-)clock?  On expectation, if you do a read in a partial-quorum KVS, how out-of-date will your version be in terms of wall-clock time? Answering this one requires modeling a read/write workload in wall-clock time, as well as system parameters like replica propagation (“anti-entropy”). Peter and Shivaram address this with a Monte Carlo model, and run the model with parameters grounded in real-world performance numbers generously provided by two of our most excellent colleagues: Alex Feinberg at LinkedIn and Coda Hale at Yammer (both of whom also guest-lectured in my Programming the Cloud course last fall).  Peter and Shivaram validated their models in practice using Cassandra, a widely-used KVS.
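
To get a feel for where such numbers come from, here’s a back-of-envelope sketch of the first question (mine; the actual PBS analysis is considerably richer). With N replicas, writes acknowledged by W of them, and reads touching R, a read can return stale data exactly when its R replicas all fall outside the W holding the latest write:

    from math import comb

    def p_stale_read(n, w, r):
        # Probability a read quorum of size r sees none of the w replicas
        # holding the latest write (before anti-entropy catches up):
        #   C(n - w, r) / C(n, r)
        if r > n - w:
            return 0.0  # read and write quorums must overlap: never stale
        return comb(n - w, r) / comb(n, r)

    # A common "partial quorum" configuration: N=3, W=1, R=1
    print(p_stale_read(3, 1, 1))  # ~0.667: two reads in three miss the write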

On the whole, PBS shows that being sloppy about consistency doesn’t bite you often or badly — especially if you’re in a single datacenter and you use SSDs. But things get more complex with magnetic disks, garbage collection delays (grr), and wide-area replication.

Interested in more detail?  You can check out two things:

For a class I’m teaching, I’d like to collect a list of favorite “maxims” or “aphorisms” for computer systems.

I’d be very grateful if you would add your favorites below to the comments, preferably with a link to a source that either introduces or references the maxim.  It’s OK to agree or disagree with the maxim.

I’d enjoy seeing people’s support/critiques for these below as well — may merit more focused posts another day.

Examples:

What else?