

As mentioned in my previous post, Peter Alvaro turned in his PhD thesis a month back, and is now in full swing as a professor at UC Santa Cruz. In the midst of that nifty academic accomplishment, he succeeded in taking the last chapter of his thesis from our BOOM project out of the ivory tower and into use at Netflix. Peter’s collaborators at Netflix recently blogged about the use of his ideas alongside their (very impressive) testing infrastructure, famous for Chaos Monkey and the rest of the Netflix Simian Army. This also generated some press, vaguely inappropriate headline and all. Their work flagged five potential failure scenarios in a single request interface. That’s nice. Even nicer is what would have happened without the ideas from Peter’s research:

Brute force exploration of this space would take 2^100 iterations (roughly 1 with 30 zeros following), whereas our approach was able to explore it in ~200 experiments.

It always feels good when your ideas shave 28 zeros off the cost of standard practice.

I encourage you to read the Netflix blog post, as well as Peter’s paper on the research prototype, Molly.  Or you can watch his RICON 2015 keynote on the subject.


A major source of frustration in distributed programming is that contemporary software tools—think compilers and debuggers—have little to say about the really tricky bugs that distributed systems developers face. Sure, compilers can find type and memory errors, and debuggers can single-step you through sequential code snippets. But how do they help with distributed systems issues? In some sense, they don’t help at all with the stuff that matters—things like:

  • Concurrency: Does your code have bugs due to race conditions?  Don’t forget that a distributed system is a parallel system!
  • Consistency: Are there potential consistency errors in your program due to replicated state? Can you get undesirable non-deterministic outcomes based on network delays? What about the potential for the awful “split-brain” scenario where the state of multiple machines gets irrevocably out of sync? (A toy example of such divergence follows this list.)
  • Coordination performance: Do you have performance issues due to overly-aggressive coordination or locking? Can you avoid expensive coordination without incurring bugs like the ones above?
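To make the consistency worry concrete, here is a deliberately tiny sketch in Ruby (my toy illustration, not code from any real system) of how replicated state diverges when the network reorders two writes:

replica_a = nil
replica_b = nil
writes = ["x=1", "x=2"]   # two updates to the same key

writes.each         { |v| replica_a = v }   # delivery order at replica A
writes.reverse.each { |v| replica_b = v }   # reordered delivery at replica B

puts "A sees #{replica_a}, B sees #{replica_b}"   # => A sees x=2, B sees x=1

Neither replica sees an error; they simply disagree forever. “Last writer wins” only papers over this if the replicas agree on who wrote last, which is exactly what network delay makes ambiguous.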

These questions are especially tricky if you use services or libraries, where you don’t necessarily know how state and communication are managed.  What code can you trust, and what about that code do you need to know to trust it?

Peter Alvaro has been doing groundbreaking work in this space, and recently started lifting the veil on his results. This is a big deal. Read More »


We just finished writing up an overview of our most recent thinking about distributed consistency. The paper is entitled Consistency Without Borders, and it’s going to appear in the ACM SoCC conference next month in Silicon Valley.

It starts with two things we all know:

  1. Strong distributed consistency can be expensive and dangerous. (My favorite exposition: the LADIS ’08 conference writeup by Birman, Chockler and van Renesse. See especially the quotes from James Hamilton and Randy Shoup. And note that recent work like Spanner changes little: throughput of 10s to 100s of updates per second is only useful at the fringes.)
  2. Managing coordination in application logic is fraught with software engineering peril: you have to spec, build, test and maintain special-case, cross-stack distributed reasoning over time. Here be dragons.

The point of the paper is to try to reorient the community to explore the design space in between these extremes. Distributed consistency is one of the biggest CS problems of our day, and the technical community is spending way too much of its energy at these two ends of the design space.

We’ll be curious to hear feedback here, and at the conference.

Matt Welsh of Google—formerly of Harvard, Berkeley and Cornell—is a deservedly well-read blogger in the computing community. He’s also somebody I’ve admired as a smart, authentic person since his early days in grad school.

Matt’s been working through his transition from Harvard Professor to Googler in public over the last year or so, and it’s been interesting to watch what he says, and the discussion it provokes.  His latest post was a little more acid than usual though, with respect to the value of academic computer science.  My response got pretty long, and in the end I figured it’d be better to toss it up in my own space.

Matt:

Rather than run down work you don’t like—including maybe your own prior work, as assessed on one of your dark days—think about the academic work over the last 50 years that you admire the hell out of. I know you could name a few heroes. I bet a bunch of your blog’s readers could get together and name a whole lot more. Now imagine the university system hadn’t been around and reasonably well-funded at the time, because it was considered “inefficient when it comes to producing real products that shape the world”.   It’s sad to consider.

Here’s another thing you and your readers should consider: Forget efficiency. At least, forget it on the timescale you measure in your current job. Instead, aspire to do work that is as groundbreaking and important as the best work in the history of the field. And at the same time, inspire generations of brilliant students to do work that is even better—better than your very best. That’s what great universities are for, Matt. Remember? Sure you do. And yes—it’s goddamn audacious. As well it should be.

Read More »

When the folks at ACM SIGMOD asked me to be a guest blogger this month, I figured I should highlight the most community-facing work I’m involved with. So I wrote up a discussion of MADlib, and the fact that this open-source in-database analytics library is now open to community contributions. (A bunch of us recently wrote a paper on the design and use of MADlib, which made my writing job a bit easier.) I’m optimistic about MADlib closing a gap between algorithm researchers and working data scientists, using familiar SQL as a vector for adoption on both fronts.

Read More »

If you follow this blog, you know that my BOOM group has spent a lot of time in the past couple years formalizing eventual consistency (EC) for distributed programs, via the CALM theorem and practical tools for analyzing Bloom programs.

In recent months, my student Peter Bailis and his teammate Shivaram Venkataraman took a different tack on the whole EC analysis problem, which they call PBS: Probabilistically Bounded Staleness. The results are interesting and extremely relevant to current practice. (See, for example, the very nice blog post by folks at DataStax.)

Many people today deal with EC in the specific context of replica consistency, particularly in distributed NoSQL-style Key-Value Stores (KVSs). It is typical to configure these stores with so-called “partial” quorum replication, to get a comfortable mix of low latency with reasonable availability. The term “partial” signifies that read and write quorums are not required to overlap (R + W ≤ N), so you are not guaranteed consistency of writes by these configurations — at best they guarantee a form of eventual consistency of final writes, but readers may well read stale data along the way. Lots of people are deploying these configurations in the field, but there’s little information on how often the approach messes up, and how badly.

Jumping off from earlier theoretical work on probabilistic quorum systems, Peter and Shivaram answered two natural questions about how these systems should perform in current practice:

  1. How many versions ago?  On expectation, if you do a read in a partial-quorum KVS, how many versions behind are you? Peter and Shivaram answer this one definitively, via a closed-form mathematical analysis.
  2. How stale on the (wall-)clock? On expectation, if you do a read in a partial-quorum KVS, how out-of-date will your version be in terms of wall-clock time? Answering this one requires modeling a read/write workload in wall-clock time, as well as system parameters like replica propagation (“anti-entropy”). Peter and Shivaram address this with a Monte Carlo model, and run the model with parameters grounded in real-world performance numbers generously provided by two of our most excellent colleagues: Alex Feinberg at LinkedIn and Coda Hale at Yammer (both of whom also guest-lectured in my Programming the Cloud course last fall). Peter and Shivaram validated their models in practice using Cassandra, a widely-used KVS. (A toy sketch of the quorum-sampling idea follows this list.)
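To get a feel for the Monte Carlo flavor of question 2, here is a toy Ruby sketch of the quorum-sampling core. This is my simplification, far cruder than Peter and Shivaram’s actual model (which also captures anti-entropy and wall-clock timing): with N replicas, a write is acknowledged by W of them and a read consults R, and a read can be stale whenever the two sets fail to overlap.

def stale_read_probability(n, r, w, trials: 100_000)
  replicas = (0...n).to_a
  stale = 0
  trials.times do
    acked    = replicas.sample(w)   # the W replicas that acked the write
    read_set = replicas.sample(r)   # the R replicas the read consults
    stale += 1 if (acked & read_set).empty?
  end
  stale.to_f / trials
end

puts stale_read_probability(3, 1, 1)   # ~0.67: the read often misses the lone acked replica
puts stale_read_probability(3, 2, 2)   # 0.0: R + W > N forces the quorums to overlap

The second configuration shows why strict quorums never return stale reads; PBS quantifies how quickly the first, “partial” case heals in practice.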

On the whole, PBS shows that being sloppy about consistency doesn’t bite you often or badly — especially if you’re in a single datacenter and you use SSDs. But things get more complex with magnetic disks, garbage collection delays (grr), and wide-area replication.

Interested in more detail?  You can check out two things:

For a class I’m teaching, I’d like to collect a list of favorite “maxims” or “aphorisms” for computer systems.

I’d be very grateful if you would add your favorites below to the comments, preferably with a link to a source that either introduces or references the maxim.  It’s OK to agree or disagree with the maxim.

I’d enjoy seeing people’s support/critiques for these below as well — may merit more focused posts another day.

Examples:

What else?

Today was a big day in the BOOM group: we launched the alpha version of Bud: Bloom Under Development. If you’re new to this blog, Bloom is our new programming language for cloud computing and other distributed systems settings. Bud is the first fully-functional release of Bloom, implemented as a DSL in Ruby.

I’ve written a lot about Bloom in research papers and on the new Bloom website, and I have lots to say about distributed programming that I won’t recap. Instead, I want to focus here on the tangible: working code. If you’re looking for something serious, check out the walkthrough of the bfs distributed filesystem, a GFS clone. But to get the flavor, consider the following two lines of code, which implement what might be called “hello, world” for distributed systems: a chat server.

nodelist <= connect.payloads
mcast <~ (mcast * nodelist).pairs { |m,n| [n.key, m.val] }

That’s it.

The first line says “if you get a message on a channel called ‘connect’, remember the payload in a table called ‘nodelist’”. The second says “if you get a message on the ‘mcast’ channel, then forward its contents to each address stored in ‘nodelist’”. That’s all that’s needed for a bare-bones chat server. Nice, right?
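For context, here is roughly how those two lines sit inside a complete Bud program. This sketch is patterned on the Bud chat example; treat the exact declarations and options as illustrative rather than canonical.

require 'rubygems'
require 'bud'

class ChatServer
  include Bud

  state do
    table :nodelist      # persistent membership (default schema: key => val)
    channel :connect     # network channel clients use to join
    channel :mcast       # network channel carrying chat messages
  end

  bloom :server_logic do
    # remember each connecting client's payload in the membership table
    nodelist <= connect.payloads
    # join each incoming message with the membership table,
    # re-addressing it to every known participant
    mcast <~ (mcast * nodelist).pairs { |m, n| [n.key, m.val] }
  end
end

ChatServer.new(:ip => "127.0.0.1", :port => 12345).run_fg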

Read More »

I often hear that many of the leading data analysts in the field have PhDs in physics or biology or the like, rather than computer science.  Computer scientists are typically interested in methods; physical scientists are interested in data.

Another thing I often hear is that a large fraction of the time spent by analysts — some say the majority of time — involves data preparation and cleaning: transforming formats, rearranging nesting structures, removing outliers, and so on.  (If you think this is easy, you’ve never had a stack of ad hoc Excel spreadsheets to load into a stat package or database!)

Putting these together, something is very wrong: high-powered people are wasting most of their time doing low-function work. And the challenge of improving this state of affairs has fallen through the cracks between the analysts and the computer scientists.

DataWrangler is a new tool we’re developing to address this problem, which I demo’d today at the O’Reilly Strata Conference. DataWrangler is an intelligent visual data transformation tool that lets users reshape, transform and clean data with an ease that surprises most people who’ve worked with data. As you manipulate data in a grid layout, the tool automatically infers information both about the data and about your intentions for transforming the data. It’s hard to describe, but the lead researcher on the project — Stanford PhD student Sean Kandel — has a quick video up on the DataWrangler homepage that shows how it works. Sean has put DataWrangler live on the site as well.
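To give a flavor of what inferring your intentions means, here is a toy programming-by-example sketch in Ruby. This is my illustration, vastly simpler than DataWrangler’s actual inference: from a single example edit, guess a delimiter rule and apply it to the remaining rows.

# candidate rules a single example edit might imply
def infer_split(example_in, example_out)
  [", ", ",", "\t", ";", " "].find { |d| example_in.split(d) == example_out }
end

rows  = ["Smith, John", "Doe, Jane", "Roe, Richard"]
delim = infer_split(rows.first, ["Smith", "John"])   # the user split one row by hand
rows.each { |r| p r.split(delim) }                   # apply the inferred rule to the rest
# prints ["Smith", "John"], ["Doe", "Jane"], ["Roe", "Richard"]

The real system searches a far richer space of transforms and ranks candidate suggestions, but the interaction model is the same: you demonstrate, it generalizes.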

Tackling these problems fundamentally requires a hybrid technical strategy.  Under the covers, DataWrangler is a heady mix of second-order logic, machine learning methods, and human-computer interaction design methodology.   We wrote a research paper about it that will appear in this year’s SIGCHI.

If you’re interested in this space, also have a look at Shankar Raman’s very prescient Potter’s Wheel work from a decade ago, the PADS project at AT&T and Princeton, recent research from Sumit Gulwani at Microsoft Research, and David Huynh’s most excellent Google Refine.  All good stuff!

In today’s episode of the Twilight Zone, a young William Shatner stumbles into a time machine and travels back into the past. Cornered in a dark alley, he is threatened by a teenage hooligan waving a loaded pistol. A tussle ensues, and in trying to wrest the gun from his assailant, Shatner fires, killing him dead. Examining the contents of the dead youth’s wallet, Bill comes to a shocking conclusion: he has just killed his own grandfather. Tight focus: Shatner howling soundlessly as he stares at his own hand flickering in and out of view.

Shatner? Or Not(Shatner)? Having now changed history, he could not have been born, meaning he could not have traveled back in time and changed history, meaning he was indeed born, meaning…?

You see where this goes. It’s the old grandfather paradox, a hoary chestnut of SciFi and AI. Personally I side with Captain Kirk: I don’t like mysteries. They give me a bellyache. But whether or not you think a discussion of “p if Not(p)” is news that’s fit to print, it is something to avoid in your software. This is particularly tricky in distributed programming, where multiple machines have different clock settings, and those clocks may even turn backward on occasion. The theory of Distributed Systems is built on the notion of Causality, which enables programmers and programs to avoid doing unusual things like executing instructions in orders that could not have been specified by the program that generated them. Causality is established by distributed clock protocols. These protocols are often used to enforce causal orderings, i.e. to make machines wait for messages. And waiting for messages, as we know, is bad.
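To make the machinery concrete, here is a minimal Lamport-clock sketch in Ruby (my illustration, not from the post): each process bumps a counter on local events, and a receiver fast-forwards past the sender’s timestamp, so causally related events always receive ordered timestamps.

class LamportClock
  attr_reader :time

  def initialize
    @time = 0
  end

  def tick                    # local event or message send
    @time += 1
  end

  def receive(sender_time)    # on receipt, jump past the sender's clock
    @time = [@time, sender_time].max + 1
  end
end

a = LamportClock.new
b = LamportClock.new
ts = a.tick        # A does some work and sends a message stamped 1
b.receive(ts)      # B's clock jumps to 2: the receive is ordered after the send
puts "A=#{a.time} B=#{b.time}"   # => A=1 B=2

Note where the cost hides: to respect causality, B cannot order its next steps until A’s message arrives. That waiting is the price the next paragraph argues we can often skip.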

So I’m here to tell you today that Causality is overrated, and we can often skip the wait. To hell with distributed clocks: time travel can be fine. In many cases it’s even fine to change history. Here’s the thing: Causality is Required Only for Non-monotonicity. I call this the CRON principle.
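Here is the intuition in miniature, via a toy Ruby example of my own (not from the post): an order-insensitive, monotonic computation like set union reaches the same answer under any delivery order, while a non-monotonic question like “is something absent?” can flip its answer as stragglers arrive. Only the latter needs causal coordination.

require 'set'

msgs = [:a, :b, :c]

# monotonic: union commutes, so every delivery order agrees
forward  = msgs.reduce(Set.new) { |s, m| s | [m] }
shuffled = msgs.shuffle.reduce(Set.new) { |s, m| s | [m] }
puts forward == shuffled   # => true, for every possible ordering

# non-monotonic: "have we NOT seen :d?" is true early, but flips to
# false if a delayed :d eventually arrives; the answer gets retracted
seen = Set.new(msgs)
puts !seen.include?(:d)    # => true... until a late :d shows up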

Read More »