
Category Archives: academia

Hadoop is not healthy for children and other living things.

I sat at Berkeley CS faculty lunch this past week with Brian Harvey and Dan Garcia, two guys who think hard about teaching computing to undergraduates.  I was waxing philosophical about how we need to get data-centric thinking embedded deep into the initial CS courses—not just as an application of traditional programming, but as a key frame of reference for how to think about computing per se.

Dan pointed out that he and Brian and others took steps in this direction years ago at Berkeley, by introducing MapReduce and Hadoop in our initial 61A course.  I have argued elsewhere that this is a Good Thing, because it gets people used to the kind of disorderly thinking needed for scaling distributed and data-centric systems.

But as a matter of both pedagogy and system design, I have begun to think that Google’s MapReduce model is not healthy for beginning students.  The basic issue is that Google’s narrow MapReduce API conflates logical semantics (define a function over all items in a collection) with an expensive physical implementation (utilize a parallel barrier). As it happens, many common cluster-wide operations over a collection of items do not require a barrier even though they may require all-to-all communication.  But there’s no way to tell the API whether a particular Reduce method has that property, so the runtime always does the most expensive thing imaginable in distributed coordination: global synchronization.

From an architectural point of view, a good language for parallelism should expose pipelining, and MapReduce hides it. Brian suggested I expand on this point somewhere so people could talk about it.  So here we go.
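To make the distinction concrete, here is a small sketch in Python (hypothetical functions, not Hadoop's actual API) of the same logical aggregate expressed both ways: the MapReduce-style version that must wait at a barrier, and a pipelined version that streams early answers as data arrives.

```python
# A sketch (plain Python, hypothetical names) of the conflation described
# above. MapReduce's reduce(key, values) is handed ALL the values at once,
# so the runtime must impose a global barrier before any reducer can start.
def barrier_sum(key, values):
    return sum(values)  # nothing can be emitted until every mapper finishes

# The same logical semantics, pipelined: consume values as they stream in
# from mappers and emit monotonically improving partial answers early.
def pipelined_sum(stream):
    total = 0
    for v in stream:   # values trickle in over the network, in any order
        total += v
        yield total    # early results; the final yield equals barrier_sum

# The logical function is identical; only the physical plan differs. The
# MapReduce API gives the programmer no way to say "sum is incremental,"
# so the runtime conservatively synchronizes the whole cluster.
```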

Read More »

In today’s episode of the Twilight Zone, a young William Shatner stumbles into a time machine and travels back into the past. Cornered in a dark alley, he is threatened by a teenage hooligan waving a loaded pistol. A tussle ensues, and in trying to wrest the gun from his assailant, Shatner fires, killing him dead. Examining the contents of the dead youth’s wallet, Bill comes to a shocking conclusion: he has just killed his own grandfather. Tight focus: Shatner howling soundlessly as he stares at his own hand flickering in and out of view.

Shatner? Or Not(Shatner)? Having now changed history, he could not have been born, meaning he could not have traveled back in time and changed history, meaning he was indeed born, meaning…?

You see where this goes.  It’s the old grandfather paradox, a hoary chestnut of SciFi and AI.  Personally I side with Captain Kirk: I don’t like mysteries. They give me a bellyache. But whether or not you think a discussion of “p if Not(p)” is news that’s fit to print, it is something to avoid in your software.  This is particularly tricky in distributed programming, where multiple machines have different clock settings, and those clocks may even turn backward on occasion. The theory of Distributed Systems is built on the notion of Causality, which enables programmers and programs to avoid doing unusual things like executing instructions in orders that could not have been specified by the program that generated them. Causality is established by distributed clock protocols. These protocols are often used to enforce causal orderings–i.e. to make machines wait for messages. And waiting for messages, as we know, is bad.
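For readers who haven't seen one, here is a minimal sketch (in Python, with hypothetical names) of a Lamport clock, the simplest of these distributed clock protocols: each machine's counter advances past any timestamp it receives, which is precisely how message-passing establishes causal order.

```python
# A minimal Lamport-clock sketch (a toy class, not a real library):
# logical timestamps that respect causality across machines.
class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1           # each local step advances the clock
        return self.time

    def send(self):
        self.time += 1           # stamp the outgoing message
        return self.time

    def receive(self, msg_time):
        # Jump past the sender's timestamp: everything the receiver does
        # from here on is ordered after the send. Protocols that *enforce*
        # causal delivery go further, buffering (waiting on) messages that
        # arrive "early" -- the waiting this post wants to avoid.
        self.time = max(self.time, msg_time) + 1
        return self.time
```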

So I’m here to tell you today that Causality is overrated, and we can often skip the wait. To hell with distributed clocks: time travel can be fine.  In many cases it’s even fine to change history. Here’s the thing: Causality is Required Only for Non-monotonicity. I call this the CRON principle.
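A toy illustration of the principle, sketched in Python: a monotone computation like set union gives the same answer under every message ordering, so no clock protocol is needed; a non-monotone test, by contrast, can change its mind when a straggler arrives, and that is where causality earns its keep.

```python
from itertools import permutations

# Monotone: set union only grows, so every arrival order yields the same
# final answer. No causal ordering (no waiting) is required.
def union_all(messages):
    acc = set()
    for m in messages:
        acc |= {m}
    return acc

msgs = ["a", "b", "c"]
assert all(union_all(p) == {"a", "b", "c"} for p in permutations(msgs))

# Non-monotone: "is x absent?" can flip from True to False when a late
# message shows up -- history gets "changed." Here causal ordering
# genuinely matters.
def is_absent(x, messages_so_far):
    return x not in messages_so_far

assert is_absent("c", {"a"})           # early answer: True...
assert not is_absent("c", {"a", "c"})  # ...retracted when "c" arrives
```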

Read More »

Bright and early next Monday morning I’m giving the keynote talk at PODS, the annual database theory conference.  The topic: (a) to summarize seven years of experience using logic to build distributed systems and network protocols (including P2, DSN, and recent BOOM work), and (b) to set out some ideas about the foundations of distributed and parallel programming that fell out from that experience.

I posted the paper underlying the talk, called The Declarative Imperative: Experiences and Conjectures in Distributed Logic. It’s written for database theoreticians, and in a spirit of academic fun it’s maybe a little over the top.  But I’m hopeful that the main ideas can clarify how we think about the practice of building distributed systems, and the languages we design for that purpose.  The talk will be streamed live and archived (along with keynotes from the SIGMOD and SOCC conferences later in the week).

Below the break is a preview of the big ideas.  I’ll post about them at more length over the next few weeks, hopefully in more practical/approachable terms than I’m using for PODS.

Read More »

I am spoiled — I get to work with a brilliant bunch of students and colleagues. They’ve been doing some really amazing research recently, and I’m happy to report that they’re getting some of the recognition they deserve:

I’ve blogged about all these projects before, and since most of them are in their initial stages I fully expect there will be more to report in the future.  Meanwhile, it’s nice to see these folks getting recognized for their work, and it will be interesting to get some feedback at the conferences.

Congrats, folks!

It’s been about 6 years now that we’ve been working on declarative programming for distributed systems — starting with routing protocols, then network overlays, query optimizers, sensor network stacks, and more recently scalable analytics and consensus protocols.

Through that time, we’ve struggled to find a useful middle ground between the pure logic roots of classical declarative languages like Datalog, and the practical needs of real systems managing state across networks. Our compromises over the years allowed us to move forward, build real things, and learn many lessons. But they also led to some semantic confusion — as noted in papers by colleagues at Max Planck and AT&T.

Well, no more. We recently released a tech report on Dedalus, a formal logic language that can serve as a clean foundation for declarative programming going forward.  The Dedalus work is fairly theoretical, but having tackled it we’re in a strong position to define an approachable and appealing language that will let programmers get their work done in distributed environments. That’s the goal of our Bloom language.

The key insight in Dedalus is roughly this:

Time is essential; space is a detail.
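As a rough illustration of what that slogan means (rendered in Python rather than Dedalus syntax): in Dedalus, even "mutable state" is just a relation inductively re-derived at each timestep, roughly the rule p(X)@next :- p(X), not del_p(X), so update-in-place drops out of the semantics entirely.

```python
# A toy rendering (Python, not actual Dedalus syntax) of Dedalus-style
# temporal induction: the relation at the next timestep is *derived* from
# the relation now, rather than mutated in place.
def next_timestep(p_now, deletions, insertions):
    # persistence rule: a fact survives to the next timestep unless deleted
    return (p_now - deletions) | insertions

p = {"a", "b"}                                           # relation p at time t
p = next_timestep(p, deletions={"a"}, insertions={"c"})  # p at time t+1
assert p == {"b", "c"}   # "mutation" is just deduction across time
```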

Read More »

It’s official: the name of the programming language for the BOOM project is: Bloom.

I didn’t intend to post about Bloom until it was cooked, but two things happened this week that changed my plans.  The first was the completion of a tech report on Dedalus, our new logic language that forms the foundation of Bloom.  The second was more of a surprise: Technology Review decided to run an article on our work, and Bloom was the natural way to talk about it.

More soon on our initial Dedalus results.

Papers are being solicited for the ACM’s new symposium on cloud computing (SOCC) — and they’re due pretty soon, January 15. Both research and industrial papers are welcome. The folks involved (present company excepted) are really strong, and we expect to have some very interesting invited speakers as well. Interesting enough to entice folks to Indianapolis!

The best parts of these smaller symposia are the give-and-take of people in the room talking about each other’s work. So send in your best ideas and plan to come.

More info including the call for papers at http://research.microsoft.com/socc2010

Thanks to Boon Thau Loo and Stefan Saroiu for a very interesting workshop on Networking Meets Databases (NetDB), and especially for inviting a high-octane panel to debate the success and directions of Declarative Networking.

The panel members included:

  • Fred Baker, Cisco
  • Joe Hellerstein, Berkeley
  • Eddie Kohler, UCLA and Meraki
  • Arvind Krishnamurthy, U Washington
  • Petros Maniatis, Intel Research
  • Timothy Roscoe, ETH Zurich

Butler Lampson made numerous comments from the audience, and given his insight and stature was viewed by most as something of an additional panelist.

I was happy to see a very vigorous debate!  Lots of interesting points made, no punches pulled.  My slides are posted here, and include an ad hoc manifesto for how to move forward.

Read More »


For the last year or so, my team at Berkeley — in collaboration with Yahoo Research — has been undertaking an aggressive experiment in programming.  The challenge is to design a radically easier programming model for infrastructure and applications in the next computing platform: The Cloud.  We call this the Berkeley Orders Of Magnitude (BOOM) project: enabling programmers to develop OOM bigger systems in OOM less code.

To kick this off we built something we call BOOM Analytics [link updated to Eurosys10 final version]: a clone of Hadoop and HDFS built largely in Overlog, a declarative language we developed some years back for network protocols.  BOOM Analytics is just as fast and scalable as Hadoop, but radically simpler in its structure.  As a result we were able — with amazingly little effort — to turbocharge our incarnation of the elephant with features that would be enormous upgrades to Hadoop’s Java codebase.  Two of the fanciest are:

Read More »


[Update 1/15/2010: this paper was awarded Best Student Paper at ICDE 2010!  Congrats to Kuang, Harr and Neil on the well-deserved recognition!]

[Update 11/5/2009: the first paper on Usher will appear in the ICDE 2010 conference.]

Data quality is a big, ungainly problem that gets too little attention in computing research and the technology press. Databases pick up “bad data” — errors, omissions, inconsistencies of various kinds — all through their lifecycle, from initial data entry/acquisition through data transformation and summarization, and through integration of multiple sources.

While writing a survey for the UN on the topic of quantitative data cleaning, I got interested in the dirty roots of the problem: data entry. This led to our recent work on Usher [11/5/2009: Link updated to final version], a toolkit for intelligent data entry forms, led by Kuang Chen.

Read More »