Cross-posted from the Berkeley RISELab Blog; comments should go there…

A key aspect of the RISELab agenda is to aggressively harness data—lots of it, both historical and live. Of course bits in computers don’t provide value on their own. We need a broader context for data: where it came from, what it represents, and how it gets used. Traditionally, people called this metadata: the data about our data.

Requirements for metadata have changed drastically in recent years in response to technology trends. There’s an emerging groundswell to address these new requirements and explore new opportunities. This includes our work on the broader notion of data context in the Ground system.

How should data-driven organizations respond to these changing requirements? In the tradition of Berkeley advice like how to build a bad research center and how to give a bad talk, let me offer some pointedly lousy ideas for a future-facing metadata strategy. Hopefully by reading this and doing the opposite, you'll be walking the path toward a healthy metadata future.

Without further ado, here are 3 easy steps to Megafail with Metadata.

3 Steps to Metadata Megafail

Step 1. Metadata First

Reject any data that doesn’t come well-prepared with metadata in advance.

There’s a famous old slogan in computing: “Garbage In, Garbage Out”, a.k.a. #GIGO. What a great slogan! Wikipedia dates it back to 1957, and like many ideas from back then, it’s time to bring it back.

How can you tell if you have garbage data coming in? It breaks the rules on data format, content or sourcing. Which means it’s critical to formalize those rules in advance of loading any data, a strategy I like to call #MetadataFirst.  

What’s that you say? 99% of your data arrives without meaningful metadata? My goodness—stay vigilant! If you have data with any whiff of garbage, then by all means throw that data away.

Now you might say that it's a lot cheaper to store data now than it was in 1957. So why not keep data around even if it's dirty and poorly described? And you might say that analysts armed with AI-assisted data wrangling and analytics techniques can extract valuable signals from noisy raw data. So why not "figure out" useful metadata over time?

Sure, you might say those things. But you shouldn’t! Because really… can you guarantee that those “signals” you’re extracting are true? You can’t! There are no airtight guarantees to be had unless they were enforced at the time of data collection. Because if it was garbage coming in, well … you know the saying.

#GIGO #MetadataFirst!

Step 2. Lock it up, Lock it in.

Deploy a metadata system that is as inflexible and proprietary as possible.

As we know, metadata is a critical asset: the key to enabling people to discover relevant data, assess its content, and figure out how to make use of it. A critical asset like metadata should not be left lying around. It should be locked away in a safe place, where it is difficult to access. I call that place #MetaJail.

You’re probably wondering how you can make a great metajail—one that renders your metadata really inaccessible. I’m here to help.

To begin, a good metajail should be prescriptive: it should impose a complex model that all metadata must conform to. This ensures that a diverse range of organizations—from biolabs to banks to bureaucracies—can all have an equally difficult time coercing their metadata into the provided model.

A great metajail is not only prescriptive, it’s proprietary. Vendor-specific solutions ensure both that metadata is locked up, and that the larger organization is locked in to the vendor as well. Even better is to put your metajail in a proprietary smokestack. Choose a vendor whose metajail only works with their other components: prep, ingest, storage, analytics, charting, etc. This is like a MetaPenalColony!

I hope it goes without saying that you should be the warden of the metajail. Wherever possible, require manual approvals for authorization. Then people who need to use data will have to go through you to get access. This maximizes your control over metadata, and hence your control over data, and hence your control over people! All of which increases your job security and keeps change at bay.

#MetaJail for the #MegaFail!

Step 3. One Truth

Ensure that the metadata enforces a single interpretation of data.

Traditional metadata solutions are often characterized as the Single Source of Truth for an organization. This is another great slogan from the past. #SSOT!

I like everything about this slogan. “SS” gets things started on an aggressive note, emphasizing central control in a single source (#MetaJail!) and rejection of any unapproved sources (#GIGO!).

But this step is not just a rehash of the previous two; it adds the final T: “Truth”. Now there’s a word that we don’t hear enough of in modern computing. The terrific thing about managing the Truth is that we get to define it. Remember, organizations are trying to do more and more with data, across an increasing number of use cases. Resist this trend! It is mayhem in the making. Consider a humble log file from a webserver. The Marketing department uses it to optimize the website layout. The IT department wants to use it to understand and predict server downtimes. The Online Services department wants to use it to build a product recommender system. Each one of these groups wants to extract different features and schemas from the log file, resulting in different metadata.

As you may know, Chief Marketing Officers supposedly spend more on IT than CIOs. (Which means, one assumes, that Marketing should control the Truth for your organization.) Hence in our example, the metadata describing the web logs should be defined by the Marketing department, and the log files should be transformed to suit that use case. Other representations of that information should be rejected as metadata Falsehoods.
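To make the mayhem concrete, here's a toy sketch of the web-log example. The log line, its format, and every field choice below are invented for illustration:

```python
# One hypothetical web-server log line, and three departments'
# incompatible-but-equally-valid views of it.
line = '10.0.0.7 - - [12/Mar/2017:10:31:02] "GET /pricing HTTP/1.1" 503 2326 4.02'
f = line.split()

marketing = {"page": f[5]}                                   # website-layout features
it_ops    = {"status": int(f[7]), "latency_s": float(f[9])}  # downtime-prediction features
online    = {"user_ip": f[0], "viewed_item": f[5]}           # recommender features
```

Three schemas, three candidate "truths", one log line. Under a #SSOT regime, two of these departments are simply wrong.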

So mandate a #SSOT, and manage Truth. Ask yourself: are you #SSoTOrNot?

Better Advice

At the risk of stating the obvious … all of the above steps are terrible ideas. But sometimes it’s good to rule some things out before we decide what we should do.

Want to know more about our serious ideas for metadata management? Have a look at our initial paper on Ground from CIDR 2017. As a brief excerpt, here are some design requirements for evaluating a modern metadata or data context system. It should be:

    1. Model-Agnostic. For broad adoption and usage, a data context system cannot impose opinions on metadata modeling.
    2. Relativistic. It should be possible to record multiple interpretations of the same data, allowing users and applications to agree to disagree.
    3. Politically Neutral. The system should not be controlled by a single vendor or interest group. Like Internet protocols, data context is a “narrow waist”. To succeed, it must interoperate with a wide range of services and systems from third parties, some of which have yet to be born.
    4. Immutable. Data context must be immutable and versioned; updating stored context in place is tantamount to erasing history.
    5. Scalable. It is a common misconception that metadata is small. In many settings, the data context is far larger than the data itself.
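The "Relativistic" and "Immutable" requirements are easier to see in miniature. Here's a minimal sketch in plain Python (emphatically not Ground's actual API) of a context store where every put appends a new version, nothing is overwritten, and multiple interpretations of the same data live side by side:

```python
class ContextStore:
    """Toy append-only metadata store: a sketch of "immutable" and
    "relativistic" data context, not Ground's real interface."""

    def __init__(self):
        self._versions = {}  # key -> list of values; list index = version number

    def put(self, key, value):
        # Never update in place: record a new version and return its number.
        history = self._versions.setdefault(key, [])
        history.append(value)
        return len(history) - 1

    def get(self, key, version=None):
        # Latest version by default; any historical version on request.
        history = self._versions[key]
        return history[-1] if version is None else history[version]

    def history(self, key):
        # Full lineage of interpretations, oldest first.
        return list(self._versions.get(key, []))
```

With a store like this, Marketing and IT can both register their schemas for the same web log, and neither erases the other.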

Ground’s design is based on collaborations with colleagues from the field including Awake Networks, Capital One, Cloudera, Dataguise, LinkedIn, SkyHigh Networks and Trifacta. Slides from the conference talk are available as well.

As mentioned in my previous post, Peter Alvaro turned in his PhD thesis a month back, and is now in full swing as a professor at UC Santa Cruz. In the midst of that nifty academic accomplishment, he succeeded in taking the last chapter of his thesis from our BOOM project out of the ivory tower and into use at Netflix. Peter's collaborators at Netflix recently blogged about the use of his ideas alongside their (very impressive) testing infrastructure, famously known as ChaosMonkey, a.k.a. the Netflix Simian Army. This also generated some press, vaguely inappropriate headline and all. Their work raised the flag on five potential failure scenarios in a single request interface. That's nice. Even nicer is what would have happened without the ideas from Peter's research:

Brute force exploration of this space would take 2^100 iterations (roughly 1 with 30 zeros following), whereas our approach was able to explore it in ~200 experiments.

It always feels good when your good ideas carve 28 zeroes of performance off the end of standard practice.
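The arithmetic behind those 28 zeroes is easy to check:

```python
# Rough order-of-magnitude comparison using the numbers from the Netflix post.
brute_force = 2 ** 100  # one experiment per on/off combination of ~100 potential faults
ldfi_runs = 200         # experiments LDFI actually needed

assert len(str(brute_force)) == 31  # i.e., "roughly 1 with 30 zeros following"
digits_saved = len(str(brute_force)) - len(str(ldfi_runs))
print(digits_saved)  # 28
```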

I encourage you to read the Netflix blog post, as well as Peter’s paper on the research prototype, Molly.  Or you can watch his RICON 2015 keynote on the subject.

It's been a while since I've taken the time to write a blog post here. If there's one topic that deserves a catchup post in the last few months, it's the end of an era for my former students Peter Alvaro and Peter Bailis—henceforth Professor Peter A of UC Santa Cruz, and Professor Peter B of Stanford. Each of them officially turned in their dissertation in December. Both spanned an impressive range of theory, practice and genuine applicability. There's tons of good stuff in each document. Here's a bit of an overview with references for those of you who might not be diving in to read them cover-to-cover.

Peter Alvaro was a pillar of my BOOM research project from its inception. His thesis is entitled Data-centric Programming for Distributed Systems, and it covers a beautiful arc of results:

  • It starts with his insightful exploration of commit and consensus protocols in a declarative language, and his collaboration on the BOOM Analytics work that built a ridiculously high-function HDFS clone in ridiculously few lines of code and hours of developer time
  • It also includes his foundational design of the Dedalus logic for distributed programming that has become a touchstone for the database theory community, in addition to our team
  • and his contributions to the Bloom language including the core semantics and many pragmatic features
  • It covers in depth his work on the Blazes system for analyzing eventual consistency at the level of program semantics both for Bloom and for dataflow languages like Storm, and automatically synthesizing coordination code where needed including a high-performance solution called sealing
  • and finally it presents his work on Lineage Driven Fault Injection (LDFI) and the Molly prototype, which extracted new benefits from declarative programming in large-scale testing, and was recently adapted for use at Netflix.

The thesis leaves out a bunch of additional work he did at Berkeley, including contributions to the much-cited MapReduce Online effort, and his work on distributed system testing with BloomUnit. But what I’ll remember most from his graduate years is the team-teaching we did on Programming the Cloud, where we used our work on Bloom to get undergraduates learning the fundamentals of distributed systems via live coding. This was without question the most creative and audacious teaching I’ve been involved with, and it worked surprisingly well thanks in large part to Peter’s hard work and more importantly his warm and thoughtful spirit. I’m excited to see Peter A teaching it again this coming quarter at UC Santa Cruz.

Peter Bailis’ thesis is called Coordination Avoidance in Distributed Databases, and it’s a timely tour de force of fertile ideas found in what many considered a picked-over wasteland—transaction processing. Peter’s thesis includes a range of big ideas married to practical observations, including:

  • An empirical level-set on the costs of coordination in modern distributed databases.
  • The notion of Invariant Confluence, which attacks the distributed database problem of consistency without coordination by taking Church-Rosser graphs and applying them to databases with invariants.
  • An analysis of Invariant Confluence in the wild, via mining Github repos with Ruby on Rails apps to determine how "real" programmers trade off application constraints and database constraints. Not only did Peter do the legwork here to understand what programmers do, he brought it home to force us all to ask the questions of why they do what they do, and how the push and pull of technical communities can lead to better outcomes.
  • A new and very sensible (if you’re into that kind of thing) weak isolation level for transactions called Read Atomic, with a range of efficient implementations for distributed systems via RAMP protocols.

Peter B’s thesis also leaves out a range of important work he did at Berkeley, including the popular PBS statistical-empirical explanation of why NoSQL stores seem to work, his bolt-on causal consistency work and analysis, the initial design of the Velox model-serving system with colleagues in the AMPLab, and his popular shaming of the SQL transaction world by exposing how few SQL systems provide ACID transactions by default (or at all). I remember with gratitude how Peter took on half the work of teaching graduate databases at Berkeley (the first offering in years!) while I was deeply involved in running Trifacta. And finally, it has been a bracing dose of research and academic politics having him join in the latest edition of Readings in Database Systems; he did it with grace and intelligence.

Without question the best part of teaching at Berkeley is the students you get to work with. Peter & Peter: it has been a great pleasure. I suspect that being colleagues could be even more fun. Good to have you both still in town!

A major source of frustration in distributed programming is that contemporary software tools—think compilers and debuggers—have little to say about the really tricky bugs that distributed systems developers face. Sure, compilers can find type and memory errors, and debuggers can single-step you through sequential code snippets. But how do they help with distributed systems issues? In some sense, they don't help at all with the stuff that matters—things like:

  • Concurrency: Does your code have bugs due to race conditions?  Don’t forget that a distributed system is a parallel system!
  • Consistency: Are there potential consistency errors in your program due to replicated state? Can you get undesirable non-deterministic outcomes based on network delays?  What about the potential for the awful “split-brain” scenario where the state of multiple machines gets irrevocably out of sync?
  • Coordination performance: Do you have performance issues due to overly-aggressive coordination or locking? Can you avoid expensive coordination without incurring bugs like the ones above?

These questions are especially tricky if you use services or libraries, where you don’t necessarily know how state and communication are managed.  What code can you trust, and what about that code do you need to know to trust it?

Peter Alvaro has been doing groundbreaking work in this space, and recently started lifting the veil on his results. This is a big deal. Read More »

Photo Credit: Karthick R via Compfight cc

We just finished writing up an overview of our most recent thinking about distributed consistency. The paper is entitled Consistency Without Borders, and it’s going to appear in the ACM SoCC conference next month in Silicon Valley.

It starts with two things we all know:

  1. Strong distributed consistency can be expensive and dangerous. (My favorite exposition: the LADIS ’08 conference writeup by Birman, Chockler and van Renesse. See especially the quotes from James Hamilton and Randy Shoup. And note that recent work like Spanner changes little: throughput of 10’s to 100’s of updates per second is only useful at the fringes.)
  2. Managing coordination in application logic is fraught with software engineering peril: you have to spec, build, test and maintain special-case, cross-stack distributed reasoning over time. Here be dragons.

The point of the paper is to try to reorient the community to explore the design space in between these extremes. Distributed consistency is one of the biggest CS problems of our day, and the technical community is spending way too much of its energy at these two ends of the design space.

We’ll be curious to hear feedback here, and at the conference.

The big news around here today is the public announcement of Trifacta, a company I’ve been quietly cooking over the last few months with colleagues Jeff Heer and Sean Kandel of Stanford. Trifacta is taking on an important and satisfying challenge: to build a new generation of user-centric data management software that is beautiful, powerful, and eminently useful.

Before I talk more about the background let me say this: We Are Hiring. We’re looking for people with passion and talent in Interaction Design, Data Visualization, Databases, Distributed Systems, Languages, and Machine Learning. We’re looking for folks who want to reach across specialties, and work together to build integrated, rich, and deeply satisfying software. We’ve got top-shelf funding and a sun-soaked office in the heart of SOMA in San Francisco, and we’re building a company with clear, tangible value. It’s early days and the fun is ahead. If you ever considered joining a data startup, this is the one. Get in touch.

Read More »

Bill Marczak, right, in NY Times

Bill Marczak, a PhD student in my group, does interesting research on algebraic programming languages, which I hope to describe in more detail here soon.

But Bill has recently received significant attention for work he did in his spare time—a dramatically successful cyber-espionage effort to expose government misuse of commercial surveillance software in Bahrain, the nation where Bill attended high school. The story picked up major press coverage in venues including Bloomberg and the New York Times, which also ran a more detailed article in their Bits Blog.

I’m always happy to see the press pick up on my students’ work, but this one is special.

Matt Welsh of Google—formerly of Harvard, Berkeley and Cornell—is a deservedly well-read blogger in the computing community.  He’s also somebody I’ve admired since his early days in grad school as a smart, authentic person.

Matt’s been working through his transition from Harvard Professor to Googler in public over the last year or so, and it’s been interesting to watch what he says, and the discussion it provokes.  His latest post was a little more acid than usual though, with respect to the value of academic computer science.  My response got pretty long, and in the end I figured it’d be better to toss it up in my own space.


Rather than run down work you don’t like—including maybe your own prior work, as assessed on one of your dark days—think about the academic work over the last 50 years that you admire the hell out of. I know you could name a few heroes. I bet a bunch of your blog’s readers could get together and name a whole lot more. Now imagine the university system hadn’t been around and reasonably well-funded at the time, because it was considered “inefficient when it comes to producing real products that shape the world”.   It’s sad to consider.

Here’s another thing you and your readers should consider: Forget efficiency. At least, forget it on the timescale you measure in your current job. Instead, aspire to do work that is as groundbreaking and important as the best work in the history of the field. And at the same time, inspire generations of brilliant students to do work that is even better—better than your very best. That’s what great universities are for, Matt. Remember? Sure you do. And yes—it’s goddamn audacious. As well it should be.

Read More »

When the folks at ACM SIGMOD asked me to be a guest blogger this month, I figured I should highlight the most community-facing work I'm involved with. So I wrote up a discussion of MADlib, and the fact that this open-source in-database analytics library is now open to community contributions. (A bunch of us recently wrote a paper on the design and use of MADlib, which made my writing job a bit easier.) I'm optimistic about MADlib closing a gap between algorithm researchers and working data scientists, using familiar SQL as a vector for adoption on both fronts.

Read More »

If you follow this blog, you know that my BOOM group has spent a lot of time in the past couple years formalizing eventual consistency (EC) for distributed programs, via the CALM theorem and practical tools for analyzing Bloom programs.

In recent months, my student Peter Bailis and his teammate Shivaram Venkataraman took a different tack on the whole EC analysis problem which they call PBS: Probabilistically Bounded Staleness. The results are interesting, and extremely relevant to current practice.  (See, for example, the very nice blog post by folks at DataStax).

Many people today deal with EC in the specific context of replica consistency, particularly in distributed NoSQL-style Key-Value Stores (KVSs). It is typical to configure these stores with so-called “partial” quorum replication, to get a comfortable mix of low latency with reasonable availability. The term “partial” signifies that you are not guaranteed consistency of writes by these configurations — at best they guarantee a form of eventual consistency of final writes, but readers may well read stale data along the way. Lots of people are deploying these configurations in the field, but there’s little information on how often the approach messes up, and how badly.

Jumping off from earlier theoretical work on probabilistic quorum systems, Peter and Shivaram answered two natural questions about how these systems should perform in current practice:

  1. How many versions ago?  On expectation, if you do a read in a partial-quorum KVS, how many versions behind are you? Peter and Shivaram answer this one definitively, via a closed-form mathematical analysis.
  2. How stale on the (wall-)clock?  On expectation, if you do a read in a partial-quorum KVS, how out-of-date will your version be in terms of wall-clock time? Answering this one requires modeling a read/write workload in wall-clock time, as well as system parameters like replica propagation (“anti-entropy”). Peter and Shivaram address this with a Monte Carlo model, and run the model with parameters grounded in real-world performance numbers generously provided by two of our most excellent colleagues: Alex Feinberg at LinkedIn and Coda Hale at Yammer (both of whom also guest-lectured in my Programming the Cloud course last fall.)  Peter and Shivaram validated their models in practice using Cassandra, a widely-used KVS.

On the whole, PBS shows that being sloppy about consistency doesn’t bite you often or badly — especially if you’re in a single datacenter and you use SSDs. But things get more complex with magnetic disks, garbage collection delays (grr), and wide-area replication.

Interested in more detail?  You can check out two things: