
In today’s episode of the Twilight Zone, a young William Shatner stumbles into a time machine and travels back into the past. Cornered in a dark alley, he is threatened by a teenage hooligan waving a loaded pistol. A tussle ensues, and in trying to wrest the gun from his assailant, Shatner fires, killing him dead. Examining the contents of the dead youth’s wallet, Bill comes to a shocking conclusion: he has just killed his own grandfather. Tight focus: Shatner howling soundlessly as he stares at his own hand flickering in and out of view.

Shatner? Or Not(Shatner)? Having now changed history, he could not have been born, meaning he could not have traveled back in time and changed history, meaning he was indeed born, meaning…?

You see where this goes.  It’s the old grandfather paradox, a hoary chestnut of SciFi and AI.  Personally I side with Captain Kirk: I don’t like mysteries. They give me a bellyache. But whether or not you think a discussion of “p if Not(p)” is news that’s fit to print, it is something to avoid in your software.  This is particularly tricky in distributed programming, where multiple machines have different clock settings, and those clocks may even turn backward on occasion. The theory of Distributed Systems is built on the notion of Causality, which enables programmers and programs to avoid doing unusual things like executing instructions in orders that could not have been specified by the program that generated them. Causality is established by distributed clock protocols. These protocols are often used to enforce causal orderings–i.e. to make machines wait for messages. And waiting for messages, as we know, is bad.
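To make that "waiting" concrete, here is a minimal sketch of causal message delivery with vector clocks. The class and its names are my own illustration, not any particular system's API: a receiver buffers a message until every message that causally precedes it has already been delivered.

```python
# Minimal sketch of causal delivery with vector clocks (illustrative only).
# A message is stamped with the sender's vector clock; the receiver holds it
# in a buffer until it is the "next" message from that sender and the receiver
# has delivered everything the sender had seen. That buffering is the waiting.

class CausalReceiver:
    def __init__(self, my_id, n_nodes):
        self.my_id = my_id
        self.clock = [0] * n_nodes   # messages delivered so far, per sender
        self.buffer = []             # messages that arrived "too early"

    def _deliverable(self, sender, stamp):
        # Next-in-sequence from this sender, and no causal gaps from anyone else.
        if stamp[sender] != self.clock[sender] + 1:
            return False
        return all(stamp[i] <= self.clock[i]
                   for i in range(len(stamp)) if i != sender)

    def receive(self, sender, stamp, payload):
        self.buffer.append((sender, stamp, payload))
        delivered = []
        progress = True
        while progress:                      # drain the buffer as far as possible
            progress = False
            for msg in list(self.buffer):
                s, st, p = msg
                if self._deliverable(s, st):
                    self.clock[s] += 1
                    delivered.append(p)
                    self.buffer.remove(msg)
                    progress = True
        return delivered                     # anything left keeps waiting

r = CausalReceiver(my_id=2, n_nodes=3)
print(r.receive(1, [1, 1, 0], "reply"))      # [] : stalls, waiting on node 0
print(r.receive(0, [1, 0, 0], "original"))   # ['original', 'reply']
```

The only point of the sketch is that every receive call may have to stash a message and stall; that stall is exactly the cost the rest of this post argues you can often skip.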

So I’m here to tell you today that Causality is overrated, and we can often skip the wait. To hell with distributed clocks: time travel can be fine. In many cases it’s even fine to change history. Here’s the thing: Causality is Required Only to control Non-monotonicity. I call this the CRON principle.

Let’s agree for purposes of discussion that p and Not(p) is not meaningful, and that our Twilight Zone episode is the usual logical nonsense dressed up with campy schmacting. Suppose we change the plot so that Shatner goes back in time and does something positive, in the logical sense of monotonicity: generating additional information without negating any information. For example, suppose he falls in love with a young woman in the past, they marry, and have a child. Then one day, to his horror, he realizes that his wife is his own grandmother! Distasteful, I agree, but logically monotonic! If you accidentally mate with your ancestors, it doesn’t matter (from a logical point of view) whether you use a time machine to do it. This is one direction of CRON: purely monotonic programs will have consistent outcomes even in the face of “non-causal” orderings.
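Here is a toy illustration of that direction (the example and names are mine): a "program" that only accumulates facts via set union lands in the same final state under every delivery order, causal or otherwise.

```python
# Toy illustration: a purely monotonic program that only adds facts.
# Whatever order the messages arrive in, the final state is identical.

from itertools import permutations

messages = [("ancestor", "kirk", "grandma"),
            ("ancestor", "grandma", "mom"),
            ("ancestor", "mom", "kirk")]   # time travel gives us a cycle, but no negation

def run(order):
    state = set()
    for fact in order:
        state = state | {fact}    # set union adds information, never retracts it
    return state

# every delivery order yields the same final state
assert len({frozenset(run(p)) for p in permutations(messages)}) == 1
```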

That is a big deal. It means that there is a large class of distributed programs where causal orderings—a foundational topic in distributed systems theory—are not required. Note that monotonic fixpoint logic (if you allow an ordered domain like the integers) can express all polynomial-time programs. And after all, who would run an exponential program in a distributed system?  So you might justifiably argue that distributed systems have been “wasting time” all these years for no good reason.
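For a flavor of what a monotonic fixpoint computation looks like, here is my own toy example: transitive closure of an ancestor relation. Each round only adds tuples, so iterating to a fixpoint converges to the same answer no matter what order the base facts show up in.

```python
# Sketch of a monotonic fixpoint: transitive closure of an "ancestor" edge set.
# Every iteration only grows the set, so the fixpoint is order-independent.

def transitive_closure(edges):
    closure = set(edges)
    while True:
        new = {(a, c) for (a, b) in closure for (b2, c) in closure if b == b2}
        if new <= closure:          # fixpoint reached: nothing left to add
            return closure
        closure |= new

edges = {("grandpa", "dad"), ("dad", "kirk")}
print(transitive_closure(edges))
# contains ('grandpa', 'dad'), ('dad', 'kirk'), ('grandpa', 'kirk'); set order may vary
```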

But let’s be real: not all polynomial-time programs are equally good, and it’s handy for programmers to be able to express non-monotonic things like … say … updating a variable (or shooting somebody evil). So when can we allow that to happen without enforcing the forward march of time?

Answer: when the consequences of the non-monotonic expression do not entail its antecedents. That is, when the ability to shoot does not depend on the fate of the victim. Now in the physical world, we might worry a la chaos theory (or Edith Keeler) that everything is sensitive to the fate of everything else. In computer programs, though, we can plausibly guarantee the property we want here in many cases. We might be able to prove for a given program that, effectively, victims will never be ancestors of any shooter. When we can guarantee that, statically or dynamically, we can admit the non-monotonicity into the program without requiring an implementation of distributed clocks and waiting.
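Here is a rough sketch of what such a static check might look like, in the spirit of the classic stratification test (the construction, predicate names, and encoding are all mine): build a dependency graph over predicates, mark the edges that pass through negation, and flag the program only if some cycle includes a negated edge, i.e. only if a victim could be an ancestor of a shooter.

```python
# Hedged sketch of a stratification-style check: reject a program only when
# some predicate can reach itself along a dependency path that crosses negation.

def has_cycle_through_negation(deps):
    # deps: dict mapping head predicate -> list of (body_predicate, is_negated)
    def reachable(start):
        # DFS over (predicate, crossed_negation_yet) states
        seen = set()
        stack = [(start, False)]
        while stack:
            node, neg = stack.pop()
            if (node, neg) in seen:
                continue
            seen.add((node, neg))
            for body, is_neg in deps.get(node, []):
                stack.append((body, neg or is_neg))
        return seen

    return any((head, True) in reachable(head) for head in deps)

# shoot :- alive, not fled.  "fled" never depends on "shoot", so there is no
# cycle through negation and no clock protocol is needed.
safe = {"shoot": [("alive", False), ("fled", True)],
        "alive": [("born", False)]}
print(has_cycle_through_negation(safe))     # False

# grandfather paradox: alive :- born, not shoot.  shoot :- alive.
# Shooting feeds back into its own antecedents through negation.
paradox = {"alive": [("born", False), ("shoot", True)],
           "shoot": [("alive", False)]}
print(has_cycle_through_negation(paradox))  # True
```

This predicate-level test is coarser than the victim-vs-shooter argument in the text, which is really about which individuals can reach which; it is just meant to show the shape of a check you could run before deciding whether to pay for clocks.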

So when do we need distributed clocks?  Only when–exactly when–the grandfather paradox would arise in their absence. Causality is Required Only for controlling cycles in Non-monotonicity.

(This discussion comes out of my PODS talk on the Declarative Imperative from last June.  Bill Marczak, Peter Alvaro and I are in the midst of writing up a formal proof. Input welcome!)


2 Comments

  1. The “your own grandfather” analogy doesn’t seem like a useful one. The outcome is consistent at the individual level (“you” are still born) but not at the genetic level. Your genes are a complex shuffle of your parents’ genes. That your grandfather’s genes (yours) could be shuffled twice and result in an identical set of genes (yours) seems like a pretty remote possibility?

    • Paul, you’ve confused me. Or perhaps it’s mutual.

      Anyway, when you speak of still being born, maybe you mean: “after” Shatner kills his grandfather he wasn’t born, and hence his grandfather lived, and hence he was once again fated to be born. So you’re discussing the end of that sentence and suggesting he will next be born again through a different random trial of genetics? That sentence is an operational interpretation that doesn’t really make a lot of sense to me — it’s just a thought process you can go through to convince yourself that this is a paradox. From a logic point of view, if you say “p and not(p)”, it’s not as if the truth of p flickers back and forth infinitely. It’s just an inconsistent assertion.

But hey, let’s roll with this — suppose killing your grandfather results in a second roll of the genetic dice as you suggest. So as an academic matter, we might consider “p(X) and not(p(X))”, where X ranges over some domain, to be some kind of infinite random sampling of the domain. Maybe it’s like a Schrödinger’s logic program, where it stops making sense the minute you try to assign a model to it. I’m not sure what you do with that, but I gotta think somebody has taken the idea for a spin at some point…

