% The Flaming Right by paul murphy

Answering Briggs

Although the ideas presented in the opening chapters have been developed by hundreds of very smart people over their three-thousand-year history, they are unnecessary to science, mathematics, and the practical application of probability theory.

Instead, we need one assumption, one axiom, and a simplification:

The assumption is that the observable universe is made up of events that either happen (are real) or don't happen.

The axiom is that an event happens if and only if all the events that must happen for it to happen are real.

In other words: p(E|ei)=1 if and only if p(ei)=1 for all i.

The simplification is the adoption of traditional probability notation - so p(E)=1 means E happens; p(E)=0 means it does not; and estimates in the range 0 < P(E) < 1 are measures of our knowledge about the p(ei).
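The axiom and the notation can be sketched in a few lines of code - a minimal illustration, with hypothetical event names, of the rule that E happens exactly when every event it depends on is real:

```python
def p(event, facts):
    """Return 1 if `event` happens given the set of real events, else 0."""
    return 1 if event in facts else 0

def happens(prerequisites, facts):
    """p(E|ei) = 1 if and only if p(ei) = 1 for all i."""
    return 1 if all(p(e, facts) == 1 for e in prerequisites) else 0

# E = "kettle boils" requires both "kettle filled" and "burner lit".
prerequisites = {"kettle filled", "burner lit"}
print(happens(prerequisites, {"kettle filled", "burner lit"}))  # 1
print(happens(prerequisites, {"kettle filled"}))                # 0
```

Nothing here is probabilistic: every value is 0 or 1, which is the point of the binary model.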

Note that we could, albeit with rather more difficulty, use I95 notation and claim Woodstock for real events, Miami for the negation, and some place near Richmond for coin flips.

At first glance this may seem to require time, with each event characterized by one or more probability estimates ranging from zero to one during the period before the yes/no information is known, a transition state during which it happens or doesn't, and a later period during which the outcome is known.

However, if event E is fully determined by some set of events ei, then knowledge of all ei is equivalent to knowledge of all (ei AND E).

In other words, knowledge of all ei means we don't have to wait to find out about E, and because ei is just a set of events, the same logic applies to every event in it.

What this means is that the first event to happen determines all other outcomes, and an outside observer with perfect oversight would not need to invent or perceive time to see everything.
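The claim above can be made concrete with a toy sketch (event names hypothetical): if every event is fully determined by prior events, then fixing the first event's outcome resolves the whole set, with no further appeal to time:

```python
def resolve(dependencies, known):
    """Propagate outcomes: an event happens iff all its prerequisites do."""
    known = dict(known)
    changed = True
    while changed:
        changed = False
        for event, prereqs in dependencies.items():
            if event not in known and all(e in known for e in prereqs):
                known[event] = all(known[e] for e in prereqs)
                changed = True
    return known

# e1 is the first event; everything downstream is determined by it.
deps = {"e2": ["e1"], "e3": ["e1"], "E": ["e2", "e3"]}
print(resolve(deps, {"e1": True}))  # every outcome, E included, is fixed
```

The loop visits events in no particular temporal order; ordering only matters to an observer who learns the outcomes piecemeal.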

Notice, however, that while there is no need to inject time into the set of events under observation, the human observer has to distinguish p(ei) from P(E) - meaning that observation requires an ordered sequence including a before and after.

Notice further that events happen or do not happen, and may therefore seem to be quantized - but this is so only in appearance, because it's the observer who decides what constitutes an observable event.

Imagine a single photon as a flock of birds keeping perfect formation as they fly a tight spiral around a single line, and you can imagine events happening at Planck scale - but you can just as reasonably imagine events as having human, or even stellar, scale; there isn't any difference in the math.

The reason this works is that we're applying a purely binary model to the behavior of interest - did she smile, did the coin come up heads? - and not to either the cause of the event or the nature of the actors.

Notice too that Wheeler's multiverse hypothesis requires at least one event to both happen and not happen, and that's not possible - and arguing indeterminacy on an external time scale allows Wheeler, but is recursive - so Penrose/Hawking variations really just sidestep the stopping rule (i.e. the first event) and so amount to little more than turtles all the way down.

More specifically, this interpretation means that causality in general, and statistically inferred causality in particular, is chimerical. Thus p(E|ei)=1 does not mean that the ei cause E; it only means that the events in ei are the necessary and sufficient conditions for E.

In the real world, however, we almost never have enough information to predict anything with certainty, and so use observed frequencies (whether consciously recorded or not) to guide our guesses - and that usually works. Consider, in this context, a coin flip: in reality the outcome is fully determined by the forces acting on the coin, but in practice the difficulties inherent in measuring or controlling those forces so outweigh the value of the prediction that we usually just model the process using p(heads)=0.5 as an estimate - and this estimate, which actually means that we have no information about the outcome, is usually good enough for practical calculations involving bar bets and other real-world matters.
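A short simulation makes the practical point: each individual flip below is fully determined (a seeded generator stands in for the unknown forces), yet with no information about those forces the observed frequency is all we have, and it converges on the p(heads)=0.5 estimate:

```python
import random

random.seed(0)  # a deterministic "universe" standing in for unknown forces

flips = [random.random() < 0.5 for _ in range(100_000)]
estimate = sum(flips) / len(flips)
print(round(estimate, 2))  # close to 0.5
```

The estimate says nothing about any particular flip; it summarizes our ignorance of the forces acting on the coin, which is exactly what makes it good enough for bar bets.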

We don't have perfect knowledge or anything even close to it, so while we can improve our estimate of the probability that E is real (happens) by improving our knowledge of the events ei, and so of our guesses about their probabilities, we generally don't know enough about the events in ei even to list the more proximate ones, never mind predict their probabilities with certainty. As a result we generally can't know whether E happens or not until after it does or doesn't.

But that's just us responding according to our limitations: an intelligence with perfect knowledge of all the events needed for a decision on action or inaction will know whether I'm going to eat, in the next ten minutes, the apple fritter sitting on the corner of my desk here - meaning that in a fully deterministic universe my illusory free will is indistinguishable from the real thing, because I couldn't be sure until just ... chomp.

So, in summary, one assumption and one axiom get us:

  1. free will in a fully deterministic universe;

  2. an understanding of P(E) as a function (and the numerical value of the estimate as a measure) of the information we have about the p(ei);

  3. a bunch of interesting derivations and/or research directions.


Paul Murphy, a Canadian, wrote and published The Unix Guide to Defenestration. Murphy is a 25-year veteran of the I.T. consulting industry, specializing in Unix and Unix-related management issues.