[Addendum: and since I posted this, I’ve discovered that electric outboard motors are a thing now – so a (mechanical) outboard motor doesn’t even need an internal combustion engine. Who knew? ]

The idea that a probability distribution could be defined for such a heterogeneous “event” would seem to be the height of absurdity. It makes the perennial question “how long is a piece of string?” seem cast-iron, nailed-down rigorous by comparison.

Well, if we classify it as an “inboard motor”, we have to ask whether the likes of paddle-wheels, hydrojets, etc. count as “inboard motors” as well – so we’re in an even worse situation.

It seems to me that if Dembski wishes to argue for design over evolution then both I(E) and K(E) need to be defined in terms of evolution.

I’m not sure how either could sensibly be done, though I guess K(E) is not meant to be calculated. Unfortunately, as we’ve seen, using |D| as an alternative tends to make the calculation even harder.

So it looks like ASC doesn’t really escape the problems of the original CSI - it’s only workable in trivial cases. And unlike the original CSI the justification for inferring design seems very unclear.

@Paul_King Matthew Pocock on FaceBook asked the same thing, so I’m taking a harder look at this.

|D| ≥ K(E)

is an inequality in Kolmogorov information theory, meaning that in practice we never know whether a better compression exists (I think you know this already). K(E) can only be bounded from above by |D|, which is where Dembski gets

SC(E) ≥ I(E) − |D|
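To make the “upper bound only” point concrete, here’s a minimal sketch (my own illustration, not anything from Dembski) using zlib: the compressed length of a string is a computable upper bound on K(E), in exactly the way |D| is, and it can never certify that no shorter description exists.

```python
import random
import string
import zlib

def compressed_bits(s: str) -> int:
    """A computable upper bound on K(E), in bits: the zlib-compressed length."""
    return 8 * len(zlib.compress(s.encode("utf-8"), level=9))

# A highly regular string compresses far below its raw size (8000 bits);
# a random-looking one mostly does not.
random.seed(0)
regular = "AB" * 500
irregular = "".join(random.choice(string.ascii_uppercase) for _ in range(1000))

print(compressed_bits(regular))    # much smaller than the 8000 raw bits
print(compressed_bits(irregular))  # much larger than the regular case

# Either way, the true K(E) could be smaller still: compression only ever
# certifies the upper bound |D| >= K(E), never that |D| is the minimum.
```

Any description language (zlib here, English sentences in Dembski’s examples) gives you a |D|, but different languages give different bounds, which is part of why the inequality never closes.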

So I think you are saying the I(E) term just isn’t necessary? Or do you mean something else?

The crucial part of this formula would appear to be the Shannon information (Kolmogorov information and |D| being semi-interchangeable, and only determining how much we need to subtract from the Shannon information to get to SC).

I would therefore suggest that we concentrate on Shannon Information.

Dan has previously stated that Shannon Information requires that the probability distribution of the event be known. @Giltil: do you accept this? (If you wish to dispute this, then please state your reasons).

@Giltil has previously stated that “information storage and processing systems within cells” are among the “scientific discoveries [that] have undermined the materialist edifice and given new vigour to the thesis of a Creator God”. This would therefore seem to be a real-world example of this framework that ID has already claimed to have addressed.

Where have Dembski and/or Meyer estimated the probability distribution of the event of “cells”, and from this calculated their Shannon Information? Please provide a citation.

Failure to provide a citation for this, or a similar real-world-example, will be taken as an admission by @Giltil that Dembski’s and Meyer’s claims are nonsense and empty rhetoric.

I would like to see an explanation of how the Shannon information can sensibly be calculated, taking evolution into account - and especially when we have descriptions like “outboard motor”. An artificial example like the one we’ve been given tells us nothing about real application of the idea, nor does it give us any reason to think that it’s an advance over a simpler probabilistic approach.

That too. I want to make sure I have an example that fits Dembski’s definitions, and make sure those definitions are pinned down. I think it’s pretty solid in that respect - stay tuned.

I would note that it’s been a week since this comment, and @Giltil has failed to respond to any query about the real-world applicability of Dembski’s formula, particularly with reference to evolution. His only response has been to request clarification of the pseudocode in your reworked example.

Would it be too soon to take @Giltil’s silence as an admission that Dembski’s formula has no real-world applicability? That it is simply a ‘toy’ that can only be made to work for heavily simplified examples, like his own and your reworking?

It seems like a waste of your time and expertise to spend them on Dembski’s pseudoscience. My point is that the calculations are just cover for an untested hypothesis that is misrepresented as fact.

Behe’s “Irreducible Complexity” works the same way. The hypothesis is that structures fitting the definition of IC can’t evolve, a hypothesis which, of course, Behe never bothers to test.

As Paul King and others here have noted, you need to show that a large description of (phenotype? genotype?) combined with a small program length for producing it, is somehow effectively-impossible to achieve by normal evolutionary processes. Dembski and Ewert (and their colleagues) have never addressed this at all. So even if this were a sign-of-design, would it be a sign-of-design-and-or-natural-selection? The world wonders.

Both events match a specification, namely pi. But the first one exhibits much more Shannon information than the second. Therefore the first exhibits more specified complexity.
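A rough sketch of that comparison, assuming a uniform distribution over decimal digit strings (so each digit carries log2(10) ≈ 3.32 bits of Shannon information) and assuming the short description “digits of pi” has the same fixed cost |D| for both events (the 16-bit figure is made up for illustration):

```python
import math

BITS_PER_DIGIT = math.log2(10)  # surprisal per digit under a uniform model

def shannon_info(n_digits: int) -> float:
    """I(E) for an n-digit decimal string under a uniform distribution."""
    return n_digits * BITS_PER_DIGIT

DESC_BITS = 16  # |D|: hypothetical fixed cost of the description "digits of pi"

def asc_lower_bound(n_digits: int) -> float:
    """Dembski's bound: SC(E) >= I(E) - |D|."""
    return shannon_info(n_digits) - DESC_BITS

long_event = asc_lower_bound(70)   # 70 digits of pi
short_event = asc_lower_bound(10)  # 10 digits of pi

print(long_event > short_event)  # True: more digits, more specified complexity
```

Since |D| is (roughly) constant for both events, the difference in specified complexity is driven entirely by the Shannon term, which is the point being made above.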

It seems to me each of the symbols in the equation is basically compressed information.

Circumference entails the definition of what a circumference is, and the circumference of what? Division requires a definition. Diameter, like circumference, implies the question: the diameter of what? It seems to me C/d actually implies much more information than those first 70 or whatever decimals of the value of pi, in the right context. The information in the expression is just highly compressed, and really only comes about through the context of (for example) this discussion. It would be impossible to derive the information without context.

I think this also shows how information is basically always contextual.