Dembski: Building a Better Definition of Intelligent Design

Unless I am misunderstanding (very likely, this is quite a complicated discussion) I would disagree. I think @Giltil’s example provides us with crucial information regarding the applicability of Dembski’s formulas.

If @Giltil is correct and pi exhibits a high degree of CSI, this leads to one of two conclusions.

  1. Pi was designed.

  2. CSI cannot be used to detect design.

It seems quite obvious, to me, that (1) is false. Therefore, (2) must be true.

Perhaps @Giltil would care to argue that pi is designed, thereby rescuing Dembski’s argument.

5 Likes

I don’t think the argument is that “pi” is designed, only that the system is designed to output pi.

However, there are some serious issues with that.

First, S(E) seems to be defined relative to a particular chance hypothesis - so the most ASC could do is eliminate that chance hypothesis. (In fact the supposed “Shannon information” seems to be the old CSI figure again.)

Eliminating a chance hypothesis, however, is not in itself enough to conclude design. So I guess ASC has to go back to the exhaustive elimination of all possible “chance” scenarios (including those based on regularities which are not themselves designed).

Worse, since we’re dealing with hindsight we really want to be using the probability of a sequence that we’d accept as having a valid specification. I can’t see that the use of K(E) gives us that at all. And the substitution of |D| for K(E) seems completely senseless - especially as the calculation seems to be quite arbitrary.

So basically this looks like the old CSI with weird kludges that don’t help at all.

2 Likes

The “fine tuning” people might claim that pi is designed.

2 Likes

What “system” would this be? We only have the hypothetical “Peter”'s word for it that his machine even exists. And if we disbelieve his claim that this purported machine generates random digits (+ comma), then why should we accept that his machine even exists?

Pi exists independent of this purported “system”, so the existence of a system-designed-to-output-pi would seem to be an unnecessary hypothesis.

That seems a pointless quibble. If the machine doesn’t exist then it can’t actually operate as Peter says. And it would be a rather silly thing to lie about (although the example is rather silly anyway).

So the “Designer” could have designed Pi to be 3.24159…, if they’d wanted it to be?

I think this example is not carefully defined, and discussing it is likely to go off the rails as a result. What is actually being tested here is whether Peter’s machine performs as described. Given the sequence it produced, this is unlikely to be true. This means the assumption needed to calculate SI (all events are equally likely) is not valid.
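To make that concrete, here’s a minimal sketch (my own illustration, not anything from Dembski) of the I(E) calculation under the uniformity assumption; the 11-symbol alphabet (ten digits plus a comma) and the sample sequence follow the example as described:

```python
import math

# Shannon information of a sequence E under the hypothesis that each
# symbol is drawn uniformly and independently from the alphabet.
# Peter's machine allegedly uses ten digits plus a comma: 11 symbols.
def shannon_info_uniform(sequence: str, alphabet_size: int = 11) -> float:
    # Under uniformity, P(E) = (1/alphabet_size)^len(E),
    # so I(E) = -log2 P(E) = len(E) * log2(alphabet_size).
    return len(sequence) * math.log2(alphabet_size)

print(shannon_info_uniform("3,1415926"))  # 9 * log2(11) ≈ 31.1 bits
```

If the machine is instead rigged to always print that sequence, P(E) = 1 and I(E) = 0, so the figure above evaporates along with the uniformity assumption.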

There is a good reason I gave a carefully defined example. Please consider working with that instead.

No. You are confusing a description with a representation. The five paintings are different artful representations of the same thing, i.e., the Eiffel Tower, not descriptions. But the five paintings share the same description, i.e., Eiffel Tower.

No, I’m not. You are confusing a description with a specification, and confusing a depiction with the object depicted.

None of those five paintings have the description “Eiffel Tower” - they only have the description “Picture of Eiffel Tower”. Since that description applies to all of them, it is not a specification of any of them, just like your description “pi” applies to both “3.1415926” and “c/d” and is not a specification of either of them.

If you stopped trying to use analogies and stuck to describing reality, you might fare better.

3 Likes

@roy @Giltil
There is a bit to unpack here. Kolmogorov Information describes “lossless” compression, meaning you can recover all the original details from the compressed version.

The specification “Paint a picture of the Eiffel Tower” could be associated with any painting that includes the Eiffel Tower.
“Paint a picture of the Eiffel Tower as seen at sunset on a clear day from L’île aux Cygnes in the style of Van Gogh” requires more KI, and would not match nearly so many paintings. A specification to exactly reproduce a specific painting would not be compressible at all (maximum KI). The only way to send this message would be to make an exact replica and ship it to the receiver.

Unstated here, but necessary to put this into practice, is the background knowledge assumed. If the artist given the specification doesn’t know what the Eiffel Tower looks like, or can’t go to Paris to see it for themselves, then nothing but a complete specification of every detail will do. OTOH, if the artist can be assumed to have access to detailed descriptions and photos, then a very general description is enough.
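As a crude illustration of the compression intuition (a rough proxy only: true Kolmogorov information is uncomputable, and a general-purpose compressor is at best a loose stand-in), compressed length tracks the idea that a more detailed specification carries more KI:

```python
import zlib

# Crude stand-in for Kolmogorov information: length in bytes of a
# losslessly compressed string. (True KI is uncomputable; this only
# illustrates the direction of the comparison.)
def compressed_len(s: str) -> int:
    return len(zlib.compress(s.encode("utf-8"), level=9))

short_spec = "Paint a picture of the Eiffel Tower"
long_spec = ("Paint a picture of the Eiffel Tower as seen at sunset "
             "on a clear day from L'île aux Cygnes in the style of Van Gogh")

print(compressed_len(short_spec))  # fewer bytes: matches many paintings
print(compressed_len(long_spec))   # more bytes: matches far fewer paintings
```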

Now we really need to ask Dembski what he means by a description of the bacterial flagellum as “an outboard motor”. @Tim already made this point above. A human might have access to the necessary background information that (with sufficient ingenuity) would allow them to build a working outboard motor …

BUT bacteria do not have the background information that humans have. Specifying “build an outboard motor” to a bacterium is not going to generate a flagellum. Only the full description of the DNA sequences needed to produce the necessary proteins will do, and the only background information is the laws of chemistry. (And this is why KI doesn’t have any application in biology.)

There is a serious disconnect between what Dembski writes and what can actually exist. Human communications have no bearing on bacterial function, yet he claims that “an outboard motor” is telling us something meaningful about the bacterial flagellum. It doesn’t.

2 Likes

@Giltil
I should correct my earlier statement that Shannon Information cannot be calculated for a single event; there are examples where it is used in that context. Making any useful interpretation should still require the SI of the entire distribution.

There are potential applications using the point-probability of events from the same distribution, or comparing the probability of a single event under two different distributions. These do not require taking the log of the probability to get SI. For that matter Dembski doesn’t need SI either; he could use the “algorithmic probability” in place of KI, and define SC as a probability**.

** This requires a joint probability distribution linking E and D, which is simple if we know the distributions of E and D (because the Event and Description are supposed to be independent). It is also nonsensical, because it implies the events have random descriptions and vice-versa.
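For what it’s worth, here’s a minimal sketch of that point (the numbers are invented for illustration): the SI of a single event is just the negative log of its point probability, and comparing two chance hypotheses needs only the probabilities themselves:

```python
import math

# Shannon information (surprisal) of a single event with probability p.
def surprisal(p: float) -> float:
    return -math.log2(p)

# The same event under two different chance hypotheses:
p_uniform = (1 / 11) ** 9  # nine symbols from a uniform 11-symbol alphabet
p_biased = 0.5             # a machine that usually prints this sequence

print(surprisal(p_uniform))  # ≈ 31.1 bits: very surprising if uniform
print(surprisal(p_biased))   # 1.0 bit: unsurprising if the machine is rigged

# Comparing the hypotheses needs only the ratio of the probabilities;
# the log adds nothing essential.
print(p_biased / p_uniform)  # likelihood ratio
```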

I think I’ve figured out the reason for subtracting the Kolmogorov information.

Dembski is attempting to estimate (or set an upper bound on) the effect of multiplying the probability of the event by the number of different events that could be produced by algorithms of equal or lesser length. Which is an attempt to compensate for the “Texas sharpshooter” effect of viewing the event in hindsight.

It’s still rather odd, and at best a heuristic rather than any sort of rigorous method but it’s not completely senseless. It’s far from the biggest problem with Dembski’s method.
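To see the bookkeeping (my gloss on the interpretation above, not Dembski’s own wording): subtracting K bits is algebraically identical to multiplying the probability by 2^K, and there are fewer than 2^(K+1) binary descriptions of length at most K - the “number of alternative targets” that the sharpshooter correction is groping at:

```python
import math

# ASC-style quantity: I(E) - K(E), with I(E) = -log2 P(E).
def asc(p_event: float, k_bits: int) -> float:
    return -math.log2(p_event) - k_bits

# Equivalent form: -log2( P(E) * 2^K ), i.e. the probability inflated by
# the number of alternative short descriptions (fewer than 2^(K+1)
# binary strings have length <= K).
def asc_as_inflated_prob(p_event: float, k_bits: int) -> float:
    return -math.log2(p_event * 2 ** k_bits)

p, k = 2 ** -40, 16
print(asc(p, k))                   # 24.0
print(asc_as_inflated_prob(p, k))  # 24.0, identical by log algebra
```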

1 Like

Okay. So let me reformulate my point then:
No. You are confusing a description with a representation. The five paintings are different artful representations of the same thing, i.e., the Eiffel Tower, not descriptions. But the five paintings share the same description, i.e., « picture of Eiffel Tower ».

You’re wrong here, for specifications can come with varying degrees of detail. The description « picture of Eiffel Tower » is a specification, as is the one offered by Dan, i.e., « a picture of the Eiffel Tower as seen at sunset on a clear day from L’île aux Cygnes in the style of Van Gogh ». The latter is more detailed than the first, but both are still specifications. To take a biological example, « A molecule that binds ATP » is a specification that applies to many different molecules.

1 Like

I think @Joe_Felsenstein will agree with your interpretation of what Dembski is trying to do. I think it’s still wrong, and it takes a while to explain why.

Which is a pretty good description of Specified Complexity as defined in Dembski (2005), and the subject of my blog post.
In that version P[E] is multiplied by \phi(T), which is a function of KI.

I think this new version of SC is probably the same as the old up to a constant multiplier, but I haven’t actually tried to work it out yet. From Dembski’s 2005 paper …

\chi = -log( M \times N \times \phi(T) \times P[T|H])

taking the log through …

\chi = -log( M \times N) - log( \phi(T) ) - log(P[T|H])

and substituting the new terminology (I’m not sure about this step) …

\chi = -log( M \times N) - K(D) + I(E)

and rearranging gives …

\chi = -log( M \times N) + I(E) - K(D)

where M \times N is the “multiplicative resources”: a constant multiplier, or an additive constant after taking the log.
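Here’s a quick numeric check of that algebra, under my assumed substitutions \phi(T) = 2^{K(D)} and I(E) = -log_2 P[T|H] (the specific values are placeholders):

```python
import math

# Placeholder values; only the algebra matters here.
M, N = 10 ** 60, 10 ** 60  # multiplicative resources
K_D = 20                   # bits in the description, so phi(T) = 2^K_D
P = 2 ** -150              # P[T|H]
phi_T = 2 ** K_D

# Dembski (2005), as written above:
chi_2005 = -math.log2(M * N * phi_T * P)

# Rearranged form: -log2(M*N) + I(E) - K(D)
I_E = -math.log2(P)
chi_new = -math.log2(M * N) + I_E - K_D

print(chi_2005, chi_new)  # identical up to floating-point rounding
```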

I would be curious to know if anyone in the ID crew agrees with my math and substitutions here.
Maybe @colewd can ask someone?

…
My objection in the blog post was that

M \times N \times \phi(T) \times P[T|H]

is not a probability, but rather the expected value of a binomial, which can be greater than one. At this point Dembski no longer has a probability, and his “information measure” is undefined for any event which is not specified complex, defeating the whole purpose of using CSI to detect design.

I later modified my interpretation in a discussion at Panda’s Thumb; \phi(T) cannot be used as a multiplier of probability because it does not represent binomial trials. Dembski doesn’t even have an expected value, only an arbitrary number with no meaning.
[Edit: added link to PT]

This number with no meaning is the same place we end up when trying to take the difference of SI and KI in Dembski’s new scheme.

This is a lot of material written off the top of my head, and so a good time for me to pause and think about it.

The example illustrates how specified complexity can be used to debunk a false claim, here Peter’s claim that his machine always works as a random digit/comma generator. What is silly here?

So “A molecule that binds ATP” specifies a class of molecules, not a particular molecule. If “A molecule that binds ATP” does not describe a specific molecule, it cannot be used to determine the SC of a specific molecule.

Similarly, the description “pi” applies to many different strings, and cannot be used to calculate the SC of any one of them; and “outboard motor” is far too generic to be used to calculate the SC or the information content of a particular flagellum or the proteins and genes associated with it. The equation you used above is no longer usable since it has become

SC(E_p) = I(E_p) - K(E_p) \ge I(E_p) - |D(p_1 + p_2 + p_3 + \dots + p_n)|

But the Kolmogorov information of a specific event can be higher than the number of bits in a generic description that applies to lots of events. If it’s possible that K(Ep) > |D(lots of p)|, you end up with

SC(E_p) = I(E_p) - K(E_p) \;???\; I(E_p) - |D(\Sigma p)|

and every calculation you have done falls apart, as does any argument based on them.
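A toy numeric version of the failure, with numbers invented purely for illustration:

```python
# Toy numbers, invented for illustration only.
I_E = 500        # Shannon information of the specific event, in bits
K_E = 400        # Kolmogorov information of that specific event
D_generic = 120  # bits in a generic description covering many events

sc_true = I_E - K_E           # 100: SC using the event's own K(E)
sc_claimed = I_E - D_generic  # 380: "SC" using the generic description

# The substitution only gives a valid lower bound if |D| >= K(E).
# Here K_E > D_generic, so sc_claimed overstates SC by 280 bits.
print(sc_true, sc_claimed, sc_claimed >= sc_true)
```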

Dembski’s use of “outboard motor” fails for exactly the same reason.

I think we’re done here.

4 Likes

P.S. If you doubt the above, try using the description “something”, which is universally applicable and so contains no information at all, so |D_p| is zero[1]. Then

I(E_p) - K(E_p) \ge I(E_p)

which, by subtracting I(E_p), leads to

K(E_p) \le 0

which is clearly wrong, and would lead to

SC(E_p) \ge I(E_p)

which renders SC pointless since you might as well just use Shannon information.

The consequence of allowing a non-specific ‘specification’ is the reduction of your argument to an incorrect irrelevance.


  1. Ok, nitpickers, it could be 1 bit that distinguishes it from “nothing”. This doesn’t change the overall conclusion. ↩︎

3 Likes

Multiplying a probability is fine when you have mutually exclusive, equiprobable events from the same distribution. E.g. the chance of an ordinary die rolling a 1, 2, 3 or 4 is 4 × 1/6 = 2/3.
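(A two-line enumeration check of the die example, for the record:)

```python
from fractions import Fraction

# P(roll in {1,2,3,4}) for a fair die: four mutually exclusive,
# equiprobable outcomes, so the probabilities simply add.
p = sum(Fraction(1, 6) for face in (1, 2, 3, 4))
print(p, p == 4 * Fraction(1, 6))  # 2/3 True
```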

Now in this case we would be dealing with events from the same distribution and we can assume mutual exclusivity - as long as we remember that ASC is relative to a single chance hypothesis - which makes it pretty useless as an indicator of design, but there you are.

Equiprobability is more dubious as an assumption, but if the actual event is more likely than the average then there’s no problem; if it is significantly under the average then it’s dodgier, but that should be unusual (at least if that hypothesis is correct - and if it isn’t, then eliminating it does no harm).

So multiplying isn’t completely nuts - I’m more concerned that the factors chosen are rather arbitrary, and even more about the problem of practicality. In the end it seems to me that the whole thing is just an attempt to justify Dembski’s use of the length of the description in words - and I think it fails there.

1 Like

The artificiality of it and the fact that it shows no value at all in ASC. What’s the point of it?

2 Likes

The moment we suspect the machine is not generating the sequence in the manner described (a discrete uniform distribution), the value of I(E) is also suspect.
In the extreme case, if the machine always generates the same sequence, then I(E)=0.

The type of event that SC is supposed to detect implies the distribution is not discrete uniform, invalidating the assumption needed to calculate SC.

The solution is to generate many sequences from the machine and apply standard statistical methods.
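For instance, a sketch of that approach, assuming we can record a long run of output from the machine; scipy’s chisquare is one off-the-shelf goodness-of-fit test (the observed run here is made up to mimic a machine stuck on one sequence):

```python
from collections import Counter
from scipy.stats import chisquare

# Under Peter's claim, all 11 symbols (digits 0-9 plus ',') are
# equally likely at every position.
alphabet = list("0123456789,")
observed_run = "3,1415926" * 100  # stand-in data; use real machine output

counts = Counter(observed_run)
observed = [counts.get(sym, 0) for sym in alphabet]

# Chi-square goodness-of-fit against the discrete uniform distribution
# (chisquare defaults to uniform expected frequencies).
stat, p_value = chisquare(observed)
print(stat, p_value)  # tiny p-value: reject "random digit/comma generator"
```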

1 Like