Dembski: Building a Better Definition of Intelligent Design

I’m getting rather tired of unrealistic “examples” of Specified Complexity that lack any real-world significance.

So I thought I’d pick a real-world example that the ID community seems unable to shut up about: Mount Rushmore.

That Mt Rushmore is designed would seem to be intuitively obvious – but how would you go about rigorously demonstrating this using Dembski’s formula?

The first problem would be, what would the “description” be?

One might argue that the following would suffice:

Four human faces on a mountainside.

But such an argument would be problematic, as that description is not rigorous enough to draw the demarcation line between a genuinely human-carved face and mimetoliths (vaguely face-like rock features) like these:

[two images of vaguely face-like rock formations]

Even if we could come up with a rigorous description, we would then have to devise a method of estimating the probability that such a feature could come into existence through natural processes, like erosion. For any reasonably complex natural process, this estimation would be prohibitively difficult.

It would therefore seem that Dembski’s formula cannot validate “design” even when design is intuitively obvious. This renders Dembski’s formula, and his method, useless even on his own terms.

This ‘method’ would seem not to work for any real-world example, but only for carefully cherry-picked ‘toy’ examples that lack any real-world implications.

The purpose of his method then is not to rigorously demarcate design from non-design, but to give a plausible-sounding (but ultimately vacuous) argument to placate the gullible. It is simply apologetics at its very worst.

This is a far-too common result of apologetics efforts – when the real objective is to win an argument (e.g. the argument that God exists), this comes at the expense of any real contribution to human knowledge (be that knowledge scientific, mathematical or information-theoretical).

2 Likes

A freely available version of the Keefe and Szostak paper may be found here

I’d add that the SC is based on the probability of a single trial. The number of trials available must be taken into consideration. Keefe and Szostak do not find the probability daunting.

But again, this only shows SC being used in a trivial case, and not even coming to a conclusion of design. Where are the real applications? Where are the non-trivial examples? Is SC anything more than a waste of time and effort?

2 Likes

I finally had the chance to read the preprint mentioned in this thread:

And wouldn’t you know it, they introduce a metric they call “high-order entropy”, which is the difference between Shannon entropy and Kolmogorov complexity (as estimated via compression and normalized by string length). They demonstrate that this measure is useful for detecting phase transitions between populations of numerous unique, mostly random programs and populations dominated by many copies/variants of a self-replicating program.
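A minimal sketch of a metric in this spirit (Shannon entropy minus a compression-based complexity estimate, both per byte), using zlib as the compressor; this is my own illustration of the idea, not the paper’s code:

```python
# Toy "high-order entropy": Shannon entropy minus an estimated Kolmogorov
# complexity, both in bits per byte. zlib stands in for the compressor;
# the populations below are invented for illustration.
import math
import random
import zlib
from collections import Counter

random.seed(0)

def shannon_rate(s: bytes) -> float:
    """Empirical Shannon entropy of the byte distribution, bits per byte."""
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in Counter(s).values())

def kolmogorov_rate(s: bytes) -> float:
    """Compression-based upper bound on Kolmogorov complexity, bits per byte."""
    return 8 * len(zlib.compress(s, 9)) / len(s)

def high_order_entropy(s: bytes) -> float:
    """Shannon entropy minus estimated Kolmogorov complexity (per byte)."""
    return shannon_rate(s) - kolmogorov_rate(s)

# A "soup" of unique, mostly random programs: incompressible, so the two
# terms roughly cancel and the metric stays near zero.
random_pop = bytes(random.randrange(256) for _ in range(4096))

# A population dominated by copies of one self-replicator: symbol-level
# entropy stays moderate, but the string compresses extremely well, so
# the metric jumps - the signature of the phase transition described above.
replicator_pop = b"SELF-COPYING-PROGRAM" * 200
```

Running `high_order_entropy` on the two populations gives a value near zero for the random soup and a large positive value for the replicator-dominated one, which is the qualitative behavior the paper reports.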

(And while we’re at it, we can tie in another recent thread and note that this is a scenario Assembly Theory is intended to address/detect: when the copy number of an entity far exceeds what is expected from combinatorics and input abundance alone, suggesting reproduction and/or selection is involved.)

2 Likes

I invite you to read chapter 6.8 of the second edition of The Design Inference, for it gives several examples of how to calculate actual values of SC for real-world events, such as: 1) the giant smiley face that appears every fall in Polk County, Oregon; 2) the humanoid face that can be seen in a photo of Mars transmitted in 1976 by the Viking 1 spacecraft; 3) Mount Rushmore; 4) the unusual interstellar object dubbed ‘Oumuamua; 5) a case of lottery fraud played out in 2006 in the province of Ontario; 6) Bible codes.

In chapter 6 as well as in appendix B, yes, absolutely!

If you had read the second edition of the Design Inference, you probably wouldn’t say that.

I don’t understand your point here. Can you elaborate?

Are you serious?

Not sure I see your point. What do you mean by detecting design by treating it as a positive hypothesis? Could you give an example?

There is a misconception here. Dembski’s goal is not to provide a “mathematical proof” of design, not at all. Please consider that he titled his book “The Design Inference”, not “A Mathematical Proof of Design”.

I will wager - before reading that article - they have some sort of joint distribution defined to link the two.

ETA: or even better, the SAME distribution.

Where Dembski has
SC(E) = I(E) - K(D)
with nothing linking E and D, and no joint distribution, except (possibly) that they are independent.

BA y Arcas et al (2024) has
Higher-Order-Entropy(E) = I(E) - K(E)
which might make sense. I need to give the paper a more careful read before I can say more.
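The contrast between the two formulas can be made concrete with a toy computation, using zlib as a stand-in for Kolmogorov complexity (all strings and numbers here are invented for illustration):

```python
# Toy contrast between SC(E) = I(E) - K(D) and HOE(E) = I(E) - K(E).
# zlib stands in for Kolmogorov complexity; the event and the candidate
# descriptions are invented examples.
import math
import zlib
from collections import Counter

def info_bits(s: bytes) -> float:
    """Empirical Shannon information content I(E) of a string, in bits."""
    n = len(s)
    h = -sum(c / n * math.log2(c / n) for c in Counter(s).values())
    return n * h

def k_bits(s: bytes) -> float:
    """Compression-based stand-in for Kolmogorov complexity, in bits."""
    return 8 * len(zlib.compress(s, 9))

event = b"HEADS" * 40  # the observed event E

# Dembski-style SC(E) = I(E) - K(D): the analyst is free to choose the
# description D, and the resulting value swings with that free choice.
sc_values = [info_bits(event) - k_bits(d)
             for d in (b"all heads", b"a long run of identical coin flips")]

# HOE(E) = I(E) - K(E): both terms are functions of the same string E,
# so no separate description (and no joint distribution) is needed.
hoe = info_bits(event) - k_bits(event)
```

The point of the sketch: the two SC values differ purely because of the analyst’s choice of D, whereas HOE is fully determined by the event itself.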

The second edition is not the original - and I’ve yet to see you produce anything worthwhile.

It’s the “Texas Sharpshooter” problem - the reason why Dembski is introducing a multiplier as I’ve been discussing.

Absolutely. Considering design as a positive hypothesis is a very important part of identifying design.

To take an example, for Mount Rushmore we’d start with the idea that the faces are sculpted and look for evidence of that - and we’d find it - without having to bother with directly eliminating alternative hypotheses. Dembski just leaves design as an unexamined what’s-left-over after eliminating everything else (he hopes). But that’s a really unsatisfactory way of doing it.

No, I clearly remember Dembski talking of it as a mathematical proof of design.

1 Like

@AndyWalsh Can you copy (go to edit and copy there) the text of your post and use it to start a new thread?

I can do it if you’d like, but then it will appear as my post (I will credit you).

This seems a good departure point from the Dembski post-mortem to a current topic in OoL and Information Theory.

I looked up this “example” – and ye gods it was ludicrous.

  1. Dembski admits up-front that he has no “precise evaluation of the probability”, only an intuitive “general idea of what [he thinks] the probabilities must be” – “a rough approximation or back-of-the-envelope calculation”. This does not instill confidence in Dembski’s rigor, right out of the gate.

  2. He offers no formal definition of the “identifiable features” he wishes to count. This means that his entire methodology relies on intuition as to what does or doesn’t count, and so is hopelessly informal.

  3. Rather than specifying the number of features needed to validate a human face a priori, he simply lists those that exist in each example ex post – rendering the whole thing akin to a Texas Sharpshooter fallacy.

  4. He plucks the assumption that the probability of each feature would be “one in a thousand” out of thin air – and never even bothers to specify whether this totally made-up and credibility-free number is per square km, per mountain, per mountain range, per continent, or for the whole planet. This renders his probability analytically meaningless, even if we accept it as ‘true’!

  5. He implicitly assumes that the probability of each feature would be independent – whereas it is more likely that where there’s one random feature, there will be others nearby, formed by the same natural process – e.g. erosion by the same stream or waterfall.
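To make point 5 concrete, here is a toy calculation (all probabilities invented purely for illustration) showing how a single shared cause inflates the joint probability relative to the independence assumption:

```python
# Toy numbers (entirely hypothetical) showing why independence matters.
# Suppose face-like features only form where erosion is "active".
P_ACTIVE = 0.1    # chance a location is geologically active
P_FEATURE = 0.5   # chance one feature forms at an active location

# Marginal probability of any one feature:
p_one = P_ACTIVE * P_FEATURE              # 0.05

# Dembski-style assumption: four features, treated as independent.
p_independent = p_one ** 4                # ~6.25e-06

# Shared-cause model: one active site, then four conditionally
# independent features at that site.
p_clustered = P_ACTIVE * P_FEATURE ** 4   # ~6.25e-03

ratio = p_clustered / p_independent       # = P_ACTIVE ** -3 = 1000.0
```

Under these made-up numbers, ignoring the shared cause understates the probability of four co-located features by a factor of a thousand; the general lesson is that multiplying marginal probabilities is only valid when independence is actually established.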

Many moons ago, I tutored introductory statistics – if one of my students turned in something like this horrendous ‘example’ for marking, I’d give it a failing grade.

It also brings back to mind Wolpert’s review of one of Dembski’s later books:

Topics addressed in the field of philosophy fall into two categories. In the first category are topics that have not (yet) been subjected to a broad yet rigorous mathematical formalization. Accordingly, they are “just word arguments”, and have not benefitted from the clarity and power that mathematical precision affords. Examples of topics in this first category are philosophies of art, music, and literature, as well as much of ethics, and other parts of the humanities.

By contrast, topics in the second category have been formalized, in a form generally perceived as capturing much of their essence. Such topics include much of what several centuries ago was called “natural philosophy” and is now collectively known as “science”. This category also includes those issues in epistemology that were addressed by Gödel’s incompleteness theorem and related uncomputability results.

Like monographs on any philosophical topic in the first category, Dembski’s is written in jello. There simply is not enough that is firm in his text, not sufficient precision of formulation, to allow one to declare unambiguously ‘right’ or ‘wrong’ when reading through the argument. All one can do is squint, furrow one’s brows, and then shrug.

Please count me among those ‘shrugging’ at Dembski’s hopeless lack of rigor.

4 Likes

If Higher-Order-Entropy were applied to real-world objects, it would face the same problems defining probability that Dembski encounters with SC. It works in this study because it is applied to a well-defined population of objects: there is no arbitrary description of the objects, and the Kolmogorov information of each object is approximated using a standard text compression algorithm.

One more thing … I called it! :wink:

A quick skim of that chapter and that appendix would seem to indicate the contrary conclusion: no, absolutely not!

So @Giltil, it would seem to be necessary for you to specify exactly which subsection he discusses it in (the appendix has 8 subsections), exactly which "real-world evolutionary hypotheses" he is dealing with, and exactly how he "go[es] about estimating probability distributions" for them (noting that simply assuming a probability (i) does not count as "estimating", and (ii) is analytically worthless).

Addendum:

The closest I could find to this was in Appendix B.6, where Dembski discusses the bacterial flagellum, but that discussion contains the following ludicrous claim:

We start with a bacterium that has no flagellum, no genes coding for proteins in the flagellum, and no genes homologous to genes coding for proteins in the flagellum.

This is clearly not an evolutionary hypothesis.

It also contains no estimation of the probability distribution involved.

I discovered elsewhere that he has disavowed doing the leg-work himself:

The design inference is a method. Whether and to what degree this method applies to biological systems depends on biologists successfully using it to demonstrate design. One of us is a pure mathematician. The other is a computer scientist. Neither of us is a biologist. Our task therefore is not so much to apply this method to biology as to hand it off to biologists, who can then apply it properly to their discipline, nailing down the design inference for particular biological systems.

Given that Dembski has neither qualifications, nor any apparent track record of peer-reviewed research, in Pure Mathematics, I find his claim to be a “pure mathematician” somewhat exaggerated.

I am also curious to know what biologists he thinks he will “hand it off” to – as his claims appear to have excited little interest outside the ID echo chamber, and that echo chamber contains very few working biologists. It is worth noting that the weird grab-bag of voices in the “Advance Praise” section at the start of this book contains not a single biologist (the closest is a retired Professor of Biomedical Engineering – hardly helpful for Dembski’s project).

I am also highly skeptical that anybody will be able to “apply it properly” to the field of biology – a field where the probability distributions are simply too uncertain to allow meaningful estimation.

2 Likes

If Dembski were to take the usual approach of graduate students in statistics, he would write a computer simulation to demonstrate the method and show the statistical properties (Type I and II error) in a known setting. This is structurally similar to what BA y Arcas et al (2024) does, but the emphasis there is on the numerical experiment, not on the properties of “Higher-Order-Entropy”.
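For illustration, such a validation exercise might look like the following sketch, with an invented compression-based statistic standing in for SC (the statistic, models, and parameters are all hypothetical):

```python
# Sketch of the validation exercise described above: simulate a known
# "undesigned" (null) process and a known "structured" (alternative)
# process, calibrate a rejection threshold on the null, then estimate
# Type I and Type II error rates. Everything here is an invented example.
import random
import zlib

random.seed(0)

def stat(s: bytes) -> float:
    """Toy detection statistic: fraction of bytes saved by compression."""
    return (len(s) - len(zlib.compress(s, 9))) / len(s)

def null_sample(n=256) -> bytes:
    """Null model: i.i.d. uniform random bytes (no structure)."""
    return bytes(random.randrange(256) for _ in range(n))

def alt_sample(n=256) -> bytes:
    """Alternative model: a repeated motif with a sprinkling of noise."""
    s = bytearray((b"PATTERN!" * (n // 8 + 1))[:n])
    for _ in range(n // 16):
        s[random.randrange(n)] = random.randrange(256)
    return bytes(s)

# Calibrate the threshold on the null distribution (5% target Type I error).
null_stats = sorted(stat(null_sample()) for _ in range(2000))
threshold = null_stats[int(0.95 * len(null_stats))]

# Estimate both error rates on fresh samples.
type_i = sum(stat(null_sample()) > threshold for _ in range(2000)) / 2000
type_ii = sum(stat(alt_sample()) <= threshold for _ in range(2000)) / 2000
```

Nothing of this sort appears in The Design Inference: there is no known-truth simulation, no calibrated threshold, and no reported error rates, which is exactly the gap being pointed out.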

@Giltil Calculation troubles aside, it seems there IS an application for the difference of SI and KI. I really was unaware of such a thing before Andy’s post. Dembski defines E and D to be independent, and applies the measure to a single event rather than to the state of an entire system. I still don’t see any way to get a useful interpretation out of SC. Independence implies there is no information linking E and D (in a statistical sense, one does not predict the other), so Dembski has boxed himself out of any useful meaning for SC.

1 Like

That may well work in a reasonably abstract setting, where it is reasonable to postulate likewise abstract (and thus simulatable) constraints and rules. I’m less clear that it would work for more concrete evolutionary examples, ruled by a complex, fluid and not-fully-documented environment. Could BA y Arcas et al’s methodology be made to work for questions like:

  • Could the bacterial flagellum have evolved from the Type III secretion system (as has been widely suggested)?

  • Could the malaria parasite have naturally developed resistance to drugs?

Such questions do not seem to be particularly amenable to simulation – hence my high level of skepticism.

1 Like

It could be applied to an entire population, or to a “large” sample from the population.
If you can get enough independent samples, you could do a t-test on differences in Higher-Order-Entropy over time or between groups.

No and no. Higher-Order-Entropy (HE?) is a descriptive statistic (like a mean or median) of repetitive organization in the population. Arcas uses HE to describe the state of the system over time, demonstrating the change in state from random to organized. To do anything useful for the evolution of the bacterial flagellum would require data over the course of bacterial evolution. If we had that we wouldn’t be having this discussion. :wink:

It might be plausible to get data for the evolution of bacteria evolving drug resistance – something like Lenski’s experiments. I would not expect any major change of state that could be measured by HE: bacteria are good at evolving resistance, and adding one new resistance isn’t going to reorganize the genome very much.

Then it seems unlikely that BA y Arcas et al’s methodology would allow biologists to “apply [Dembski’s methodology] properly to their discipline, nailing down the design inference for particular biological systems” whose evolvability ID advocates call into question – which was my original point.

2 Likes

Is it then fair to say that SC, or any complexity argument like it, concerns OoL and not evolvability once life is established by whatever means?

2 Likes

Looks like we’re not going to see any real evidence that Dembski’s method has any practical use.
Which, I suppose, is to be expected, since it has none.

3 Likes

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.