Dembski and Swamidass: What is the Status of the Explanatory Filter?

But that isn’t what Dembski is including there. Chance is not a natural law in his formulation. Of note, he also means ontological chance, which is different from how we mean “random” in biology. That is also the side trail you rightly intend to avoid.

Why one would do this, I can’t say. We can’t speak to motivations very easily.

However, it does justify excluding any realistic model of random mutations (chance) + natural selection (natural process).

I suppose that in addition to normal randomish processes in nature, one might want a category of things that happen by “chance” so that Theistic Evolution could work by tweaking the “chance” events. But the Explanatory Filter did not attribute “chance” to leprechauns, but only concluded in favor of leprechauns once “chance” as well as “necessity” were eliminated. So that can’t be it.


Thanks for the reply, @pnelson. I don’t think you address my point completely, though. The original question:

By my way of looking at things, the agent “Paul Nelson” is shorthand for “sentient being whose origins involve some combination of (4), (5), and/or (6)”. When you deliberately exclude “Paul Nelson” as you do, @pnelson, you render your entire argument quite meaningless.

Put another way - why exclude just “Paul Nelson”? Why not also exclude, oh, say, hydrophobic forces, covalent catalysis, neutral evolution, the strong nuclear force, or any other aspect of nature? Why limit the exclusion to just “Paul Nelson”? What is the logic behind your arbitrary decision?


Mozart did not write the Concerto for Flute, Harp, and Orchestra in C major, K. 299/297c.

Some combination of (4), (5), and/or (6) did. The apparent agent “Mozart” is shorthand for physics in various guises. We simply await the details, but “Mozart” is a placeholder for a more adequate, non-intentional theory.

That way lies insanity. My apologies if I do not follow you there.

7 posts were split to a new topic: Nelson: Parabolas and Methodological Naturalism (Again)

A post was merged into an existing topic: Nelson: Parabolas and Methodological Naturalism (Again)

Dembski has responded:

Unfortunately, Dembski talks about a rusting car instead of biology. I would still really like to see the EF applied to biology in a meaningful way.

Thanks. I will think about it and may respond, but I don’t personally want to muddy the water by saying more till I’ve thought about it.


Looks like he’s trying to do damage control. I applied the explanatory filter to determine this.


Look at my description in the OP. It seems that Dembski agrees:

That seems to be precisely what he says in his response. Now look at the questions I asked.

It seems that he does not have an answer for the logical error I identified. It is notable, for this reason, that he does not link to this thread, even though he is responding to it.


I agree, to insinuate that Mozart was not a sentient being (which is exactly what I was saying) is a bit of insanity. I’ll admit I do not understand in the slightest what you are getting at, @pnelson. We plainly have different ideas of what it means to be sentient.


As we settle in for today’s Belmont Stakes, the question comes to my mind - chance, necessity, or design:

Hi @swamidass,

Notice that the EF considers Natural Process XOR Chance, but never Natural Processes AND Chance.

It seems to me that you’re not interpreting Dembski’s Explanatory Filter in the way it was intended. What Dembski had in mind, I believe, was a single, non-decomposable phenomenon. Consider, for instance, a message in the sky, written in smoke and observed by a young woman: “Will you marry me?” The logic here is simple enough.

  1. Does any law of nature account for the message? No. Laws don’t generate semantics.
  2. Could chance account for the message? No. A “W”, maybe. A whole sentence? No way.
  3. Is the message a specified one? Because if it isn’t, then we might still rationally prefer chance. But it is specified, so chance is no longer a rational explanation. That leaves design as the only option: the message was left by a skywriter (presumably the woman’s boyfriend).
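The three steps above form a decision procedure, and can be sketched as code. This is only an illustration: the boolean inputs stand in for judgments the filter leaves to the investigator, and the function name is mine. The default threshold is Dembski’s “universal probability bound” of 1 in 10^150.

```python
def explanatory_filter(explained_by_law, chance_probability, is_specified,
                       universal_bound=1e-150):
    """Toy sketch of the Explanatory Filter's three-step logic."""
    # Step 1: can a law of nature (necessity) account for the event?
    if explained_by_law:
        return "necessity"
    # Step 2: is the event probable enough to attribute to chance?
    if chance_probability >= universal_bound:
        return "chance"
    # Step 3: improbable but unspecified events still default to chance;
    # only improbable AND specified events trigger a design inference.
    if not is_specified:
        return "chance"
    return "design"

# The skywriting example: no law writes sentences, the chance of a whole
# sentence forming is astronomically low, and the message is specified.
print(explanatory_filter(False, 1e-200, True))  # → design
```

Note that the filter, so sketched, takes a single probability for the whole event; it has no input through which law and chance could act jointly, which is the point at issue in this thread.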

It would be very strange here if someone were to object that some combination of natural and chance processes might still have generated the message. This objection treats the message as if it were decomposable into parts that arose over different stages - a scenario that does not apply here.

So the question we need to answer is: are there any instances in nature of highly specified phenomena which are non-decomposable in the same fashion? I don’t think there are. But here’s my point: in principle, there could be. Take Douglas Axe’s claim that certain long-chain proteins are beyond the reach of chance: 10^-77 is the number that usually gets bandied around. It’s wrong, of course, as several authors have convincingly argued (including some readers of this post). But if (for argument’s sake) the number were correct, and if there were no “stepping stones” or viable structures of shorter length that could serve as a bridge, then the logic of the argument would apply.

Darwin himself acknowledged that his argument would be overthrown if it could be shown that there were any complex biological structures that could not have evolved by a series of steps. However, science has never uncovered any such structures, so Darwin’s naturalistic account of life looks quite safe.

If I were to criticize ID advocates for one thing, it would be this: a tendency to assimilate functional complexity to semantic complexity. The two are very different, despite both being specified. Functionality comes in grades. Meaning does not: a sentence either makes sense or it doesn’t.


Please show me quotes where he limits this to single non-decomposable phenomena.

Please show me how he demonstrated that biological origins, where he applies the filter, are single non-decomposable phenomena. As I understand it, evolutionary science contests precisely this point, and that would leave the EF without any application within biology.


I agree with Josh. Evolutionary processes such as random mating, mutation, migration, natural selection, and genetic drift all go on every generation, just as Josh says. So a method of detecting Design that only works when a single process occurs is useless.


… and, I should add, that restriction is not what Dembski did – he didn’t restrict his argument to “single, non-decomposable” processes. (He actually did not discuss evolutionary processes).


You want quotes? Happy to oblige. I have in front of me The Design of Life (Foundation for Thought and Ethics, Dallas, 2008), a text co-authored by William Dembski and Jonathan Wells. Only in chapter 7 do Dembski and Wells attempt to provide a mathematical argument for design. On page 196, they concede that it is impossible to estimate the probability that the bacterial flagellum evolved naturally, precisely because “Darwinists never identify detailed evolutionary pathways for such irreducibly complex biochemical systems.” In the absence of detailed pathways, “we are limited to more general probabilistic considerations” - in other words, listing all the hurdles that need to be overcome for such a structure to come into being. However, that’s not a rigorously mathematical argument, as it is unquantifiable, and Dembski and Wells are well aware of this problem.

In paragraph 2 on p. 196, Dembski and Wells continue:

“In estimating probabilities for the evolution of biochemical systems by Darwinian processes, we therefore need to analyze structures even simpler than the flagellum. The place to look is the improbability of evolving individual proteins.”

The authors then concede that this kind of research has yet to be done for the proteins that make up the bacterial flagellum or other biological machines, adding: “As a consequence, forming hard estimates for the improbability of evolving the flagellum and other irreducibly complex systems remains an open problem.”

However, Dembski and Wells continue, “hard estimates for the improbability of evolving certain individual proteins are available. Proteins reside at just the right level of complexity and simplicity to determine, in at least some cases, their improbability of evolving by Darwinian processes.” They go on to add that what determines the evolvability of proteins is “not merely how sparsely proteins are distributed among all possible amino acid sequences, but also, and more importantly, to what degree proteins of one function are isolated from proteins of other functions.” (p. 196)

On page 199, Dembski and Wells resume their mathematical argument:

“Does any experimental evidence confirm that larger proteins may be unevolvable? Such evidence exists. Research by molecular biologist Douglas Axe [shows] that a domain of circa 150 residues of the protein TEM-1 beta-lactamase is unevolvable by Darwinian processes.”

They then describe how Axe arrived at a figure of 1 in 10^64, which identifies “the improbability of evolving by Darwinian processes a working protein domain with the same pattern of hydrophobic interactions characteristic of his beta-lactamase domain.” They add: “Thus, for all practical purposes, there’s no way this domain could have evolved by Darwinian processes.” (p. 200)

More importantly, on page 203, Dembski and Wells concede that Behe’s argument for the design of irreducibly complex systems contained “a loophole,” which Axe’s argument closes: namely, that “for most of Behe’s systems, one can identify subsystems that might perform a function of their own.” For that very reason, “Darwinists point to these subsystems as possible evolutionary precursors to the systems in which they are embedded (e.g., Darwinists point to the type-three secretory system as a precursor to the flagellum…).” By contrast, “the domains studied by Axe have an integrity that admits of no functional subdivisions and thus leaves Darwinian evolution with no plausible precursors.” (p. 203)

From the foregoing, I think it is clear that Dembski and Wells are conceding that any rigorously mathematical argument for design has to be based on what I describe as “single non-decomposable phenomena” - for instance, the function of a single protein. That’s why the complaint that Dembski’s Explanatory Filter fails to consider “Natural Processes AND Chance” is beside the point, in my opinion. That would be an entirely valid point if one were considering the bacterial flagellum, but Dembski and Wells are talking about a system (a long protein) which they believe (on the basis of Axe’s figures) could not possibly have evolved in a sequence of steps.

Of course, it’s now generally agreed that Axe’s figures are wrong by many orders of magnitude. But if his numbers were right, Dembski and Wells would have an argument.

The best critiques of ID, in my opinion, are not philosophical ones but mathematical ones, relating to empirical data such as proteins.


@vjtorley, nowhere here have you discussed the Explanatory Filter, which he presented in his 1999 book. The topic seems to have drifted to CSI, which is a different concept entirely. So finding quotes where Dembski limits the Explanatory Filter to “single non-decomposable phenomena” does not really address my question. As far as I can tell, he imposes no such limit. I’m pretty sure that he even points to the flagellum as an example of the filter at work.

In No Free Lunch , Dembski sets out to run the bacterial flagellum through the explanatory filter. He discusses the number and types of proteins needed to form the different parts of the flagellum and computes the probability that a random configuration will produce the flagellum. He concludes that it is so extremely improbable to get anything useful that design must be inferred. He admits that the assumptions and computations are simplified but as such simplifications arise in any mathematical model, we will not hold it against him.

Intelligent design and mathematical statistics: a troubled alliance | SpringerLink

Yup that’s right.

So that means Dembski agrees the flagellum is not a single non-decomposable phenomenon (as you put it, based on your quotes).

But the explanatory filter still applies (my quote), even though CSI can’t be meaningfully computed.

Notably, Dembski relies heavily on Doug Axe’s work to make the case that individual proteins can’t evolve, which we have already shown to have serious errors.


That would be the Sharpshooter fallacy. They have no way of determining how many amino acid sequences will produce a specific function, nor do they have a way of determining how many functions can evolve.

This is best exemplified by the discovery of antibodies with beta-lactamase activity. With a few hundred million (<10^9) B-cell clones carrying random arrangements within their Fab domains, the immune system produces enzymatic activity that Axe claims can’t be attained in 10^64 attempts. Reality trumps Axe’s claims. This stresses once again how far off their probability estimates are.
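A back-of-the-envelope calculation makes the mismatch concrete. Using only the two numbers from the discussion (a ~10^9-sequence repertoire and an Axe-style per-sequence rate of 10^-64), the expected number of functional sequences under Axe’s rate is vanishingly small:

```python
import math

repertoire = 1e9    # ~number of random B-cell Fab arrangements sampled
axe_rate = 1e-64    # Axe-style per-sequence probability of function

# Expected number of functional sequences if Axe's rate were right:
expected_hits = repertoire * axe_rate            # 1e-55

# Probability of seeing even one hit (Poisson approximation);
# math.expm1 avoids rounding 1 - exp(-x) to zero for tiny x:
p_any = -math.expm1(-expected_hits)              # ~1e-55

print(expected_hits, p_any)
# Beta-lactamase-active antibodies ARE found in such repertoires,
# implying a true per-sequence rate on the order of 1/repertoire
# (~1e-9), dozens of orders of magnitude above Axe's estimate.
```

Under Axe’s figure, finding even one active sequence in such a screen should essentially never happen; that it does happen is the point.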

Perhaps we should rename the EF to the Imagination Filter. Whatever ID supporters are incapable of imagining simply can’t happen.


I know this particular part of their argument isn’t the subject of this thread (this can be split, of course, if moderators deem it necessary), but I’d just like to add that there are numerous reasons to think even that conclusion isn’t actually true.
Even if it really were the case that some particular protein function (some particular protein fold catalyzing some specific chemical reaction) is as rare as 1 in 10^77 sequences of similar length, it could potentially have evolved from some other protein function that happens to be nearby, or directly overlap it, in sequence space.
Another problem is that there could be just as many potentially adaptive functions available to some organism. So it could be true that any particular one of those functions has a probability of 10^-77 of evolving, but if there are 10^77 different possible functions, at least one of them is highly likely to evolve, even though any particular one is exceedingly unlikely.
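The arithmetic behind that last point is the standard “at least one success” calculation, P(at least one) = 1 − (1 − p)^N, shown here with the thread’s illustrative numbers. One numerical wrinkle: the naive formula fails in floating point, because (1 − 10^-77) rounds to exactly 1.0, so the computation has to be done in log space:

```python
import math

p = 1e-77   # chance any one particular function evolves
N = 1e77    # number of distinct functions, each of which would be adaptive

# Naive: (1 - p) rounds to 1.0, so 1 - (1 - p)**N wrongly gives 0.0.
# Stable version: (1-p)^N = exp(N * log1p(-p)), then subtract from 1
# via expm1 to keep precision.
p_at_least_one = -math.expm1(N * math.log1p(-p))

print(p_at_least_one)   # ≈ 0.632, i.e. 1 - 1/e
```

With N·p = 1, the probability that at least one function evolves is about 63%, even though each individual function is all but impossible, which is exactly the asymmetry the post describes.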