Dembski and Swamidass: What is the Status of the Explanatory Filter?

Dembski has responded:

Unfortunately, Dembski talks about a rusting car instead of biology. I would still really like to see the EF applied to biology in a meaningful way.

Thanks. I will think about it and may respond, but I don’t personally want to muddy the water by saying more till I’ve thought about it.

1 Like

Looks like he’s trying to do damage control. I applied the explanatory filter to determine this.

5 Likes

Look at my description in the OP. It seems that Dembski agrees:

That seems to be precisely what he says in his response. Now look at the questions I asked.

It seems that he does not have an answer for the logical error I identified. It is notable, for this reason, that he does not link to this thread, even though he is responding to it.

1 Like

I agree: to insinuate that Mozart was not a sentient being (which is exactly what I was saying) is a bit of insanity. I’ll admit I do not understand in the slightest what you are getting at, @pnelson. We plainly have different ideas of what it means to be sentient.

1 Like

Hmmmm…
As we settle in for today’s Belmont Stakes, the question comes to my mind - chance, necessity, or design:

Hi @swamidass,

Notice that the EF considers Natural Process XOR Chance, but never Natural Processes AND Chance.

It seems to me that you’re not interpreting Dembski’s Explanatory Filter in the way it was intended. What Dembski had in mind, I believe, was a single, non-decomposable phenomenon. Consider, for instance, a message in the sky, written in smoke and observed by a young woman: “Will you marry me?” The logic here is simple enough.

  1. Does any law of nature account for the message? No. Laws don’t generate semantics.
  2. Could chance account for the message? No. A “W,” maybe. A whole sentence? No way.
  3. Is the message a specified one? Because if it isn’t, then we might still rationally prefer chance. But it is specified, so chance is no longer a rational explanation. That leaves design as the only option: the message was left by a skywriter (presumably the woman’s boyfriend).
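
For concreteness, here is a minimal sketch of the filter’s decision logic as I’ve laid it out above. The Python is purely illustrative; the three predicates are hypothetical placeholders for judgments the filter requires but does not itself supply, not anything Dembski formalizes:

```python
def explanatory_filter(event, law_explains, chance_plausible, is_specified):
    """A minimal sketch of the Explanatory Filter's decision logic.

    The three predicate arguments are hypothetical placeholders for the
    judgments the filter requires but does not itself supply.
    """
    if law_explains(event):        # Step 1: attribute the event to necessity
        return "necessity"
    if chance_plausible(event):    # Step 2: chance remains a live explanation
        return "chance"
    if is_specified(event):        # Step 3: improbable AND specified -> design
        return "design"
    return "chance"                # improbable but unspecified: still chance
```

Note that each branch is exclusive: the structure never returns a mixture of law and chance, which is exactly the feature under dispute in this thread.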

It would be very strange here if someone were to object that some combination of natural and chance processes might still have generated the message. This objection treats the message as if it were decomposable into parts that arose over different stages - a scenario that does not apply here.

So the question we need to answer is: are there any instances in nature of highly specified phenomena which are non-decomposable in the same fashion? I don’t think there are. But here’s my point: in principle, there could be. Take Douglas Axe’s claim that certain long-chain proteins are beyond the reach of chance: 10^-77 is the number that usually gets bandied around. It’s wrong, of course, as several authors have convincingly argued (including some readers of this post). But if (for argument’s sake) the number were correct, and if there were no “stepping stones” or viable structures of shorter length that could serve as a bridge, then the logic of the argument would apply.

Darwin himself acknowledged that his theory would break down if it could be shown that there was any complex biological structure that could not have evolved by a series of steps. However, science has never uncovered any such structure, so Darwin’s naturalistic account of life looks quite safe.

If I were to criticize ID advocates for one thing, it would be this: a tendency to assimilate functional complexity to semantic complexity. The two are very different, despite both being specified. Functionality comes in grades. Meaning does not: a sentence either makes sense or it doesn’t.

2 Likes

Please show me quotes where he limits this to single non-decomposable phenomena.

Please show me how he demonstrated that biological origins, where he applies the filter, are single non-decomposable phenomena. As I understand it, evolutionary science contests precisely this point, which would leave the EF without any application within biology.

6 Likes

I agree with Josh. Evolutionary processes such as random mating, mutation, migration, natural selection, and genetic drift all go on every generation, just as Josh says. So having a method of detecting Design that only works when a single process is operating is useless.

6 Likes

… and, I should add, that restriction is not one Dembski actually made - he didn’t restrict his argument to “single, non-decomposable” processes. (He actually did not discuss evolutionary processes.)

4 Likes

You want quotes? Happy to oblige. I have in front of me The Design of Life (Foundation for Thought and Ethics, Dallas, 2008), a text co-authored by William Dembski and Jonathan Wells. Only in chapter 7 do Dembski and Wells attempt to provide a mathematical argument for design. On page 196, they concede that it is impossible to estimate the probability that the bacterial flagellum evolved naturally, precisely because “Darwinists never identify detailed evolutionary pathways for such irreducibly complex biochemical systems.” In the absence of detailed pathways, “we are limited to more general probabilistic considerations” - in other words, listing all the hurdles that need to be overcome for such a structure to come into being. However, that’s not a rigorously mathematical argument, as it is unquantifiable, and Dembski and Wells are well aware of this problem.

In paragraph 2 on p. 196, Dembski and Wells continue:

“In estimating probabilities for the evolution of biochemical systems by Darwinian processes, we therefore need to analyze structures even simpler than the flagellum. The place to look is the improbability of evolving individual proteins.”

The authors then concede that this kind of research has yet to be done for the proteins that make up the bacterial flagellum or other biological machines, adding: “As a consequence, forming hard estimates for the improbability of evolving the flagellum and other irreducibly complex systems remains an open problem.”

However, Dembski and Wells continue, “hard estimates for the improbability of evolving certain individual proteins are available. Proteins reside at just the right level of complexity and simplicity to determine, in at least some cases, their improbability of evolving by Darwinian processes.” They go on to add that what determines the evolvability of proteins is “not merely how sparsely proteins are distributed among all possible amino acid sequences, but also, and more importantly, to what degree proteins of one function are isolated from proteins of other functions.” (p. 196)

On page 199, Dembski and Wells resume their mathematical argument:

“Does any experimental evidence confirm that larger proteins may be unevolvable? Such evidence exists. Research by molecular biologist Douglas Axe [shows] that a domain of circa 150 residues of the protein TEM-1 beta-lactamase is unevolvable by Darwinian processes.”

They then describe how Axe arrived at a figure of 1 in 10^64, which identifies “the improbability of evolving by Darwinian processes a working protein domain with the same pattern of hydrophobic interactions characteristic of his beta-lactamase domain.” They add: “Thus, for all practical purposes, there’s no way this domain could have evolved by Darwinian processes.” (p. 200)

More importantly, on page 203, Dembski and Wells concede that Behe’s argument for the design of irreducibly complex systems contained “a loophole,” which Axe’s argument closes: namely, that “for most of Behe’s systems, one can identify subsystems that might perform a function of their own.” For that very reason, “Darwinists point to these subsystems as possible evolutionary precursors to the systems in which they are embedded (e.g., Darwinists point to the type-three secretory system as a precursor to the flagellum…).” By contrast, “the domains studied by Axe have an integrity that admits of no functional subdivisions and thus leaves Darwinian evolution with no plausible precursors.” (p. 203)

From the foregoing, I think it is clear that Dembski and Wells are conceding that any rigorously mathematical argument for design has to be based on what I describe as “single non-decomposable phenomena” - for instance, the function of a single protein. That’s why the complaint that Dembski’s Explanatory Filter fails to consider “Natural Processes AND Chance” is beside the point, in my opinion. That would be an entirely valid point if one were considering the bacterial flagellum, but Dembski and Wells are talking about a system (a long protein) which they believe (on the basis of Axe’s figures) could not possibly have evolved in a sequence of steps.

Of course, it’s now generally agreed that Axe’s figures are wrong by many orders of magnitude. But if his numbers were right, Dembski and Wells would have an argument.

The best critiques of ID, in my opinion, are not philosophical ones but mathematical ones, relating to empirical data such as proteins.

2 Likes

@vjtorley no place here have you discussed the Explanatory Filter, which he laid out in his 1999 book. It seems the topic has drifted to CSI, which is a different concept entirely. So it does not really address my question to find quotes where Dembski limits the Explanatory Filter to “single non-decomposable phenomena”. As far as I can tell, he does not. I’m pretty sure that he even points to the flagellum as an example of the filter at work.

In No Free Lunch , Dembski sets out to run the bacterial flagellum through the explanatory filter. He discusses the number and types of proteins needed to form the different parts of the flagellum and computes the probability that a random configuration will produce the flagellum. He concludes that it is so extremely improbable to get anything useful that design must be inferred. He admits that the assumptions and computations are simplified but as such simplifications arise in any mathematical model, we will not hold it against him.

Intelligent design and mathematical statistics: a troubled alliance | Biology & Philosophy

Yup that’s right.

So that means Dembski agrees the flagellum is not a single non-decomposable phenomenon (as you put it, based on your quotes).

But the explanatory filter still applies (my quote), even though CSI can’t be meaningfully computed.

Notably, Dembski relies heavily on Doug Axe’s work to make the case that individual proteins can’t evolve, which we have already shown to have serious errors.

2 Likes

That would be the Sharpshooter fallacy. They have no way of determining how many amino acid sequences will produce a specific function, nor do they have a way of determining how many functions can evolve.

This is best exemplified by the discovery of antibodies with beta-lactamase activity. With a few hundred million (<10^9) B-cell clones carrying random arrangements within their Fab domains, the immune system produces enzymatic activity that Axe claims can’t be attained in 10^64 attempts. Reality trumps Axe’s claims. This stresses once again how far off their probability estimates are.
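
To put numbers on that mismatch, here is a quick back-of-the-envelope check (the trial count is my own rough assumption, taken from the clone estimate above):

```python
# If working beta-lactamase activity really occurred at Axe's claimed rate
# of 1 in 10^64 sequences, what should ~10^9 random Fab arrangements yield?
p_axe = 1e-64    # Axe's claimed per-sequence probability of function
trials = 1e9     # rough count of randomly arranged B-cell clones
expected_hits = p_axe * trials
print(f"Expected active clones: {expected_hits:.0e}")  # 1e-55, i.e. none

# Observing even one active clone implies the true probability is dozens
# of orders of magnitude higher than Axe's estimate.
```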

Perhaps we should rename the EF to the Imagination Filter. Whatever ID supporters are incapable of imagining simply can’t happen.

3 Likes

I know this particular part of their argument isn’t the subject of this thread (it can be split off, of course, if moderators deem it necessary), but I’d just like to add that there are numerous reasons to think even that conclusion isn’t actually true.
Even if it really were the case that some particular protein function (some particular protein fold catalyzes some specific chemical reaction) is as rare as 1 in 10^77 sequences of similar length, it could potentially have evolved from some other protein function that happens to be nearby, or directly overlap it, in sequence space.
Another problem is that there could be just as many potentially adaptive functions available to some organism. So it could be true that any particular one of those functions has a probability of 10^-77 of evolving, but if there are 10^77 different possible functions, at least one of them is highly likely to evolve, even though any particular one is exceedingly unlikely.
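
A quick sketch of the arithmetic behind that last point (the numbers are purely illustrative):

```python
import math

# Suppose each of N distinct adaptive functions independently has
# probability p of evolving.
p = 1e-77   # chance that any one specific function evolves
N = 1e77    # number of different functions that would each be adaptive

# P(at least one evolves) = 1 - (1 - p)^N ≈ 1 - exp(-N * p).
# (The direct form (1 - p)**N underflows in floating point, so we use
# the standard exponential approximation.)
p_any = 1 - math.exp(-N * p)
print(f"P(at least one function evolves) ≈ {p_any:.3f}")  # ≈ 0.632
```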

2 Likes

The 2005 statement of SC (a version of declaring CSI to be present) requires that we compute the probability that a function could evolve to be this good or better. Dembski and Wells admit that this cannot be done at present. The EF would still work, but my take is that to eliminate either “necessity” or “chance,” or a combination of them, we need to show that the probability that a function this good or better could evolve is less than 1/2 in the life of our universe. So the EF and CSI/SC are related, but neither can be made to work. So in practice Dembski’s argument turns into citing Axe’s or Behe’s arguments.
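
For reference, the threshold Dembski himself pairs with such arguments is his universal probability bound; here is a rough reconstruction of the arithmetic behind it:

```python
# Rough reconstruction of Dembski's universal probability bound: the
# maximum number of elementary events available in the history of the
# observable universe.
particles = 1e80            # elementary particles in the observable universe
transitions_per_sec = 1e45  # state changes per second (inverse Planck time)
seconds = 1e25              # generous upper bound on the universe's duration
max_events = particles * transitions_per_sec * seconds
print(f"Maximum events:    {max_events:.0e}")      # 1e+150
print(f"Probability bound: {1 / max_events:.0e}")  # 1e-150
```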

4 Likes

Hi @swamidass,

I’ve never read Dembski’s No Free Lunch, which was written in 2001, or about six years before I got involved with the ID movement. All I know is that in The Design of Life (2008), Dembski and Wells admit that a rigorous probability computation for the bacterial flagellum is impossible.

P.S. I’ve just unearthed a 2002 paper by Dembski, titled Still Spinning Just Fine, in which he alludes to a back-of-the-envelope calculation of 10^(-1170) for the probability of the bacterial flagellum - a figure he amends in the paper to 10^(-780) upon learning that only 2/3 of the proteins in the flagellum are unique. But later on in the paper, he acknowledges:

“Miller, it seems, wants me to calculate probabilities associated with indirect Darwinian pathways leading to the flagellum. But until such paths are made explicit, there’s no way to calculate the probabilities.”

And again:

“… [I]f a forward chaining search succeeds, it does so as a highly specific and isolated path through genomic space. In that case the step-by-step probabilities moving forward from A_i to A_(i+1) could still be large enough not to overturn my universal probability bound. But absent a successful forward chaining search, there is no reason to think that success is even possible. Successful forward chaining assumes that a sequence like A_1 through A_n can be made explicit. There is no evidence of this.”

Hi @dga471,

In response to your earlier point about not everything being designed under ID, Dembski addresses this very point in the paper I cited above:

“Intelligent design is even compatible with what philosophers call an occasionalist view in which everything that occurs in the world is the intended outcome of a designing intelligence but only some of those outcomes show clear signs of being designed. In that case the distinction between natural causes and intelligent causes would concern the way we make sense of the world rather than how the world actually is (another case of epistemology and ontology diverging).”

I hope that helps.

Hi @T_aquaticus,

That would be the Sharpshooter fallacy. They have no way of determining how many amino acid sequences will produce a specific function, nor do they have a way of determining how many functions can evolve.

Good point. However, I believe Axe did attempt to meet some of these concerns in his 2010 BIO-Complexity paper, The Case Against a Darwinian Origin of Protein Folds. Not being a scientist, though, I’m not able to evaluate the technical details of his argument.

Hi @Rumraket,

Even if it really were the case that some particular protein function (some particular protein fold catalyzes some specific chemical reaction) is as rare as 1 in 10^77 sequences of similar length, it could potentially have evolved from some other protein function that happens to be nearby, or directly overlap it, in sequence space.
Another problem is that there could be just as many potentially adaptive functions available to some organism.

Axe addresses some of these points in his 2010 paper.

1 Like

He starts at a peak in a fitness landscape and wonders why you don’t get activity as you move down the fitness peak. IOW, Axe focuses on just one solution that nature has found and pretends as if that is the only solution.

1 Like

Well, he certainly attempts to do so, but one problem is that Axe oftentimes speaks about specific protein folds with specific functions instead of just focusing on functions.

What matters to an organism (and to the efficacy of evolution) is the fitness effect of any potentially adaptive function in its current environment, not whether those functions are performed by particular folds, nor even what the specific function is. Novel protein evolution ultimately comes down to this question: is there something functional and adaptive in the sequence neighborhood of any expressed genomic locus?

One can get a sense of what the answer is with a simple thought experiment. Imagine changing the DNA sequence to a totally random one, produced by a monkey with a 4-key typewriter. The new sequence would be a 25% match. What would its effect on function or fitness be? Obviously disastrous. Now imagine making only single-site changes. Would that have a less-disastrous effect? Obviously yes. So as bad as local changes in the space of DNA sequences are, they are nowhere near as bad as jumping to a random place in the space of all possible DNA sequences of that length. So the space of fitnesses is locally far smoother than a “white-noise” fitness surface.
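
A toy simulation makes the contrast concrete (purely illustrative; the sequences are random, and the length and seed are arbitrary):

```python
import random

BASES = "ACGT"
random.seed(0)

def identity(a, b):
    """Fraction of aligned positions where two sequences match."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

original = "".join(random.choice(BASES) for _ in range(1000))

# "Monkey with a 4-key typewriter": jump to a random point in sequence space.
scrambled = "".join(random.choice(BASES) for _ in range(1000))

# A single-site change: one position mutated to a different base.
pos = random.randrange(len(original))
new_base = random.choice(BASES.replace(original[pos], ""))
mutant = original[:pos] + new_base + original[pos + 1:]

print(f"Random replacement identity: {identity(original, scrambled):.1%}")  # ~25%
print(f"Single-site mutant identity: {identity(original, mutant):.1%}")     # 99.9%
```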

4 Likes

Well, isn’t that interesting? It’s clear that merely questioning the common account of MN is not enough, as it leaves us with this divergence. Why, then, is ID accepting a defective epistemology? Shouldn’t it switch to an epistemology where everything actually looks designed (as most Christians believe it actually is), so that epistemology and ontology no longer diverge? Why should we continue to use this distinction between “natural” and “intelligent” causes?

My guess is that such a move, while theologically and philosophically more consistent, would make ID seem less “scientific”, and thus unable to fulfill the deep yearning of many ID theorists for wide acceptance by the scientific community. Therein lies the problem.

4 Likes