Evolution News & Views is gone

I think Dembski’s CSI argument is actually completely sensible, if you grant one condition. (But as we will see, this condition turns out to be ridiculous.) The condition is that the only evolutionary process is mutation. The sequences in a population are then just wandering randomly in sequence space. Specified Information then calculates, for all the sequences, the fraction of them that have a level of adaptation as high as we see in a natural population. For example, what fraction of them can fly as well as a sparrow? It will immediately be obvious that this fraction is incredibly small – monkeys typing sequences on four-key typewriters will essentially never produce an organism that flies as well as a sparrow. Dembski compares this probability to 1 in 10^150 (or so), which is intended to be 1 in the number of particle states there have ever been in the whole history of the universe. I agree with this logic – if you said that random mutation alone had made a sparrow, that would be an extremely implausible hypothesis.

So we don’t have n = 1, we have n = 10^150. If we can show that P < 1/n, we have established that the sparrow flies so well that these natural processes could not plausibly have found a genetic sequence that good or better.
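As a toy illustration of how quickly that bound bites, assuming the 10^-150 universal probability bound as stated above (a simplification on my part – the real CSI calculation concerns the fraction of sequences at least as well adapted, not an exact match), one can ask how long a single specified four-letter sequence must be before its probability under uniform random “typing” falls below the bound:

```python
# Probability that a uniformly random four-letter (DNA-like) sequence of
# length L matches one specific target sequence: (1/4)**L.
def p_exact_match(L: int) -> float:
    return 4.0 ** (-L)

# Dembski's universal probability bound, as cited in the post above.
BOUND = 1e-150

# Find the shortest length at which a single specified sequence
# already falls below the bound.
L = 1
while p_exact_match(L) >= BOUND:
    L += 1
print(L)  # -> 250
```

So a specified sequence of only about 250 nucleotides already exhausts the bound – which is why the bound itself is not the weak point of the argument; the “mutation only” condition is.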

But what about natural selection? Well, this argument does not rule it out as able to make the sparrow. And there is the fatal flaw.

So Dembski’s calculation makes perfect sense, with that one condition: that there is no natural selection. Without natural selection, the CSI argument works fine.

And what about ASC? Well, there seems to be some argument that a simple algorithm leading to the observed phenotypes (or genotypes) is a hallmark of Design. It might be, for some designers. But there is no reason to think that it could not also result from natural selection. And all of that is never explained in the Dembski/Ewert volume, which is supposed to be the latest and greatest.


A post was merged into an existing topic: Evaluating Kurt Wise’s Devotional Biology Textbook

Random mutation without gene duplication?

Well, this is a response to a post that has disappeared into an ancient thread for which it’s decidedly off-topic. What gives?


I think you missed my point. I’m talking about putting CSI into a Neyman-Pearson framework for a Uniformly Most Powerful test, where there can be a comparison against a null hypothesis. Dembski always uses a single sample (n = 1), and according to statistical theory the Type II error rate will then be the same as the Type I error rate.

In other words, the probability of rejecting the null when it is false is equal to the probability of falsely rejecting the null when it is true. The test becomes a randomized “coin flip” with a biased coin (probability of “heads” = Type I error rate). The results are indistinguishable from random.
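To put a number on how little a single observation buys, here is a minimal sketch under an assumed toy model of my own (a one-sided z-test of H0: mu = 0 against a small true effect, with n = 1 observation of unit variance – nothing ID has actually specified). The rejection probability under the alternative barely exceeds the Type I error rate:

```python
from statistics import NormalDist

nd = NormalDist()
ALPHA = 0.05
crit = nd.inv_cdf(1 - ALPHA)  # one-sided critical value, ~1.645

# n = 1 observation X ~ N(mu, 1); test H0: mu = 0 vs H1: mu > 0,
# rejecting when X > crit.
def power_n1(mu: float) -> float:
    # P(X > crit) when X ~ N(mu, 1)
    return 1 - nd.cdf(crit - mu)

print(round(power_n1(0.0), 3))  # Type I error rate -> 0.05
print(round(power_n1(0.1), 3))  # power for a small effect -> 0.061
```

With one data point and a small effect, the test is scarcely better than flipping a biased coin that comes up “reject” 5% of the time.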

Neyman-Pearson tells us how to make the best possible inference in theory, so CSI cannot escape the conclusion even if it is not a formal Likelihood Ratio test. The statistical properties of CSI can only be terrible.

At least when n = 1. We can’t just jump to n = 10^150; we need actual data from a (random) sample. If ID stated some hypothesis that could be parameterized, then we might come up with a Maximum Likelihood Estimate for the parameter based on n > 1, and then we would have a real statistical test of ID on our hands.
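Sticking with an assumed toy normal model of my own (again, ID offers no parameterized hypothesis to plug in), the sample-size point can be sketched directly: with n unit-variance observations the z-statistic is sqrt(n) times the sample mean, and power climbs from near alpha toward 1 as n grows:

```python
from statistics import NormalDist

nd = NormalDist()
ALPHA = 0.05
crit = nd.inv_cdf(1 - ALPHA)  # one-sided critical value, ~1.645

# Power of a one-sided z-test of H0: mu = 0 against true mean mu,
# with n unit-variance observations: reject when sqrt(n)*xbar > crit.
def power(mu: float, n: int) -> float:
    # sqrt(n)*xbar ~ N(mu*sqrt(n), 1) under the alternative
    return 1 - nd.cdf(crit - mu * n ** 0.5)

for n in (1, 10, 100, 1000):
    print(n, round(power(0.1, n), 3))  # power rises steadily with n
```

The same effect size that was undetectable at n = 1 becomes easy to detect with enough data – which is exactly what a test based on a single sequence gives up.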

Others have come at this from the other direction (Theodoros, White) by testing common ancestry versus (a parameterization of) separate origins. This approach is much easier, because it starts from established methods to test real data, rather than trying to invent new methods as Dembski does.

I still disagree with you here, or at least I think we need to grant more than just one condition. I won’t chase down that rabbit hole just now, but we’ve discussed this previously on PT.
The point I’m trying to make is that CSI can only be a terrible method by any statistical measure of “good” inference.

It is true that Dembski’s test is intended to test one sequence at a time. He sets the tail so that the probability of being in it is extremely small, P < 10^(-150), the idea being to rule out “chance”. That undoubtedly means that a designed sequence would often fail to be recognized as designed. I think in part he is being conservative, but we also have to realize that we see lots of things around us that we won’t even try to test (rocks, for example), so we have chosen something that is unusually well-adapted.

We don’t really have a Neyman-Pearson hypothesis test because there is no clear way of computing probabilities under “design”.

And of course, natural selection cannot be ruled out as the possible cause of “design”.


No, it is not, and definitely not “Uniformly Most Powerful”.

Where I’m heading with this is that even if CSI were a UMP test, and we let slide “probability under Design”, it can only be a terrible test with a sample size of n = 1. Dembski may ignore statistical theory, but it still applies. I might try to write this up more formally.
