Methinks it is sort-of like two weasels

See the opening post, the program that generates a sentence from a random string.

“An irreducibly complex evolutionary pathway is one that contains one or more unselected steps (that is, one or more necessary-but-unselected mutations). The degree of irreducible complexity is the number of unselected steps in the pathway.” (Behe here)

The problem is that you are painting the bullseye around the bullet hole: the Sharpshooter fallacy. You are focusing on just one outcome to the exclusion of all other possible outcomes.

What you and other ID proponents should be calculating is the probability of a new mutation interacting with an already existing neutral mutation and producing a beneficial phenotype. Not a specific beneficial phenotype, but any beneficial phenotype.

How many neutral mutations exist in the human population, and how many neutral mutations did humans inherit over the last hundreds of millions of years? How many new mutations will interact with these neutral mutations and produce beneficial phenotypes? It would seem to me that there are millions and millions of potential beneficial interactions, be it within the amino acid sequence of a single gene product, protein-protein interactions, protein-DNA interactions, and even protein-RNA interactions.

In addition, neutral mutations are continually becoming fixed in any given population, and before reaching fixation they can still be found in a large percentage of a population. The fixation of neutral mutations is the rule, not the exception.
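That last point can be made quantitative. Under the neutral theory, the long-run rate at which neutral mutations reach fixation equals the neutral mutation rate itself, independent of population size. A minimal sketch, with illustrative (not measured) numbers:

```python
# Neutral-theory arithmetic (illustrative numbers): in a diploid population
# of size N, each new neutral mutation fixes with probability 1/(2N), and
# 2N * mu new neutral mutations arise per generation.

def neutral_fixation_rate(population_size: int, mutation_rate: float) -> float:
    """Expected neutral fixations per generation: (2N * mu) * 1/(2N) = mu."""
    new_mutations_per_generation = 2 * population_size * mutation_rate
    fixation_probability = 1.0 / (2 * population_size)
    return new_mutations_per_generation * fixation_probability

# Population size cancels out: the fixation rate is just the mutation rate.
for n in (1_000, 1_000_000):
    assert abs(neutral_fixation_rate(n, 1e-8) - 1e-8) < 1e-20
```

So a steady trickle of neutral fixations is exactly what the arithmetic predicts, in populations large or small.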


But you can view events that have occurred either before the fact or after the fact. Viewed before the fact, an event can be improbable.

So you cannot compute the probability of drawing a full house, viewing this event before the fact?

Well, Behe also examines HIV, where, as it turns out, one new protein-protein binding site did evolve. If it were all that easy to produce a new binding site, then HIV would be cranking them out right and left.

But again, I’m sure if we saw a flagellum-in-process somewhere, we’d hear about it.

Right, I quoted that to show that he does take into account circuitous paths.

Again, Behe is observing what evolution actually did, where all paths were open to evolution.

Viewing the event before the fact, yes. No matter when we pick a specification, the probability, viewed before the fact, will be the same.

Which you could say Behe does: after deriving a rate for chloroquine resistance, he applies this to a hypothesis concerning protein-protein binding sites, and turns to the malaria data set (both with humans and with the parasite), and to HIV, to see how the hypothesis is borne out.

Perhaps you would like to explain where Dawkins does so. The weasel program - which is covered in only a few pages in The Blind Watchmaker - is intended only to show the advantage of what Dawkins calls “cumulative selection” over “single-step selection” (which is essentially random guessing). It is not billed as or intended to be a simulation of biological evolution and Dawkins says that it is not accurate as such.

(It is interesting that criticisms focus on the relatively unimportant weasel program and not on the biomorph program that occupies the bulk of that chapter - and is intended to be closer to biological evolution, though still not a simulation).


That is certainly not a definition of design. Nor can it be considered evidence of design. After all, unselected steps will occur (quite frequently) and evolution is hardly going to care about whether it is including unselected steps or not.

Any argument to the contrary makes the mistake of confusing probabilities before and after the event. A particular unselected step may be unlikely before the event, but that does not mean that unselected steps do not occur nor does it mean that evolution cannot incorporate those that do occur.


Unsupported claim.


Picking a specification with knowledge of the event cannot be done before the fact. As I point out, the a priori probability of such a specification being met is irrelevant to choosing design. The probability of the sequence having such a specification is the relevant probability, and for a sequence of 500 coin tosses it is clearly much higher than 2^-500.


The sample mean is not relevant; you are not using a likelihood to estimate a population parameter. The interpretation you are giving the likelihood is not correct.

Routes available to evolution are not part of your calculation; you do not even name the particular pathway you claim is improbable.
You further presume that all steps are “not selectable” with no justification. You do not consider the probability of “scaffolding” that is known to produce IC structures. You do not consider sexual mixing, which can combine sequences in ways that bypass “non-selectable” steps.

Most of these calculations are not possible outside of special circumstances such as phylogenetic analysis. This is why we don’t see biologists using this approach at all. ID researchers have no special knowledge of mathematics that could make this possible. I am not the first to make these criticisms.


Vision can be of existential importance, and so selection is highly operative. There is plenty of intermediate functionality, so while the camera eye is complex it is not irreducibly so. Form follows function, and function responds to environmental pressure. What would be the barrier to convergent evolution of the camera eye?


First off, cephalopods don’t have the vertebrate camera eye; they have a distinct cephalopod camera eye. Second off, see Nilsson D., Pelger S., “A pessimistic estimate of the time required for an eye to evolve,” Proceedings of the Royal Society of London, Series B 1994; 256:53-58.

Oh, I’d like to point out that some arthropods (and members of several other phyla too) also have a form of camera eye, and some cephalopods have camera eyes that lack some of the components of the canonical camera eye.


ID methods are unique in that the probability of an event-type is considered to be smaller the more often it is observed. Normally we think that events which occur more often are more likely, not less.


Let’s pick 500 heads in a row before the fact.

The probability of a sequence meeting a specification is what is relevant.

But who talks about the probability of having a specification? Probability is about events.

But I was responding to people here defending it.

This is definitely a definition of design; you may say it doesn’t work, but it’s still a definition. And unselected steps are indeed more difficult to get through: the more such steps there are in a row, the exponentially more difficult it is for evolution to get through them all. Say each step has probability .1 of being gotten through; then two such steps in a row have probability .01, three have probability .001, and so on.
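The compounding in that example can be sketched directly (the .1 per-step figure is the illustrative number from above, not a measured rate):

```python
def pathway_probability(p_per_step: float, unselected_steps: int) -> float:
    """Probability of traversing a chain of independent unselected steps,
    each with the same per-step probability."""
    return p_per_step ** unselected_steps

# Each additional unselected step multiplies the improbability:
assert abs(pathway_probability(0.1, 1) - 0.1) < 1e-12
assert abs(pathway_probability(0.1, 2) - 0.01) < 1e-12
assert abs(pathway_probability(0.1, 3) - 0.001) < 1e-12
```

This assumes the steps are independent and all must occur in sequence, which is exactly the modeling assumption the surrounding debate is about.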

You are painting the bullseye around the bullet hole. If we are talking about evolution, the analogous calculation is the probability of drawing a better hand than the other people at the table.


That’s the Sharpshooter fallacy. What you are ignoring is all of the beneficial interactions that didn’t happen. By the same approach, you would be amazed that so many lotteries have winners, because the chance of winning is 1 in hundreds of millions. What you would be ignoring is all the people who lost.
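The lottery point is easy to quantify. Assuming illustrative numbers (1-in-292-million odds per ticket and 100 million tickets sold, both hypothetical), the chance that somebody wins is substantial even though any given ticket almost certainly loses:

```python
p_win = 1 / 292_000_000   # hypothetical per-ticket odds
tickets = 100_000_000     # hypothetical number of independent tickets sold

# P(at least one winner) = 1 - P(every single ticket loses)
p_some_winner = 1 - (1 - p_win) ** tickets
assert 0.28 < p_some_winner < 0.30   # roughly 0.29: a winner is unremarkable
```

Marveling at the improbability of the particular winner's ticket, rather than at the probability that some ticket wins, is the fallacy in a nutshell.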

In the same way, there are many, many neutral mutations that have accumulated in any given genome. There may be many, many chances for a new mutation to interact with one of those accumulated neutral mutations and produce a beneficial phenotype. It is incorrect to single out the one beneficial phenotype that did evolve and calculate just the probability of that single phenotype.


Yes, I am: I am using the maximum-likelihood estimate of the mean, which is the sample mean.
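For what it’s worth, under a normal model the sample mean does maximize the likelihood. A quick numerical check with a toy sample (the data values are purely illustrative):

```python
import math

data = [2.0, 3.5, 1.0, 4.5, 3.0]   # toy sample
sample_mean = sum(data) / len(data)

def log_likelihood(mu: float, sigma: float = 1.0) -> float:
    """Gaussian log-likelihood of the toy sample at candidate mean mu."""
    return sum(
        -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
        for x in data
    )

# The log-likelihood at the sample mean beats nearby candidate means.
for candidate in (sample_mean - 0.5, sample_mean + 0.5):
    assert log_likelihood(sample_mean) > log_likelihood(candidate)
```

Whether that estimate is being *interpreted* correctly in the surrounding argument is, of course, the actual point in dispute.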

No routes were excluded; two routes have been identified, and it’s reasonable to calculate the probability of resistance from the rate at which it arises: 1 in 10^20.

The justification is that the chloroquine resistance rate is about the square of the atovaquone resistance rate. Evolution was not restricted in the interval of interest, scaffolding and any other path was available during this time.

Chloroquine resistance is not representative of evolution in general. Only very rarely are specific adaptations limited to just 1 or 2 mutations in the entire genome. Just looking at life in general will tell you there are many, many potential pathways for evolution to take, not just 1 or 2.


The improbability of the eye evolving in the first place.

Several comments: if the steps are independent, it is indeed remotely unlikely to develop even once! 0.99^2000 ≈ 1.86 × 10^-9.
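The arithmetic can be checked directly:

```python
# Probability that all 2000 independent steps succeed, at 99% each:
p_all_steps = 0.99 ** 2000
assert 1.8e-9 < p_all_steps < 1.9e-9   # about 1.86e-9, matching the figure above
```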

At step 6 a lens appears! By magic?

Then we have:


The response obtained in each generation would then be R = 0.00005m, which means that the small variation and weak selection cause a change of only 0.005 % per generation. The number of generations, n, for the whole sequence is then given by 1.00005^n = 80 129 540…

© Copyright Original Source
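For reference, the quoted relation can be solved for the number of generations n (the 80,129,540 factor and the 0.005 % per-generation response are taken from the quoted passage):

```python
import math

total_change = 80_129_540   # overall change factor from the quoted passage
per_generation = 1.00005    # 0.005 % response per generation

# Solve per_generation ** n = total_change for n:
n = math.log(total_change) / math.log(per_generation)
# n comes out to roughly 364,000 generations; at one generation per year,
# that is on the order of 364,000 years, consistent with the paper's estimate.
```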

But isn’t this assuming that the selectable variation arising in each generation becomes fixed within that generation? With a generation time of only 1 year, I don’t think there is enough time for this to happen. I would then propose the following:

1.0000 + 0.00005 × (1 + 0.00005 × (1 + 0.00005 × ( … ))) [nested n times] = 80,129,540

So that each time, the variation for the next step occurs within the fraction of the population that varied in the current step.

Finally, as they admit, the neural processing required for the more complex eyes was completely left out! This is a crucial factor, and would substantially reduce the probability.

Which makes it more surprising if evolution did it…