Winston Ewert develops his dependency graph model further

But Behe rejects all but one of White’s underlying mechanisms without mentioning them. Why?

I don’t see a calculation to that effect. And Behe ignores those details without telling his readers that he does so. That’s not how real scientists write.

No, we’re pointing out that Behe is misrepresenting this number. Blatantly.

Did you read White’s review before trying to argue for Behe?

I would tend to agree. Nicholas White would appear to be a physician and researcher specialising in the treatment of malaria (and other tropical diseases), rather than a geneticist (or similar) specialising in estimating the probabilities of mutations occurring in certain parasites. I would suspect that, if this “very rough guesstimate” was the main focus of White’s paper, rather than merely an inconsequential aside, it might well have failed peer review.

Thus I cannot help but think that Behe’s employment of “1 in 10^20” as an authoritative figure, and @Giltil’s defense of it as “the best estimate by … one of the world’s foremost malariologists”, are both making fallacious appeals to false authority.

True, it is possible to create cyclical dependencies. So why do I think AminoGraph is restricted to DAGs? First, that is the label it assigns to alignments which it assesses are neither ‘star’ nor ‘tree’; they get labeled ‘dag’ in the output.

Second, I consider this passage to indicate indirectly that cycles are avoided:

Efficiently tracking whether a given change counts as an override is difficult. Accordingly, AminoGraph uses an approximation. Each change has a “depth” that is incremented by one every time it is overridden. Changes with more depth are allowed to override changes with less depth. In cases in which a more derived module alters an inherited change, the depth will, by definition, be increased and thus have higher precedence than the changes it overrides.

As I understand that, a given module can only therefore depend on modules of smaller depth. I believe this would avoid constructing graphs with cycles.
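A minimal sketch of that depth rule makes it easier to see why cycles are avoided. The names below are hypothetical for illustration, not AminoGraph’s actual implementation:

```python
# Sketch of the depth rule from the quoted passage (hypothetical names).

class Change:
    def __init__(self, depth: int = 0):
        self.depth = depth

def can_override(new: "Change", old: "Change") -> bool:
    # Changes with more depth are allowed to override changes with less depth.
    return new.depth > old.depth

def override(old: "Change") -> "Change":
    # Every override increments depth, so an "overrides" edge always runs
    # from a strictly higher depth to a strictly lower one.
    return Change(depth=old.depth + 1)

base = Change()
derived = override(base)
assert can_override(derived, base)
assert not can_override(base, derived)
```

Because depth strictly increases along every override edge, a cycle would require a change’s depth to be greater than itself, so any graph built under this rule is acyclic, consistent with the ‘dag’ label in the output.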


Quoting this for emphasis. This has now been stated in this thread multiple times, and similar points made on this forum in general many dozens of times over the last 3-4 years. Will there be some future where our resident ID-proponents/creationists of various stripes start showing signs of understanding it?


It looks like the given tally of spontaneous rise of resistance relates to clinical presentations, which would represent some degree of fixation. How many of these mutations happen for every one that becomes established seems to me, with the data we have, to be anybody’s guess. How many occurred in patients who died and were cremated, and that was the end of that? How many mosquitoes just failed to find another meal? How many occurred in communities without access to medicine and therefore no selective pressure?

True this. Neither the rate of mutation nor the efficiency of a given selective pressure is universal.


I believe I have mentioned this previously, but many years ago, around the time Behe’s book was first published, I sent White an email summarizing Behe’s argument and asked for his comments. His response was succinct: “Sounds nuts!”

Precisely. This was exemplified by Behe’s childish response to his critics, which was “Well, let’s see your calculation of the odds.” That this challenge was not issued in good faith became clear when Behe conceded nothing after it had been met by several people.

Since Behe hasn’t, why would we expect them to? As we all know, they just blindly parrot whatever their favourite ID hero says, and make no real attempt to understand the counterarguments made against him.


The mutation rate Behe uses for malaria is 10^-8, which is quite generous considering that the mutation rates in multicellular eukaryotes are most probably less than that. Also, the selective pressure of CQ on P. falciparum is obviously very high, arguably much higher than the selection pressure associated with the acquisition of many traits in higher animals, for example echolocation. These two considerations strongly support Behe’s contention that any particular adaptive biochemical feature requiring the same mutational complexity as that needed for chloroquine resistance in malaria is forbiddingly unlikely to have arisen by Darwinian processes and become fixed in the population of any class of large animals (such as, say, mammals), because of their much lower population sizes and longer generation times compared to those of malaria.


How many CQ-resistant variants were outcompeted by CQ-sensitive parasites in mosquitoes? There would be no advantage to resistance there, and probably a big disadvantage. This would be an experiment Behe himself could do with DI funding.


An error is an error, it doesn’t matter how many times it has been uttered. And the error here is to claim that the frequency with which CQ resistance arises in P falciparum has no bearing at all on the plausibility that larger animals can develop traits of similar mutational complexity.
The only way for evolutionists to cope with this difficulty is to contest the idea that adaptations of the same mutational complexity existed at all for higher animals. But such an argument would be biologically nonsensical.

That’s extremely confused. We are disputing the claim that any “larger animal” actually has an adaptation with a similar “mutational complexity” of CQ resistance in P falciparum that would mean it could not have evolved as it would be outside the “edge” of evolution.

You can’t just wave your hand in the direction of CQ resistance in P falciparum and then pretend some attribute of some fish, say, is similarly “mutationally complex” and outside the edge of evolution of the species in which it evolved.

The color phenotypes of the peppered moth that switched during the industrial revolution, for example, just aren’t the same as CQ resistance in P falciparum.

Existed? As in, somewhere in sequence space, there was an adaptation that could have been produced if only some sufficiently unlikely series of mutations happened, but since it was too unlikely it never did? I’m sure there are such potential adaptations that never evolved because they were too unlikely.

However, there’s no evidence any known adaptation is of such a complexity that it is too unlikely to have evolved when it (is implied by some phylogeny to have) did.

What evidence do you have that any known adaptation in a “higher animal” is of a “mutational complexity” that means it’s outside the timeframe in which that adaptation evolved? Find one and show that it is. Otherwise you’re literally just imagining things with no evidence.


No. The error here is to claim that the frequency with which CQ resistance is known to arise is simply the probability of two specific mutations. White wrote the former, not the latter. Behe is grossly misrepresenting White.

White also wrote this in the paper Behe cited. Behe pretends items 2-8 simply don’t exist.

When ethical scientists disagree with someone, they don’t cherry-pick a number from a review and pretend that it means something very different from what the author meant. Those who tout Behe avoid acknowledging this.

Remember, you wrote:

The second and third are clearly not true.


That is not clear to me. Malaria is often endemic to regions where there are people who live at subsistence level or who, for other socio-political reasons, do not typically have access to proper medical care. Beyond that, nobody likes anti-malaria meds as a prophylactic, due to side effects. As there is latency between infection and illness, even where CQ would be administered there is an interval where the parasite multiplies unchecked. The selective pressure of CQ on P. falciparum is zero where CQ is absent in the host. This is another material consideration that does not appear to register in the calculation. Behe’s number seems as rigorous as whatever number you pull out of the Drake equation - you could plug in virtually any factors you please.

10^-8, huh?

So the probability of getting a specific mutation would be (10^-8)/3.[1] And the probability of getting that specific mutation and one of two other specific mutations would be 2*((10^-8)/3)*((10^-8)/3).

That’s 1 in 4.5*10^16. Not 1 in 10^20.
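The arithmetic above can be checked directly. This assumes, as in the footnote, a per-site rate of 10^-8 split evenly over the three possible target bases:

```python
mu = 1e-8                             # assumed per-site, per-replication mutation rate
p_specific = mu / 3                   # one specific substitution (3 possible target bases)
p_pair = 2 * p_specific * p_specific  # that mutation plus one of two second mutations
odds = 1 / p_pair
print(f"1 in {odds:.2e}")             # → 1 in 4.50e+16
```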

So Behe isn’t using that mutation rate to calculate the probability of chloroquine resistance. In fact he’s not doing a calculation at all, he’s just copying White’s rough estimate based on observed occurrences - which is the probability of chloroquine resistance occurring and spreading, and which is orders of magnitude different from the initial occurrence rate. So are you.

Why are you (and Behe) using a number that based on your own input data is three orders of magnitude out?

Saying it’s arguably much higher is an opinion, not an argument. An actual argument that it’s much lower is that individual P. falciparum organisms frequently never encounter chloroquine so have no selection pressure to handle it whatsoever, whereas mature insectivorous bats always need echolocation or they starve.

P.S. ‘malaria’ doesn’t have a mutation rate. ‘P. falciparum’ does.

  1. Assuming equal probabilities of mutating to one of the other three nucleotides and that only one other nucleotide can lead to the replacement amino acid. ↩︎


Here is what Behe means by “mutational complexity”:

(By “the same mutational complexity” I mean requiring 2-3 point mutations where at least one step consists of intermediates that are deleterious, plus a modest selection coefficient of, say, 1 in 10^3 to 1 in 10^4. Those factors will get you in the neighborhood of 1 in 10^20.)

Please explain why denying that such adaptations occurred is “biologically nonsensical”.

Further, there is an additional problem in Behe’s assertion. The probability that one such adaptation will occur in one of the many species of “higher animal” is rather greater than the probability that a particular adaptation of that sort will arise in a particular species. And Behe has not adequately accounted for that.
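The point about many species (and many possible adaptations) can be made concrete with a back-of-envelope sketch. The counts below are purely illustrative assumptions, not estimates:

```python
p_particular = 1e-20   # Behe-style odds of one named adaptation in one lineage
n_lineages = 1e5       # assumed number of candidate "higher animal" lineages
n_targets = 1e4        # assumed number of possible adaptations of this sort
# For tiny probabilities, P(at least one, somewhere) is well approximated
# by the sum of the individual probabilities:
p_any = p_particular * n_lineages * n_targets
print(p_any)           # nine orders of magnitude larger than p_particular
```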


It happens that Behe is perfectly aware that the per-parasite occurrence of de novo resistance to CQ is made up of several components. I quote him:
The best estimate of the per-parasite occurrence of de novo resistance is Nicholas White’s value of 1 in 10^20. This number is surely made up of several components, including: 1) the probability of the two required mutations identified by Summers et al. coexisting in a single pfcrt gene; 2) the value of the selection coefficient (which can be thought of as the likelihood that the de novo mutant will successfully recrudesce in a person treated by chloroquine and be transmitted to another person); and 3) the probability of any possible further PfCRT mutation needed to confer chloroquine resistance in the wild coexisting in the same gene with the other mutations.
The known point mutation rate of P. falciparum, combined with the apparent deleterious effect of the required mutations occurring singly, suggests that component 1 from the previous bullet point will account for the lion’s share of White’s estimate, probably at least a factor of 1 in 10^15-10^16 of it. The other factors would then account for 1 in 10^4-10^5. These values are somewhat flexible, accommodating the uncertainty in our knowledge of the exact values in the wild. In other words, a decrease in our best estimate of the value of one factor can be conceptually offset relatively easily without affecting the argument by supposing another factor is larger, to arrive at 1 in 10^20.
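For reference, the bookkeeping in that quoted passage is just this multiplication, using Behe’s posited values (not measurements):

```python
component_1 = 10 ** -15.5   # two mutations co-occurring (midpoint of his 1e-15..1e-16)
other_factors = 10 ** -4.5  # selection coefficient etc. (midpoint of his 1e-4..1e-5)
total = component_1 * other_factors
print(f"{total:.1e}")       # → 1.0e-20, White's figure
```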

In addition to your errors which have been well-explained by others above, I will remind you of another problem with Behe’s argument: Even if he is correct about the frequency with which CQR evolves in P. falciparum, this does not define the frequency with which any CCC trait will evolve even in P. falciparum. That would require knowing how many possible CCC traits exist within the P. falciparum genome.

It’s as if Behe were to say “When I bought my lottery ticket, it said my odds of winning are 1 in 1 million. But about every 10th drawing someone wins. That means the lottery is fixed!” Hopefully, someone would explain to him that, if 100,000 people are buying a ticket to the same lottery, then the odds of someone winning are 1 in 10, and therefore the outcome is not surprising at all.
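The arithmetic of the lottery analogy checks out, using the illustrative numbers from the analogy itself:

```python
p_win = 1e-6        # one ticket's odds of winning
tickets = 100_000   # tickets sold per drawing
p_someone = 1 - (1 - p_win) ** tickets
print(f"{p_someone:.3f}")   # → 0.095, i.e. roughly 1 win per 10 drawings
```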

The really remarkable thing about Behe’s “Edge” argument is that it fails at so many different levels. Truly, his crowning achievement.


Hi Gil
Here are my thoughts on the lottery argument as it is used to counter Behe’s argument. Any comments?

The lottery would go out of business if you went from 5 balls to 50, as most likely no one would ever win even if the entire population was buying tickets. Behe’s argument shows how only a few mutations can be a problem, but if you carefully read his paper (2004, with David Snoke) the most sensitive model parameter is how many changes are required for the adaptation. Biology is dealing with exceedingly long sequences that we empirically know are often very different yet still have biological function.

What evolutionists are ignoring is understanding why an adaptation can require on the order of 10^20 organisms despite only requiring 2 specific changes.
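Whatever one makes of that argument, the sensitivity it alludes to is easy to illustrate: the odds of hitting k specific simultaneous changes fall off geometrically in k. Toy numbers only; this is not the actual 2004 model:

```python
mu = 1e-8   # assumed per-site mutation rate (toy value)
for k in (1, 2, 3):
    # Odds of k specific simultaneous changes scale roughly as mu**k,
    # so each extra required change costs about eight orders of magnitude.
    print(k, f"{mu ** k:.0e}")
```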

He ignored it IN THE BOOK, Gil, so there’s no evidence that he was “perfectly aware” until this was pointed out to him. In fact, that makes Behe look more dishonest, not less.

But despite claiming that you “avoid nothing,” you avoided noting that the source of your quote is not the book. Interesting.

Behe has zero evidence that this was not an existing polymorphism in the population, perhaps maintained by epistatic interactions. If that was the case, the mutational probability is ONE.

Behe’s handwaving is further demolished here:


Understanding Michael Behe



The known point mutation rate of P. falciparum, combined with the apparent deleterious effect of the required mutations

Notice that Behe still made this claim TWICE in that passage, falsified by the Summers et al. paper cited above.


It’s interesting to note how easily @Giltil is deceived by this. “Component 1 from the previous bullet point” is the entirety of what Behe is trying to calculate. So the fact that the other numbers can be fudged up or down to end up at 1 in 10^20 really doesn’t help Behe out at all.

That passage amounts to Behe admitting he really has no clue whether the figure he is using is remotely correct. He could be off by 4-5 orders of magnitude, at least! “But, hey, no problem! Let’s just pretend I am right anyway!” And @Giltil is more than happy to play along.

Another sneaky trick that Behe tries to play there: When he says the frequency of the two mutations occurring is 1 in 10^15 to 10^16, this is based on his (erroneous) estimate of the frequency of a single mutation being 1 in 10^8. He is squaring this figure to obtain the rate for two mutations. However, this can only be the case if we are calculating the rate of both mutations occurring simultaneously in the same organism. That is to say, Behe’s figure already assumes that both mutations are so deleterious on their own that they must occur simultaneously, even though elsewhere he denies basing his argument on this assumption. When he refers to the “apparent deleterious effect of the required mutations occurring singly”, this is actually not apparent in any evidence Behe provides himself (and, in fact, this was outright refuted by the Summers paper that Behe bizarrely claimed vindicated his argument). This deleterious effect is only “apparent” to Behe because he assumes his figure for the odds of the two mutations occurring is correct.
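The simultaneity assumption buried in that squaring can be illustrated with a toy comparison. All numbers here are assumed for illustration; this is not a population-genetics model:

```python
mu = 1e-8 / 3        # assumed rate of one specific substitution
# If both mutations must arise in the same replication event:
p_simultaneous = 2 * mu * mu
# If single mutants are roughly neutral and persist, a lineage carrying the
# first mutation gets many further chances at the second; say g generations:
g = 100              # assumed persistence time of a single mutant
p_stepwise = 2 * mu * mu * g   # g-fold larger than the simultaneous figure
```

Squaring the single-mutation rate, as Behe does, silently sets g = 1, i.e. it assumes single mutants never survive to try again.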

I have often said that the primary contribution of ID proponents in these discussions is in providing examples of how they allow themselves to be deceived. This is a particularly revealing instance.


We are not. See my answer to @RonSewell at 155.