Methinks it is sort-of like two weasels

Despite the “but” that doesn’t change anything I’ve said. The point is that it is the probability of meeting the specification that matters not the probability of the particular path taken to reach it.

Certainly not. The prediction is a valid specification in itself - because it is made with no knowledge of the outcome. It is not just “separable” - it is completely separate.


It is the joint probability of the data observed. This can be used to estimate parameters (maximum likelihood), including the population mean. The value of the likelihood itself has no interpretation other than the joint probability of all the data. It does not imply the final observation in a series is improbable. That would be equivalent to saying that you are improbable because you could only be the result of a single sperm from your father and a single egg from your mother.

Likelihoods are also used in Likelihood Ratio tests, the ratio of likelihood for two models for the same data. Bayes Factors adjust this for a prior assumption.
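A minimal sketch may make the two points above concrete (my own illustration, not from anyone's post): the likelihood is the joint probability density of all the data taken together, and the sample mean is the value of the population mean that maximizes it.

```python
import math
import random

def log_likelihood(data, mu, sigma=1.0):
    """Joint log-density of all the data under Normal(mu, sigma)."""
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (x - mu)**2 / (2 * sigma**2) for x in data)

random.seed(0)
data = [random.gauss(5.0, 1.0) for _ in range(100)]
sample_mean = sum(data) / len(data)

# The sample mean maximizes the (log-)likelihood: any other candidate
# value for the population mean scores lower.
assert log_likelihood(data, sample_mean) > log_likelihood(data, 4.5)
assert log_likelihood(data, sample_mean) > log_likelihood(data, 5.5)
```

Note that the likelihood's absolute value is tiny for any large dataset (a product of many densities), which is exactly why it says nothing about any single observation being "improbable"; only comparisons between models, as in a likelihood ratio, carry information.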

There are other difficulties with your calculation: it assumes IID events, ignores selection, ignores “indirect” routes, and ignores deletions, so it is invalid outside of very specific types of data (e.g. phylogenetic analysis). It also lacks any definition of “design” that would allow the likelihood of design to be calculated for a likelihood ratio test*. We can force a Bayes Factor by making very silly assumptions, which is the closest “valid” interpretation of what your calculation actually means.

* With the possible exception of a Dependency Graph model (Ewert 2016), but this has other problems.


Having now seen how Behe derived that 10^20, it really isn’t the probability of anything.


I agree with this, however, I do insist that basic resistance requires one of two paths, each with two non-selectable mutations.

“But these could be separated into two main lineages” is indeed two pathways.

Quote, please?

Not at all, he observed what evolution actually did, both with existing variation and with new mutations.

Well, you have to mark some outcomes, to compute a probability.

Yet when the hierarchy isn’t nested (as in the camera eye of cephalopods and vertebrates), reuse of design is shown to be likely there, and so why not elsewhere? One good counterexample overturns evolution, while nested design does not overturn design.

But you point to me existing, which is after the fact, and then say “how improbable!”, which is before the fact. You have to choose one of these perspectives, and stick to it.

Yet what is undoubtedly meant is a new beneficial protein-protein interaction:

“Conversely, in its battle with poison-wielding humans, the malaria genome has also been terribly scarred. In the past half century a number of genes have been broken or altered to fend off drugs such as chloroquine. As discussed in Chapter 3, none of the changes seem to be improvements in an absolute sense. They disappear once drug therapy is discontinued. Has the war with humanity caused malaria to evolve any new cellular protein-protein interactions? No. A survey of all known malarial evolutionary responses to human drugs includes no novel protein-protein interactions. Although, as above, it can’t be ruled out that some such thing developed in the past, no such change persisted, so none could have been as effective as the damaging changes discussed earlier.” (The Edge of Evolution, p. 136)

I certainly haven’t done a survey, but if any of the many investigators of bacteria were to see such a thing, it would be major news.

But he doesn’t ignore it, “Because the cilium is irreducibly complex, no direct, gradual route leads to its production. So an evolutionary story for the cilium must envision a circuitous route, perhaps adapting parts that were originally used for other purposes. Let’s try, then, to imagine a plausible indirect route to a cilium using pre-existing parts of the cell.” (Darwin’s Black Box, p. 62)

Why? Walk us through your logic.

Then we agree that Behe is wrong.

You have zero evidentiary basis for your assertion. What Behe says is not evidence.

Not ONLY two pathways, as you claim.

Read White’s review. It’s obvious. Also, Behe’s citing the secondary instead of primary literature is incredibly unscholarly.

Show me how Behe accounted for existing variation MATHEMATICALLY.


Sure, but “success” implies a goal. The evolution of resistance is not teleological.

Cephalopod and vertebrate are in themselves terms which group organisms by defining characteristics according to a nested hierarchy. Eyes are organs, not organisms.

Was design reused for ciliary and rhabdomeric photoreceptors? The focusing mechanism? Retina and optic nerve arrangement? Cornea? There are striking similarities and differences, which is what we would expect from parallel and convergent evolution of active predators.


But that’s what you are doing with the flagellum, which already exists, and then saying “how improbable!”, which is before the fact. As I keep saying, this is an own goal. The point of the analogy with your own existence is to make it more obvious why doing the calculation after something already exists is meaningless.

Nothing here constitutes a meaningful response to what I wrote. The fact that the malaria parasite has not evolved a new protein-protein interaction as a means of resistance to the antimalarial drug chloroquine has zero bearing on whether protein-protein interactions are generally prohibitively unlikely to evolve. All that tells you is that a protein-protein binding interface is not a means by which chloroquine resistance evolves in malaria.
There’s a mountain of difference between asking whether a protein-protein interface can evolve if it is beneficial, compared to whether such an interface happens to give resistance to chloroquine.

There’s no reason to think protein-protein binding interfaces are difficult to evolve at all, and I cited experimental evidence and realistic modeling work that shows this.

Well, I think the fact that you acknowledge that you don’t really know whether a flagellum is in the process of evolving anywhere takes the air out of that balloon. There are many other problems with this “why isn’t it happening right now before my eyes?” kind of reasoning. For example, the fact that bacteria can acquire flagella from each other by horizontal gene transfer (if you can just acquire one from your neighbor, there’s no pressure to evolve another) rather significantly undermines any putative inference about what can and can’t happen in principle.


That’s about bacterial flagella, Dory, not about chloroquine resistance. Neither Behe nor you has any idea what alternative paths to chloroquine resistance there might be, that are not included in his or your calculations.


Not even. The cilium is a eukaryotic structure.


Oops! You’re right. My mistake. It doesn’t make @lee_merrill’s remark any more relevant though.


So now I’m going to say what’s wrong with specification.

The basic idea is that a valid specification is as good as a prediction. Intuitively it might be convincing - if there’s an odd pattern and it’s really unlikely then that’s a reason to think that something is going on. The problem is that that sort of thinking uses the wrong probability.

So, let us consider 500 coin tosses. Any specification that completely defines the sequence will meet Dembski’s universal probability bound.

A prediction of the exact sequence is fine of course - there is only one sequence that can count as a success.

But suppose we get to choose a specification after the fact? 500 heads will work. So will 500 tails. Or alternating heads and tails - either way round. That’s already 4 and it’s not hard to think of more. In fact the number has a lot to do with how many patterns you could find and accept as valid specifications (which has a subjective element). The probability of one of those sequences occurring is going to be much higher than Dembski’s universal probability bound. And that is really the probability you should be using.
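The inflation from allowing several after-the-fact specifications is easy to simulate. Here is a sketch of my own (using 10 tosses rather than 500 so the simulation finishes quickly; the logic is identical): with k patterns accepted after the fact, the relevant probability is k · 2⁻ⁿ, not the 2⁻ⁿ of a single predicted sequence.

```python
import random

n = 10
# Four specifications we might accept after seeing the data.
specs = {
    "H" * n,          # all heads
    "T" * n,          # all tails
    ("HT" * n)[:n],   # alternating, heads first
    ("TH" * n)[:n],   # alternating, tails first
}

random.seed(1)
trials = 100_000
hits = sum(
    "".join(random.choice("HT") for _ in range(n)) in specs
    for _ in range(trials)
)
rate = hits / trials

# Four accepted patterns -> roughly four times the probability of any
# single predicted sequence (4 / 2**10 = 0.0039...).
print(rate, 4 / 2**n)
```

And 4 is a severe underestimate of the pool of patterns a motivated observer would accept, which is exactly the subjective element noted above.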

The typical scientific approach would be to create a hypothesis based on the data - and to use that to predict the result of subjecting a different - unexamined - data set to the same analysis. That avoids the problem, since the specification is not based on the data it is compared to at all.

Dembski’s approach more closely resembles the practice of “p-hacking” (as seen on xkcd), which is frowned upon for obvious reasons.
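The xkcd scenario is easy to reproduce. A sketch of my own, with hypothetical numbers: run 20 independent tests of a true null hypothesis at roughly the 5% level, and some test will usually come out “significant” by chance alone.

```python
import random

random.seed(2)

def fake_experiment(n=100):
    """Null data: a fair coin. 'Significant' if the heads count is extreme
    (<= 40 or >= 60 out of 100 is roughly a two-sided 5% threshold)."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    return heads <= 40 or heads >= 60

runs = 1000
false_positives = sum(
    any(fake_experiment() for _ in range(20))  # 20 shots per "study"
    for _ in range(runs)
)
rate = false_positives / runs

# If each test were exactly 5%, a study with 20 tries would find at least
# one spurious "effect" with probability 1 - 0.95**20, about 0.64.
print(rate)
```

Choosing a specification after seeing the data is the same move: many implicit “tests” are run, but the probability quoted is that of a single one.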

As a side note, while the simplest specifications are arguably the best as specifications - because the more complexity you allow the more possible specifications there will be - they aren’t the best for inferring design. The simpler the specification, the more likely it is to be accounted for by a regularity. The example of pulsars comes to mind. So the pool of possible specifications - remembering that they are chosen with 20/20 hindsight - needs to be quite large.


But you can compute the probability of outcomes of various processes.

But the similarities are enough to make it improbable that the camera eye evolved twice.

With the vertebrate camera eye occurring separately in cephalopods: it is improbable for the eye to develop once, much less twice.


No, this only shows that there are subsequent mutations that improve resistance, once resistance is in place.

Only two that seem to be probable enough to occur.

Again, he observed what evolution actually did, starting with existing variation. Thus all that is needed is to observe the rate at which resistance evolves.

Well, again, the maximum likelihood estimator for the population mean is just the sample mean.

But all routes were open to evolution, none of them were ignored.

See the opening post, the program that generates a sentence from a random string.

“An irreducibly complex evolutionary pathway is one that contains one or more unselected steps (that is, one or more necessary-but-unselected mutations). The degree of irreducible complexity is the number of unselected steps in the pathway.” (Behe here)

The problem is that you are painting the bulls eye around the bullet hole, the Sharpshooter fallacy. You are focusing on just one outcome to the exclusion of all other possible outcomes.

What you and other ID proponents should be calculating is the probability of a new mutation interacting with an already existing neutral mutation and producing a beneficial phenotype. Not a specific beneficial phenotype, but any beneficial phenotype.

How many neutral mutations exist in the human population, and how many neutral mutations did humans inherit over the last hundreds of millions of years? How many new mutations will interact with these neutral mutations and produce beneficial phenotypes? It would seem to me that there are millions and millions of potential beneficial interactions, be it within the amino acid sequence of a single gene product, protein-protein interactions, protein-DNA interactions, and even protein-RNA interactions.

In addition, neutral mutations are continually becoming fixed in any given population, and before reaching fixation they can still be found in a large percentage of a population. The fixation of neutral mutations is the rule, not the exception.
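That drift behavior is easy to demonstrate. A sketch of my own (a toy Wright-Fisher model with a haploid population of 100, not anyone's actual data): a single new neutral copy either drifts to loss or to fixation, and it fixes with probability about 1/N, so with new mutations arising every generation, fixations keep accumulating.

```python
import random

random.seed(3)

def drifts_to_fixation(n_pop=100, start_count=1):
    """Follow one neutral allele until it is lost (0) or fixed (n_pop).
    Each generation resamples the population binomially (Wright-Fisher)."""
    count = start_count
    while 0 < count < n_pop:
        p = count / n_pop
        count = sum(random.random() < p for _ in range(n_pop))
    return count == n_pop

trials = 5000
fixed = sum(drifts_to_fixation() for _ in range(trials))
fix_rate = fixed / trials

# Should be near 1/N = 0.01, up to sampling noise: most new neutral
# mutations are lost, but a steady fraction reaches fixation.
print(fix_rate)
```

Scale that per-mutation probability by the number of new neutral mutations entering the population each generation, and fixation is indeed the rule, not the exception.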