Methinks it is sort-of like two weasels

Oops! You’re right. My mistake. It doesn’t make @lee_merrill’s remark any more relevant though.

2 Likes

So now I’m going to say what’s wrong with specification.

The basic idea is that a valid specification is as good as a prediction. Intuitively that might be convincing: if there’s an odd pattern and it’s really unlikely, then that’s a reason to think that something is going on. The problem is that this sort of thinking uses the wrong probability.

So, let us consider 500 coin tosses. Any specification that completely defines the sequence will meet Dembski’s universal probability bound.

A prediction of the exact sequence is fine of course - there is only one sequence that can count as a success.

But suppose we get to choose a specification after the fact? 500 heads will work. So will 500 tails. Or alternating heads and tails, either way round. That’s already four, and it’s not hard to think of more. In fact, how many there are depends on how many patterns you could find and accept as valid specifications (which has a subjective element). The probability of one of those sequences occurring is going to be much higher than Dembski’s universal probability bound. And that is really the probability you should be using.
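
To make that concrete, here is a minimal sketch. The pool sizes are purely illustrative, and each specification is assumed to be matched by exactly one sequence; the point is that the probability of some specification in the pool being matched scales with the size of the hindsight pool, not with 2^-500.

```python
from fractions import Fraction

N = 500                            # coin tosses
p_one = Fraction(1, 2) ** N        # probability of any one exact sequence: 2^-500

# Illustrative pool sizes: how many after-the-fact patterns an observer
# might accept as "valid specifications" (a subjective choice).
for k in (1, 4, 1_000, 10**20):
    p_some = k * p_one             # each spec matches exactly one sequence, so probabilities add
    print(f"{k:>25} specs -> P(some spec is matched) = {float(p_some):.3e}")
```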

The typical scientific approach would be to create a hypothesis based on the data, and then use that hypothesis to predict the result of subjecting a different, unexamined data set to the same analysis. That avoids the problem, since the specification is not based on the data it is compared against.

Dembski’s approach more closely resembles the practice of “p-hacking” (as seen on xkcd), which is frowned upon for obvious reasons.

As a side note, while the simplest specifications are arguably the best as specifications - because the more complexity you allow the more possible specifications there will be - they aren’t the best for inferring design. The simpler the specification, the more likely it is to be accounted for by a regularity. The example of pulsars comes to mind. So the pool of possible specifications - remembering that they are chosen with 20/20 hindsight - needs to be quite large.

2 Likes

But you can compute the probability of outcomes of various processes.

But the similarities are enough to make it improbable that the camera eye evolved twice.

Given that the vertebrate camera eye also occurs, separately, in cephalopods: it is improbable for the eye to develop once, much less twice.

[quote="Mercer, post:134, topic:15043"]
No, this only shows that there are subsequent mutations that improve resistance, once resistance is in place.
[/quote]

Only two that seem to be probable enough to occur.

Again, he observed what evolution actually did, starting with existing variation. Thus all that is needed is to observe the rate at which resistance evolves.

Well, again, the most likely estimator for the population mean is just the sample mean.
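
Setting aside whether it is relevant here, the narrow mathematical claim is easy to check numerically; a minimal sketch, assuming a simple binomial model with made-up counts:

```python
import math

# Sketch: for n Bernoulli trials with k successes, the binomial likelihood
# L(p) = C(n, k) * p**k * (1 - p)**(n - k) is maximized at p = k / n,
# i.e. the sample mean is the maximum-likelihood estimate of the mean.
n, k = 1000, 37                      # made-up counts, for illustration

def log_likelihood(p):
    return (math.log(math.comb(n, k))
            + k * math.log(p)
            + (n - k) * math.log(1 - p))

grid = [i / 10000 for i in range(1, 10000)]     # p values from 0.0001 to 0.9999
best = max(grid, key=log_likelihood)
print("sample mean:", k / n, "| likelihood maximized at:", best)
```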

But all routes were open to evolution; none of them were ignored.

See the opening post, the program that generates a sentence from a random string.

“An irreducibly complex evolutionary pathway is one that contains one or more unselected steps (that is, one or more necessary-but-unselected mutations). The degree of irreducible complexity is the number of unselected steps in the pathway.” (Behe here)

The problem is that you are painting the bull’s-eye around the bullet hole, the Sharpshooter fallacy. You are focusing on just one outcome to the exclusion of all other possible outcomes.

What you and other ID proponents should be calculating is the probability of a new mutation interacting with an already existing neutral mutation and producing a beneficial phenotype. Not a specific beneficial phenotype, but any beneficial phenotype.

How many neutral mutations exist in the human population, and how many neutral mutations did humans inherit over the last hundreds of millions of years? How many new mutations will interact with these neutral mutations and produce beneficial phenotypes? It would seem to me that there are millions and millions of potential beneficial interactions, be it within the amino acid sequence of a single gene product, protein-protein interactions, protein-DNA interactions, and even protein-RNA interactions.
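
The gap between those two probabilities can be enormous. A toy sketch, where p_single and n_candidates are placeholder assumptions rather than measured rates:

```python
# Placeholder numbers, for illustration only (not measured rates).
p_single = 1e-9          # assumed chance that one *specific* beneficial interaction arises
n_candidates = 10**7     # assumed number of distinct interactions that could be beneficial

p_specific = p_single                           # the "painted bullseye"
p_any = 1 - (1 - p_single) ** n_candidates      # at least one beneficial interaction of any kind

print(f"P(that one specific interaction):  {p_specific:.1e}")
print(f"P(at least one of the candidates): {p_any:.4f}")
```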

In addition, neutral mutations are continually becoming fixed in any given population, and before reaching fixation they can still be found in a large percentage of a population. The fixation of neutral mutations is the rule, not the exception.

3 Likes

But you can view events which have occurred either before the fact or after the fact. Viewed before the fact, an event can be improbable.

So you cannot compute the probability of drawing a full house, viewing this event before the fact?
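
For reference, that before-the-fact probability is perfectly computable; a quick sketch using standard card counting:

```python
from math import comb

# Full house: choose the rank for the triple and 3 of its 4 suits,
# then a different rank for the pair and 2 of its 4 suits.
full_houses = 13 * comb(4, 3) * 12 * comb(4, 2)    # 3744
hands = comb(52, 5)                                # 2,598,960

print(f"{full_houses} / {hands} = {full_houses / hands:.5f}")   # about 0.00144
```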

Well, Behe also examines HIV, where one new protein-protein binding site did evolve, as it turns out. If it’s all that easy to produce a new binding site, then HIV could be cranking them out right and left.

But again, I’m sure if we saw a flagellum-in-process somewhere, we’d hear about it.

Right, I quoted that to show that he does take into account circuitous paths.

Again, Behe is observing what evolution actually did, where all paths were open to evolution.

Viewing the event before the fact, yes. No matter when we pick a specification, the probability, viewed before the fact, will be the same.

Which you could say Behe does: after deriving a rate for chloroquine resistance, he applies this to a hypothesis concerning protein-protein binding sites, then turns to the malaria data set (both with humans and with the parasite), and to HIV, to see how the hypothesis is borne out.

Perhaps you would like to explain where Dawkins does so. The weasel program - which is covered in only a few pages in The Blind Watchmaker - is intended only to show the advantage of what Dawkins calls “cumulative selection” over “single-step selection” (which is essentially random guessing). It is not billed as or intended to be a simulation of biological evolution and Dawkins says that it is not accurate as such.

(It is interesting that criticisms focus on the relatively unimportant weasel program and not on the biomorph program that occupies the bulk of that chapter - and is intended to be closer to biological evolution, though still not a simulation).
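
For readers who have not seen it, here is a minimal weasel-style sketch of cumulative selection (not Dawkins’ actual code; the target, mutation rate, and offspring count are arbitrary choices). Single-step selection would instead draw whole 28-character phrases at random, with no carry-over of partial matches, and would effectively never finish.

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def score(phrase):
    """Number of characters that already match the target."""
    return sum(a == b for a, b in zip(phrase, TARGET))

def cumulative_selection(offspring=100, mutation_rate=0.05):
    """Each generation, mutate copies of the parent and keep the best phrase."""
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    generations = 0
    while parent != TARGET:
        children = [
            "".join(random.choice(ALPHABET) if random.random() < mutation_rate else c
                    for c in parent)
            for _ in range(offspring)
        ]
        parent = max(children + [parent], key=score)  # never discard progress
        generations += 1
    return generations

print("cumulative selection reached the target in",
      cumulative_selection(), "generations")
```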

4 Likes

That is certainly not a definition of design. Nor can it be considered evidence of design. After all, unselected steps will occur (quite frequently) and evolution is hardly going to care about whether it is including unselected steps or not.

Any argument to the contrary makes the mistake of confusing probabilities before and after the event. A particular unselected step may be unlikely before the event, but that does not mean that unselected steps do not occur nor does it mean that evolution cannot incorporate those that do occur.

2 Likes

Unsupported claim.

2 Likes

Picking a specification with knowledge of the event cannot be done before the fact. As I pointed out, the a priori probability of such a specification being met is irrelevant to inferring design. The probability of the sequence having such a specification is the relevant probability. The probability of a sequence of 500 coin tosses having such a specification is clearly much higher than 2^-500.

2 Likes

The sample mean is not relevant; you are not using a likelihood to estimate a population parameter. The interpretation you are using for the likelihood is not correct.

Routes available to evolution are not part of your calculation; you do not even name the particular pathway you claim is improbable.
You further presume that all steps are “not selectable” with no justification. You do not consider the probability of “scaffolding” that is known to produce IC structures. You do not consider sexual mixing, which can combine sequences in ways that bypass “non-selectable” steps.

Most of these calculations are not possible outside of special circumstances such as phylogenetic analysis. This is why we don’t see biologists using this approach at all. ID researchers have no special knowledge of mathematics that could make this possible. I am not the first to make these criticisms.

5 Likes

Vision can be of existential importance, and so selection is highly operative. There is plenty of intermediate functionality, so while the camera eye is complex it is not irreducibly so. Form follows function, and function responds to environmental pressure. What would be the barrier to convergent evolution of the camera eye?

4 Likes

First off, cephalopods don’t have the vertebrate camera eye; they have a distinct cephalopod camera eye. Second off, Nilsson D., Pelger S. A pessimistic estimate of the time required for an eye to evolve. Proceedings of the Royal Society of London, Series B 1994; 256:53-58.

Oh, I’d like to point out that some arthropods (and members of several other phyla too) also have a form of camera eye, and some cephalopods have camera eyes that lack some of the components of the canonical camera eye.

6 Likes

ID methods are unique in that the probability of an event-type is considered to be smaller the more often it is observed. Normally we think that events which occur more often are more likely, not less.

6 Likes

Let’s pick 500 heads in a row before the fact.

The probability of a sequence meeting a specification is what is relevant.

But who talks about the probability of having a specification? Probability is about events.

But I was responding to people here defending it.

This is definitely a definition for design; you may say it doesn’t work, but it’s still a definition. But unselected steps are indeed more difficult to get through, and the more such steps there are in a row, the more difficult (exponentially so) it is for evolution to get through them. Let’s say each step has probability .1 of getting through; then two such steps would be .01 probability, three such steps .001 probability, and so on.
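
The compounding described here is just repeated multiplication; a one-line check, assuming (as the example does) that the steps are independent and each has probability .1:

```python
# Assuming independent steps, each with probability 0.1 of getting through.
p_step = 0.1
for n in range(1, 6):
    print(f"{n} unselected step(s) in a row: {p_step ** n:.0e}")
```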