Intelligent design and "design detection"

Of course, she has nothing to say about the manipulation Doug Axe used to get his desired result in the beta-lactamase experiment, does she?

1 Like

I would not characterise Gpuccio’s argument, which you quoted below this, as a “weird” argument so much as a “not properly filled out or developed” (i.e. “vacuous”) argument, in that he provides no substantiation for his ‘inference’ of “design” from “500 bits of specific FI”.

Lacking such substantiation, this would seem to be nothing more than an Argument from Personal Incredulity – Gpuccio doesn’t understand how this Functional Information could evolve, so it could not have evolved.

Such unsubstantiated inferences seem rife within ID argumentation, leading me to the conclusion that:

ID does not detect design, so much as unconsciously detect its own advocates’ prejudices and preconceptions.

2 Likes

I was responding to your claim that « He’s not really measuring FI, just similarity, and since extant species-specific sequences evolved incrementally from ancestor-specific sequences, he can’t justify any conclusion that the extant sequence couldn’t evolve. »
And indeed, Gpuccio doesn’t claim that extant sequences couldn’t evolve, nor that they couldn’t have evolved from some ancestral forms. Rather, his claim is that a transition involving the addition of 500 bits of FI always warrants a design inference, because the probabilistic resources of the whole universe are insufficient to find a target exhibiting 500 bits of FI from an unrelated state.

EVIDENCE supporting this ludicrously grandiloquent claim?

Evolution isn’t about finding “targets”.

Are the “probabilistic resources of the whole universe” sufficient to find something “exhibiting 500 bits of FI”?

Weirdly, you go on to directly contradict this. He just says you can’t evolve extant sequences that he somehow mistakenly infers to be extremely rare. Make up your mind.

Ok so he doesn’t claim it can’t evolve, but he does claim it can’t evolve, because reasons. Why this silly self-contradiction here?

In any case, he assumes you have to sort of guess an extremely rare target in some much more limited number of guesses instead of being able to evolve towards it under purifying and positive selection.

As explained now dozens of times by multiple people, he has no way of actually measuring the FI of the “target” despite all the failed arguments to the contrary (no, conservation isn’t a measure of FI, for reasons already explained), and he is implicitly assuming there is no pathway to the target from something else that is simpler/more likely.

To use Szostak’s figure with the cone intersected by the plane, Gpuccio is basically taking things at the top of the cone and setting the threshold for function there, such that everything below it (things discarded by purifying selection) is assumed basically nonfunctional.

That’s what his sequence-similarity and/or conservation measures do: the one residue conserved at each position is assumed to be the only functional one, and variants not found are assumed to be nonfunctional. By doing this, he moves the threshold for function up in a way that makes the entire cone a smaller and smaller fraction of sequence space.

He doesn’t allow for the possibility that unseen variants are selected against because they have lower fitness but are still part of the cone of functional sequences, and he ignores epistasis as an explanation for why they got “locked in” and can’t change further. This is how he gets his high bit scores, with an insane methodology built on flawed assumptions that are diametrically opposed to how we understand the biochemistry of protein function.
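To put toy numbers on that (my illustration, not gpuccio’s actual procedure): take a hypothetical 100-residue protein and ask how many bits you get if exactly k of the 20 amino acids are tolerated at each site.

```python
import math

# Toy illustration (not gpuccio's actual code): FI in bits for a
# hypothetical 100-residue protein if exactly k of the 20 amino acids
# are tolerated at every site, i.e. length * log2(20 / k).
length = 100

def fi_bits(tolerated_per_site):
    return length * math.log2(20 / tolerated_per_site)

print(fi_bits(1))   # only the conserved residue counts: ~432 bits
print(fi_bits(4))   # 4 tolerated residues per site:     ~232 bits
print(fi_bits(10))  # broadly tolerant sites:            ~100 bits
```

Whether the unseen variants count as “tolerated” is exactly the assumption doing all the work.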

That’s bad and wrong already. But then it gets much, much worse.

Then he completely discounts the possibility that the sequences that make up the cone overlap some other, larger cone defined by performing some other, simpler function. If the extant protein is an enzyme, for example, then sequences that can bind the substrate without catalyzing a reaction are implicitly assumed not to exist. All enzymes are binders; it’s intrinsic to how they function. Binders are simpler than catalysts, so they’re more frequent in sequence space: the constraints on mere binders are weaker than those on catalysts. So he’s not measuring FI, and even if he were, he couldn’t use it to rule out evolution even with very large bit scores.

2 Likes

But you, as a designer, can just know how to find such a target, right? I mean, you’re a potentially intelligent designer. So if I specify some highly complex function we would expect to be exceptionally rare in sequence space, you can just sort of know how to make it?

That is what the inference to design is supposed to be: if it’s very difficult to just guess it randomly, and if there’s no pathway to it by incrementally guessing closer and closer versions, then we infer “design”.

Cool. Let’s do that then. Show that design can do this.

This sequence-space problem with very rare targets can be solved with your divinely created mind that can just sort of know stuff magically by some sort of revelation? You don’t have to, like, learn stuff incrementally by trial and error and all that.

Then guess me a 10-character password, please. I’m going to be generous and say there are one million different versions I’d accept. Literally one million different 10-character passwords made of the letters of the English alphabet. Any one of these 1 million versions of the password I will accept. And no random guessing, and no incremental evolution stuff.

Just do your magical design instantiation. Go!
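For the record, here is how rare that target actually is, assuming a 26-letter lowercase alphabet (a quick sketch):

```python
import math

# The numbers behind the challenge: 26^10 possible 10-letter strings,
# of which one million are accepted as correct.
total = 26 ** 10       # ~1.4e14 candidate passwords
accepted = 10 ** 6     # one million accepted versions

print(math.log2(total / accepted))  # ~27.1 bits of FI
```

About 27 bits: nowhere near 500, and still hopeless to hit in a single non-incremental guess.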

1 Like

To repeat, from 18 days ago in this thread

I was one of many who participated in the long and very tedious discussion of gpuccio’s functional information criterion. In fact I wrote two posts at TSZ (at http://theskepticalzone.com/wp/does-gpuccios-argument-that-500-bits-of-functional-information-implies-design-work/ and at http://theskepticalzone.com/wp/does-gpuccios-150-safe-thief-example-validate-the-500-bits-rule/) that led to discussion of it. In the end we found that, *unlike Szostak*, he was assuming that all sequences with less functionality than the one we saw were so much less functional that there was no evolutionary path to the observed sequence.
Technically Gpuccio is not assuming that all those sequences have *no* function, but he is an extreme skeptic of the ability of natural selection to accumulate 500 bits’ worth of functional information, in all such cases.
2 Likes

Yes, but that ‘skepticism’ seems to appear out of thin air, without any apparent substantiation or context.

By way of context, do we have any idea of the range of FI a single mutation can add, for instance? 500 bits does not intuitively seem like a large addition when you factor in that it could be added piecemeal, via multiple mutations across an organism’s entire genome, over millions of years.
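For a rough sense of scale (my back-of-the-envelope assumption, not a measured figure): suppose fixing one specific amino acid at one site adds at most log2(20) ≈ 4.3 bits.

```python
import math

# If one fully specified substitution adds at most log2(20) bits of FI,
# how many substitutions could accumulate 500 bits piecemeal?
bits_per_substitution = math.log2(20)   # ~4.32 bits
print(500 / bits_per_substitution)      # ~115.7 substitutions
```

On that accounting, roughly 116 substitutions spread across a genome over millions of years would suffice.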

Bingo, you got it! Gpuccio’s 500-bit criterion for Design detection amounts to the assumption that evolution could never do that.

3 Likes

And the fact that he chose 500 bits, not 499 or 501, tells you how much thought he put into it.

1 Like

And if we look at proteins with plenty of polymorphism, such as the beta-cardiac myosin heavy chain, the shape isn’t even a cone; it’s a plateau. This shows that gpuccio’s use of only the consensus sequence for a whole species is just ludicrous.

2 Likes

I suspect he did that because -log2(1/10^150) ≈ 500 bits, and according to one of Bill Dembski’s really silly calculations 10^150 is “the probabilistic resources of the universe”, which is supposedly something like the total number of random guesses that could have occurred in the known history of the universe. So if you have 10^150 random guesses, you can be expected to guess something no rarer than 10^-150. It’s one of the most nonsensical things I’ve ever seen.
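The conversion itself is one line:

```python
import math

# Dembski's universal probability bound converted to bits: an event of
# probability 10^-150 carries -log2(10^-150) bits of "information".
print(-math.log2(10.0 ** -150))   # ~498.3, rounded up to "500 bits"
```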

I don’t think we should confuse Szostak’s FI “cone” concept with the shape of the fitness landscape. Things that have higher FI don’t necessarily have higher fitness.

I think FI and fitness can be correlated, but they definitely aren’t always. It’s just that we can explain how something higher in FI can be reached: for example, more specific and more active enzymes can evolve from less active and less specific ones, when greater specificity and activity are beneficial.

There could in principle also be other correlations of FI that don’t necessarily have to do with activity and specificity. There might even be mutational biases that tend to push sequences into higher FI zones, without this also coming with increased fitness.

Agreed. The plateau shape would apply to both. The fitness plateau would vary more by individual because of epistasis.

That made me curious what the variant table for that protein looked like over at Ensembl. I believe you are referring to MYH7, so I went with that protein (1935 aa and 223 kDa according to the NCBI reference sequence).

Using a quick Python script, it looks like there are 2086 unique amino acid differences at 1291 loci (i.e., there are multiple known mutations at some single locations). Six of the mutations are found in more than 1% of the population in the 1000 Genomes Project. Most are low-frequency variants captured in other genome surveys, which isn’t surprising. Nonetheless, there is a ton of variation in this gene in the human population, in what I am assuming are living human beings. I have no doubt that some changes are related to known disease states, but again, that isn’t surprising.
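For anyone curious, here is a minimal sketch of the kind of script I mean; the file and column names are hypothetical placeholders, so adapt them to the actual Ensembl export:

```python
import csv
from collections import defaultdict

# Minimal sketch: count unique amino acid changes and the distinct
# protein positions they occur at. "MYH7_variants.tsv" and the column
# names are hypothetical stand-ins for the real Ensembl variant table.
unique_changes = set()
loci = defaultdict(set)

with open("MYH7_variants.tsv", newline="") as handle:
    for row in csv.DictReader(handle, delimiter="\t"):
        change = row["Amino acid change"]    # e.g. "R403Q"
        position = row["Protein position"]
        if change:
            unique_changes.add(change)
            loci[position].add(change)

print(f"{len(unique_changes)} unique amino acid differences at {len(loci)} loci")
```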

In the few genes that I have looked at, this is often the case. There’s tons of low-frequency (low-penetrance?) variation in human genes. One has to wonder how Gpuccio and others deal with this.

2 Likes

Yup. It’s amazing given its importance and functional complexity. There’s likely a lot of evolution still going on.

But again, most of the disease-causing variants have very low penetrance. We’re only looking at disease and not looking for increased fitness.

They ignore it, of course.

I think the reason it’s a round number is that he (and like-minded IDers) think it’s such a big number that it really doesn’t matter whether it’s one smaller or larger – it’s the difference between really impossible and really really impossible. It’s the difference between “the probabilistic resources of the whole universe” being “insufficient” and those of a slightly smaller or larger universe being insufficient.

Their mental block against this being even conceivably possible appears to be so strong that they seem unable to conceive that people might demand substantiation or evidence supporting their assertion.

Well, he’s absolutely correct about the people he wants to convince not needing any evidence. @Giltil certainly needs none and he is probably typical.

I guess you think that purely natural processes can produce objects exhibiting 500 bits of FI, don’t you? Well, what is your evidence for this?

If natural processes can produce any FI then it can accumulate over time. There is no good reason to postulate an arbitrary limit. Even Dembski would readily admit as much.
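A toy model makes the point (a sketch of cumulative selection, not anyone’s published code): a 10-letter target whose single-guess odds are 26^-10, about 47 bits of FI, is typically reached in several hundred mutations once selection keeps the closer variant.

```python
import math
import random
import string

# Toy model of FI accumulating under cumulative selection: mutate one
# random position at a time and keep the mutant if it matches the
# target at no fewer positions (neutral drift allowed).
random.seed(0)
TARGET = "methinksit"   # 10 lowercase letters

def score(s):
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

current = "".join(random.choice(string.ascii_lowercase) for _ in TARGET)
steps = 0
while current != TARGET:
    i = random.randrange(len(TARGET))
    mutant = current[:i] + random.choice(string.ascii_lowercase) + current[i + 1:]
    if score(mutant) >= score(current):
        current = mutant
    steps += 1

print(f"target FI: {len(TARGET) * math.log2(26):.1f} bits")  # ~47.0
print(f"reached in {steps} mutations")
```

Each accepted substitution adds at most ~4.7 bits, but nothing stops the running total from passing any threshold you care to name: 47 bits here, 500 with a longer target and more time.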

1 Like