Intelligent design and "design detection"

So, to summarise.

Only Dembski’s argument is an attempt at a general method of detecting design.

Ewert’s argument is the only one to use actual design principles.

Excepting the Fine Tuning argument, all the others are just attacks on evolution, and even Dembski and Ewert primarily targeted evolution. The attempt to revive gpuccio’s failed argument suggests only that ID has nothing better to offer.

I think we may fairly say that Intelligent Design is not “design detection”. Intelligent Design is opposition to the science of evolution.

7 Likes

No

No. IDers carefully examine all explanations on the table and rank them for their explanatory power. IOW, IDers use a well-known line of reasoning in the historical sciences known as inference to the best explanation.

I disagree that Gpuccio’s argument failed.

But none of your excuses work. They amount to bare assertions. Like when you declare with no supporting mathematics that you can estimate FI from conservation. You just say that. You have no basis for that assertion.

3 Likes

Obviously none of the other arguments is a general argument for design. Design does not have to use modules in the way Ewert’s argument requires, for instance.

I wonder how you can say “No” when it is a fact that almost all ID arguments do target evolution. And none of the arguments listed uses inference to the best explanation. Indeed, ID seems to prefer arguing against alternative views - Dembski’s Design Inference is purely negative, relying on taking design as an unexamined default.

Gpuccio does not measure Functional Information as defined by Szostak. Nor does he really measure conservation, which he - wrongly - attempts to use as a proxy. The data he uses - quite aside from its other obvious problems - has such extremely coarse temporal resolution that he can’t argue for functional information appearing “all at once”; he would instead need to establish that the rate of accumulation - a rate he never calculates - is too great. I think that in aggregate those problems must be considered fatal to his argument.

2 Likes

They just never offer a design explanation. “Evolution fails, therefore design” isn’t an inference to the best explanation; it’s missing the explanation part.

3 Likes

I take offense at that, since I have explained multiple ways in which his argument was fatally flawed, some of that in response to direct requests from you. Of course, I’m far from alone in that, but my part is what I take personally. There was no response to my last explanation. Did you miss it or are you just ignoring it? Is it even worth responding to you if that’s how you treat responses?

1 Like

Sorry for not responding to your last explanation, but I’m quite busy right now. I’ve nevertheless considered it as carefully as I can and thought you were probably right on this particular point. But IMO this point is a minor one that doesn’t really affect Gpuccio’s argument, for it doesn’t really matter whether the transition to vertebrates occurred within a window of 80, 100 or 120 my.

To answer this question, please consider that, recognising your great knowledge of most of the subjects discussed here and your sharp mind, I always pay close attention to and value what you say. I am grateful for most of the exchanges I have had with you here at PS over the last few years, for they have contributed to my education. But that doesn’t mean I agree with everything you say.

I will admit to feeling distinctly “derogatory” about this ludicrously “unsubstantiated comment”. All evidence to date seems to be that “Gpuccio’s argument” is “not even wrong”. Certainly no evidence has been presented linking Gpuccio’s idiosyncratic graphs to Szostak’s definition of Functional Information, let alone to Design – any such linkage appears to be solely by way of unsubstantiated assertion.

That was only one of many points, and you’re wrong even about that one; it’s crucial to his claim of a sudden jump in “information”. The big jump on the graph disappears if you put in the real time scale. He has it happening in something like 10 million years when it’s really more like 150 million, maybe more. Your attempt to radically minimize that difference is noted. Also, that isn’t the difference between vertebrates and non-vertebrates. It’s the difference between sharks and non-chordate deuterostomes, a much greater transition for which we have many living intermediates that he ignores. And don’t forget that his measure of “information” isn’t a measure of information; it isn’t even a measure of conservation.
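To put that stretch in numbers (using the figures above, and holding the accrued “bits” constant): spreading the same increase over roughly 150 My rather than 10 My makes the apparent rate about fifteen times lower, so the “jump” flattens into a slope.

$$
\frac{\Delta\,\text{bits}}{10\ \text{My}} \;=\; 15 \times \frac{\Delta\,\text{bits}}{150\ \text{My}}
$$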

I’m afraid that you don’t. If you really paid attention you wouldn’t make so many errors about what’s being said. Blind rejection is not paying attention.

2 Likes

Well, it certainly matters for how steep the curve looks, and it’s that steepness that you think implies a “sudden jump” (as opposed to a longer, more gradual increase) in “information”. But you still don’t have a measure of information.

Sequence similarity isn’t a measure of information. Nor is conservation. The probability of randomly guessing a particular sequence (or a collection of them) isn’t the same thing as the probability of evolving something functional. It never will be. For that reason Gpuccio’s method will always remain a total failure.

1 Like

I disagree. Similarities conserved throughout long evolutionary periods (more than 400 my) denote positions that have been preserved by purifying selection, which reflects very strong functional constraints. So there is a close connection between the level of conserved similarity through deep time and functional information.

But Gpuccio isn’t even measuring that. All he is measuring is similarity, not conservation.

That might be true if you were actually measuring sequence conservation and if the sequences you examined actually were an adequate sample of sequence space. But neither of those things is actually true. Any such measure certainly doesn’t match Szostak’s definition, which requires exhaustive examination of both the sequence and the degree of function of all possible proteins.
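For reference, Szostak’s measure, as it is usually written, is:

$$
I(E_x) \;=\; -\log_2 F(E_x),
\qquad
F(E_x) \;=\; \frac{\text{number of sequences with activity} \ge E_x}{\text{number of all possible sequences}}
$$

The denominator runs over the entire sequence space (20^L for a protein of length L), which is exactly the part that conservation among a handful of extant homologs cannot stand in for.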

Of course, you tend to ignore any point I make. But I guess you tend to ignore any point anyone makes.

Sure, it implies those are the positions that have higher fitness.

That just flat out doesn’t follow. For all the reasons stated previously. To which you had no response.

To briefly reiterate the reasons why you’re not measuring information, and why the inference to design from FI logically cannot work:

  1. Evolution has only sampled a tiny portion of sequence space, so you can’t use conservation to show there is no function out there in very dissimilar sequences (we know of many examples of entirely dissimilar sequence-structure combinations that perform the same function, such as enzymes that catalyze a specific reaction).
  2. Different sequences have different fitnesses, so one being discarded in favor of another doesn’t mean the discarded, lower-fitness sequences didn’t strictly “work”, just that they had lower fitness. A higher-fitness sequence can evolve from a lower-fitness one.
  3. Many different functions overlap in sequence space (all enzymes overlap with simpler binders, for example).
  4. Ultimately this also comes back to the Texas sharpshooter fallacy, as you’re trying to calculate the probability of randomly guessing a sequence performing a particular function on a particular hill, instead of randomly guessing any new fitness-improving function from the ancestral state.
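A toy illustration of point 4, with made-up binary “sequences” that have nothing to do with real proteins: the number of “bits” you get depends entirely on whether you specify the target function before or after the fact.

```python
import math
from itertools import product

L = 12                                        # toy "sequence" length, binary alphabet
space = [''.join(p) for p in product('01', repeat=L)]
N = len(space)                                # 4096 possible sequences

# Pretend there are several distinct ways to be "functional": containing
# any one of these motifs counts as performing *some* beneficial function.
motifs = ['0000', '1111', '0101', '1010', '0011', '1100']

# Probability of hitting ONE pre-specified function (the sharpshooter's bullseye)...
p_specific = sum('0000' in s for s in space) / N
# ...versus the probability of hitting ANY of the available functions.
p_any = sum(any(m in s for m in motifs) for s in space) / N

print(f"-log2 P(that particular function) = {-math.log2(p_specific):.2f} bits")
print(f"-log2 P(some function or other)   = {-math.log2(p_any):.2f} bits")
# Post-specifying the target inflates the apparent "functional information".
```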
1 Like

Shall we also point out that Gpuccio and you suffer from human exceptionalism syndrome? By his measure, any human sequence is the pinnacle of functionality (or has the most functional information) and that of any other species is less functional, increasingly so as the date of divergence increases. Is that in any way rational? Do you really think that tunicate cytochrome c is less functional than trout cytochrome c?

1 Like

You can rest assured, Gpuccio doesn’t suffer from human exceptionalism syndrome with regard to his methodology. To see that, I invite you to have a look at his post no. 37 in the thread below.

I’m afraid you don’t understand what he says there. His BLAST bitscore is a measure of similarity between a human protein sequence (entered as the search image) and a presumably homologous other sequence (which BLAST finds by comparing similarities in its database). The more similar to human, the higher the bitscore. Thus if bitscore is considered a measure of functional information, the human sequence has the most functional information of all, being 100% similar to the human sequence. It’s possible that Gpuccio doesn’t understand what he did either, but that’s what he describes. Equating this with Szostak’s measure of functional information is pure fantasy.

Why, just look at the graphs. All those supposed increases of functional information just demonstrate increasing similarity to human proteins. That’s all they show.
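Here’s a stripped-down illustration of that circularity, with invented toy sequences and a plain identity count standing in for the bitscore (this is not Gpuccio’s actual pipeline, just the logic of scoring everything against a human query):

```python
HUMAN_QUERY = "MKTAYIAKQRQISFVKSHFSRQ"        # invented "human" protein fragment
SUBJECTS = {                                  # invented comparison sequences
    "human":    "MKTAYIAKQRQISFVKSHFSRQ",
    "shark":    "MKTGYIAKQRQLSFVKAHFSRQ",
    "tunicate": "MKSGFLAKERQLTYIKANFAKQ",
}

def toy_score(query, subject):
    """Count identical positions -- a crude stand-in for a BLAST bitscore."""
    return sum(q == s for q, s in zip(query, subject))

for name, seq in sorted(SUBJECTS.items(), key=lambda kv: -toy_score(HUMAN_QUERY, kv[1])):
    print(f"{name:9s} score = {toy_score(HUMAN_QUERY, seq)}")

# The human sequence tops the list by construction. Read such a score as
# "functional information" and humans automatically have the most of it,
# for every protein.
```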

2 Likes

Agreed. We’ve been through this before with Gil. If there is a “close connection,” then myosin has far less FI than actin, which is absurd.

Gil and Bill also never explained why similarity between species counted but similarity within a species (humans, even!) didn’t. That’s even more absurd.

Agreed. Gil, can you at least concede that?

Gil certainly has ignored every point I’ve made regarding this.

Your conclusion appears to be overstated. It would be more accurate to state:

So there may be a connection between the level of conserved similarity through deep time and functional information.

As far as I can ascertain, neither you nor Gpuccio have compared your proxy measure to FI calculated using Szostak’s definition – so you and Gpuccio are merely ASSUMING that there is a “close connection”.

  1. Even assuming that there is a genuine “connection” there, you have no way of knowing how much noise there is in the correlation.

  2. You have no way of knowing whether the relationship between the two would be linear, logarithmic, exponential, etc. – so no basis for assuming that the relationship would be one-to-one.

  3. You haven’t ruled out what else your proxy measure might be “closely connected” to – so have no way of telling if it is picking up spurious variations that are unrelated to variations of FI.

These are the sorts of things that would need to be nailed down, before Gpuccio’s argument has any hope of garnering any credibility.
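As a sketch of what that nailing-down would even look like (everything below is simulated, since nobody has true FI values for these proteins): you would need paired measurements of the proxy and of Szostak-style FI, and then you would have to characterise the noise and the shape of the relationship rather than assume it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "true" FI values (in bits) -- purely hypothetical.
true_fi = rng.uniform(0, 1000, size=200)

# A proxy that is genuinely related to FI, but saturating and noisy.
proxy = 300 * np.log1p(true_fi / 50) + rng.normal(0, 60, size=200)

r = np.corrcoef(true_fi, proxy)[0, 1]
print(f"correlation r = {r:.2f}")
# A strong correlation ("a close connection") -- and yet the mapping is
# non-linear, compresses the top of the scale, and carries sizeable noise,
# so reading proxy units off directly as bits of FI would be unjustified.
```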

Gpuccio made clear several times that he doesn’t estimate the totality of FI, but only the human-specific FI. Let me remind you what Gpuccio said in post no. 37:
I am not saying that vertebrates are in any way special. I am not saying that humans are in any way special (well, they are, but for different reasons).

2. It should be clear that my methodology is not measuring the absolute FI present in a protein. It is only measuring the FI conserved up to humans, and specific to the vertebrate branch.

So, let’s say that protein A has 800 bits of human conserved sequence similarity (conserved for 400+ million years). My methodology affirms that those 800 bits are a good estimator of specific FI. But let’s say that the same protein A, in bees, has only 400 bits of sequence similarity with the human form. Does it mean that the bee protein has less FI?

Absolutely not. It probably just means that the bee protein has less vertebrate specific FI. But it can well have a lot of Hymenoptera specific FI. That can be verified by measuring the sequence similarity conserved in that branch for a few hundred million years, in that protein.