I don’t understand why you’d stipulate that each would have to be; @glipsnort’s hypothetical case is much better supported by the data than yours.
I’ve been studying the functions of mutant proteins (both designed and natural) for decades, and I don’t know of a single case in which every base or every amino-acid residue must be specific.
For example, we know from the largest data set (hundreds of trials with catalytic antibodies) that one can routinely find a specific enzymatic activity in a library of just 10^8 random sequences (2 x 110-residue variable domains). As these are severely constrained by having to fit into the structure of an antibody, this is surely an underestimate of the typical target space for your purposes.
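To put that hit rate in FI terms (my arithmetic, using the Hazen/gpuccio-style definition FI = -log2(fraction of functional sequences); this is only back-of-the-envelope):

```python
import math

# Hazen-style functional information: FI = -log2(target fraction).
# One functional hit per ~10^8 random sequences screened (the
# catalytic-antibody figure above) corresponds to a modest FI:
hit_rate = 1e-8
fi_bits = -math.log2(hit_rate)
print(f"FI of a routinely findable enzymatic activity: ~{fi_bits:.1f} bits")  # ~26.6
```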
You appear to be assuming that there can be only one “target.” The evidence says that such an assumption is untenable.
The other incorrect detail here is the stipulation that the target can only be found once, when what we want is the probability that the target is found at least once. For the very small probabilities assumed by ID it is not a big difference (a factor of 3.14), but for more common features the same calculation can give probabilities greater than one. Dembski actually gives an example that generates negative CSI, implying he calculated a probability greater than one.
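A quick sketch of the difference (the probabilities and trial counts below are made up purely for illustration):

```python
import math

def p_at_least_once(p, n):
    """Probability the target is found at least once in n independent trials."""
    # 1 - (1-p)^n, computed via log1p/expm1 to stay accurate for small p.
    return -math.expm1(n * math.log1p(-p))

def naive(p, n):
    """Multiplying p by the number of trials (the calculation being criticized)."""
    return p * n

# Rare target: the two agree closely, so ID-scale numbers barely change.
print(p_at_least_once(1e-12, 1000))  # ~1.0e-09
print(naive(1e-12, 1000))            # 1.0e-09

# Common target: the naive version is no longer a probability at all.
print(p_at_least_once(0.01, 500))    # ~0.993
print(naive(0.01, 500))              # 5.0
# Plugging 5.0 into CSI = -log2(p) gives about -2.3 bits: negative CSI.
```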
I was referring to the evolutionary algorithm used to simulate the robots walking. However, if I have inadvertently stumbled across a topic worth elevating, I don’t mind!
I think you will find gpuccio’s answer in the main thread.
But maybe I can give you the answer.
The question is whether the claim “high FI signifies design” is warranted.
The answer is yes, both theoretically and empirically. Theoretically, because there are not sufficient probabilistic resources in the universe to find a target with FI > 500 bits.
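For readers wondering where a threshold like 500 bits comes from, here is the usual back-of-the-envelope arithmetic, assuming Dembski-style estimates of the universe’s probabilistic resources (~10^80 particles, ~10^45 state changes per second, ~10^25 seconds):

```python
import math

# Dembski-style estimate of the total events available in cosmic history
# (assumed figures: ~10^80 particles x ~10^45 transitions/s x ~10^25 s).
total_trials = 1e80 * 1e45 * 1e25          # = 1e150
bits_of_resources = math.log2(total_trials)
print(f"~{bits_of_resources:.0f} bits of probabilistic resources")  # ~498

# A 500-bit target occupies 1 part in 2^500 of its search space:
print(f"2^500 ~ 10^{500 * math.log10(2):.0f}")  # ~10^151
```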
Empirically, because there is not a single example of an object exhibiting such high FI ever being produced without design; each time the cause of an object with high FI is known, this cause is invariably an intelligent cause.
In fact, the statement that high FI signifies design may well come to be seen as a scientific law, on par with the second law of thermodynamics!
In the context of @gpuccio’s main thread, this leads us to the only real question, i.e., is the FI associated with some well-defined transitions in life’s history high enough to warrant a design inference? For the answer, stay tuned to the ongoing dialogue.
I’m sorry, but the concept of a “probabilistic resource of the universe” is incoherent. Technically, there isn’t any event with a non-zero probability that couldn’t happen; that’s what it means to say it has a non-zero probability: it may be very unlikely, but it can still happen.
Another issue is that no attempt is made at evaluating the probability of the design event. When somebody comes to you and asserts that A is unlikely, so B must be the case, you need them to state the probability of B and ask them to show their work. If I am supposed to pick the more likely option, I need an actual probability.
It may be the case that the probability of A happening “by chance” is extremely low, but it may be the case that B happening by design is just as low, or even lower. How do you determine the probability of design? You seem to be assuming it’s high, I want to know how you know that.
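To make that concrete, here is the comparison being skipped, with placeholder probabilities (none of these numbers come from anywhere; that is the point):

```python
# All numbers are placeholders -- which is exactly the problem:
# no one arguing for design has shown their work for p_design.
p_chance = 1e-160                 # asserted: astronomically unlikely
for p_design in (1e-200, 1e-160, 1e-3):
    ratio = p_design / p_chance
    verdict = "design favored" if ratio > 1 else "design not favored"
    print(f"p_design={p_design:g}: likelihood ratio {ratio:g} -> {verdict}")
```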
A third issue is with the idea of targets that need to be found, as if this particular outcome is the only one possible and it has to be either this one or none at all. As opposed to a situation where a very large ensemble of equally complex but functional outcomes exist, and the one we got is just one among many possibilities.
Empirically, because there is not a single example of an object exhibiting such high FI ever being produced without design; each time the cause of an object with high FI is known, this cause is invariably an intelligent cause.
I raise the empirical counterexample of the complete genome sequence of me, and you, and anyone currently alive, compared to the genome sequence of our ancestor ten generations ago.
It is estimated that every human being is born with approximately 100 novel mutations. What is the probability that any of us gets the particular 100 mutations we do? Assuming only substitutions to keep it simple, it’s roughly one in (3.2x10^9 * 3)^100 = ~1.7x10^998.
Now compound the probabilities over ten consecutive generations: (1.7x10^998)^10 ≈ 2x10^9982.
Roughly one in two times ten to the nine-thousand-nine-hundred-and-eighty-second power.
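The same arithmetic done in log space, so nothing overflows (this just re-derives the numbers above):

```python
import math

genome_sites = 3.2e9   # base pairs in the human genome
alt_bases = 3          # possible substitutions per site
mutations = 100        # novel mutations per generation (substitutions only)

# log10 of the number of equally likely 100-mutation outcomes per generation:
log10_per_gen = mutations * math.log10(genome_sites * alt_bases)
print(f"one in ~10^{log10_per_gen:.1f} per generation")   # ~10^998.2

# Compounded over ten consecutive generations:
print(f"one in ~10^{10 * log10_per_gen:.1f}")             # ~10^9982.3
```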
As should be pretty obvious, unlikely genome sequences evolve incrementally by accumulating mutations. They aren’t created all at once in some giant random assembly. At each generation, the outcomes of mutation are filtered by natural selection. That’s how we can get what looks like very unlikely functional sequences.
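A toy illustration of that point (a Dawkins-style “weasel” sketch of cumulative selection; the fixed target string is purely pedagogical, since real selection has no fixed target):

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"   # illustrative target only
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
RATE = 0.05                               # per-character mutation probability

def mutate(s):
    return "".join(random.choice(ALPHABET) if random.random() < RATE else c
                   for c in s)

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    generation += 1
    # keep the best of the parent plus 100 mutant offspring (selection)
    parent = max([parent] + [mutate(parent) for _ in range(100)], key=fitness)
print(f"matched a 1-in-27^{len(TARGET)} string in {generation} generations")
```

A blind all-at-once draw would need ~27^28 (about 10^40) attempts; filtering each generation’s variants gets there in under a couple hundred.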
In fact, the statement that high FI signifies design may well come to be seen as a scientific law, on par with the second law of thermodynamics!
Am I now supposed to think that the universe has some sort of law on par with the 2nd law of thermodynamics that says I can’t have ten consecutive ancestors?
In the context of @gpuccio’s main thread, this leads us to the only real question, i.e., is the FI associated with some well-defined transitions in life’s history high enough to warrant a design inference? For the answer, stay tuned to the ongoing dialogue.
I think the transition from my ancestor ten generations back to me is a pretty well-defined transition. And the specific probability of my genome, compared to that ancestor’s, would have appeared incomprehensibly small ten generations ago. We could imagine having been present back then and asking my ancestor to calculate the probability that he would have a descendant ten generations in the future with those specific ~1000 novel mutations, out of all the combinations of 1000 such mutations that are possible.
He would get roughly one in two times ten to the nine-thousand-nine-hundred-and-eighty-second power.
Yet it clearly happened. Here I am.
Whether you look at the evolutionary history of individual proteins, or at the complete genomes in which the genes encoding those proteins reside, you can see how, through many iterations, a sequence that looks very unlikely and has high FI could have evolved. It’s really no issue at all.
I am not sure there is an “evolutionary algorithm”…
Whatever happens in a Turing machine is controlled, including the feedback systems which play the role of the “environment”.
Evolution does not have that (the environment changes randomly). I don’t see how it can be seen as an algorithm.
Every functioning human immune system finds a target with FI>500 bits. The target is a set of antibodies that can effectively respond to hundreds of different pathogens. If an analysis concludes that hitting such a target is impossible, the analysis is wrong.
That fails because evolution doesn’t have specific targets.
That also fails because biological systems show such high FI values without being designed. Objects produced with evolutionary algorithms also show that the process can increase FI, with the amount limited only by how long the algorithm is allowed to run.
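A minimal sketch of that last point, with FI measured gpuccio-style as -log2 of the fraction of sequences scoring at least as well (the bit-counting fitness function is a toy, assumed purely for illustration):

```python
import math, random

N = 200  # bitstring length; the maximum attainable FI here is N bits

def fi_bits(k):
    """FI of 'at least k ones': -log2(fraction of N-bit strings scoring >= k)."""
    tail = sum(math.comb(N, i) for i in range(k, N + 1))
    return N - math.log2(tail)

genome = [random.randint(0, 1) for _ in range(N)]
for step in range(1, 5001):
    child = genome[:]
    child[random.randrange(N)] ^= 1       # one random bit-flip (mutation)
    if sum(child) >= sum(genome):         # selection keeps the fitter string
        genome = child
    if step % 1000 == 0:
        print(f"step {step}: FI ~ {fi_bits(sum(genome)):.1f} bits")
# FI climbs with runtime, capped only by the genome length N.
```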
Yes, but it is clear @gpuccio has lots and lots of things he wants to tell us about before he gets to the big reveal, if he ever does. You know and I know what a big disappointment that is going to be. But this should prove to be a very enlightening demonstration of how ID’ers think science works.
Indeed, I have already answered questions #2 and #3 in detail. I have also answered question #1 briefly, but I still have to fill in the details. I will do that in the next post.
I see three curves, what are they derived from? Are those names of clades on the red dotted line? What’s “cart”? “afro”? “mars”? Should there be a line between them? I’m getting DAP vibes from this.
I get some vague sense that it’s about some proteins that grew in size during their previous 500 million year history in some lineages in some clades. Okay. And?
I will agree you have written lots and lots and lots of words in response to those questions. As to whether those questions have been answered, I suppose that is up to the individual reader to decide.
To his credit, he has willingly painted himself into a corner in which his claim could easily be refuted in no uncertain terms. ID Creationists are not usually so willing to do this.
The real test, of course, will be his response when the refutation is made…
If any of you scholarly types who are allowed to comment is paying attention here, I would like to know why gpuccio is using the crude measure of pairwise BLAST scores rather than actual reconstruction of sequences on a phylogenetic tree.
We asked him this too. We are waiting for him to respond. I’m going to give this some more time to play out and then summarize the issues. This will be on the list for sure.
This is a very strange condition upon which to insist. The issue is whether evolution (which produces only “biological objects”) can produce 500 bits of FI, or whether this can only be produced by “design.” No one is suggesting a third option, so even if you rule out such third options, that does not help resolve the central question in any way.
E.g., if it were demonstrated that the bacteria in the Lenski long-term evolution experiment had produced 500 bits of new FI, this would not meet the criterion you are demanding. Yet it would be a clear refutation of your “design” argument.
I disagree. The search space is going to depend on the pre-existing genome, and we simply don’t know enough about genomes 400 million years ago to do this, at least from what I can see. You seem to be assuming that functional sequences emerge de novo from previously non-functional DNA, but this certainly doesn’t have to be the case. There could have been precursor genes with functional tertiary structures that were then modified through subsequent mutations. Such a precursor gene could have been lost to history. If there were billions of years of evolution leading to a gene that was one mutation away from a novel function, then you would need to factor those billions of years into your calculations.
Those approximations are wild guesses, at best. On top of that, your target space needs to incorporate all functional space, not just the function that did evolve. Nature is full of species that found different strategies for adapting to environmental challenges, and I would suggest this extends to the molecular level. For example, intron splicing is just one possible function that could have arisen. A completely different method for dealing with misfolded proteins could have emerged.
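To see how much the target-space assumption matters, here is the arithmetic with hypothetical numbers (nothing below is measured; it only shows the sensitivity of the FI estimate):

```python
import math

# Hypothetical: the one function that did evolve occupies 1e-150 of
# sequence space, giving the headline figure of ~498 bits.
single_target_fraction = 1e-150
fi_single = -math.log2(single_target_fraction)
print(f"FI counting only the function that evolved: ~{fi_single:.0f} bits")  # ~498

# Hypothetical: 10^30 alternative functions of similar size (different
# splicing mechanisms, different misfolding responses...) would also work.
alternatives = 1e30
fi_any = -math.log2(single_target_fraction * alternatives)
print(f"FI counting all functional outcomes: ~{fi_any:.0f} bits")            # ~399
```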