They also have to concede that our counterexamples invalidate the point. This is not true for all algorithms.
When constraints are fixed and the problem domain is of tractable dimensions, evolutionary algorithms do tend to converge on local maxima.
Meanwhile, in the real world, many different populations are “exploring” the problem space, and the constraints are always changing if only because many of the constraints are the result of the interactions between populations. As constraints change, the maxima change. Thus evolution has been continuing since life appeared.
In some situations the constraints may change more slowly than in others, leading to episodes of slower evolution. But the only constant is change, as the old saying goes.
EDIT: In the real world, the dimensionality of the problem domain is vastly larger, hence far less tractable.
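The contrast between fixed and shifting constraints can be shown with a toy sketch (everything here - the quadratic fitness function, the mutation size - is a made-up illustration, not anyone's actual model): a minimal EA converges near a fixed peak, and when the peak moves, the population tracks it rather than staying put.

```python
import random

def fitness(x, peak):
    # Fitness is highest near the current peak; the moving peak stands in
    # for constraints that change as populations interact.
    return -(x - peak) ** 2

def evolve(pop, peak, generations=200, mut=0.2):
    for _ in range(generations):
        # Each individual mutates; selection keeps the fitter of the two.
        pop = [max((p, p + random.gauss(0, mut)), key=lambda v: fitness(v, peak))
               for p in pop]
    return pop

random.seed(1)
pop = [random.uniform(-5, 5) for _ in range(20)]

pop = evolve(pop, peak=2.0)    # fixed constraints: population settles near 2.0
pop = evolve(pop, peak=-3.0)   # constraints change: the maximum moves, and so does the population
print(sum(pop) / len(pop))    # population mean now tracks the new optimum
```

Nothing here is open-ended, of course - the fitness function is handed to the algorithm - which is exactly the limitation being discussed below.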
What you are describing is called open-ended evolution. No one has been able to model this with EAs yet. What this means is that the algorithms used in EAs either do not represent, or are not powerful enough to achieve, open-ended evolution.
It’s assumed that “evolution” has achieved all novelty in life… the problem is that the mechanisms or algorithms are not completely defined.
Think of binding as a continuum, not a step function.
This claim does not make any sense.
But what I do with water molecules is what you do with amino acids when you estimate FI in proteins, and moreover exactly what you do when you estimate the probabilistic resources in the biosphere (as in here). There is no conceptual difference.
What is the sequence of water molecules you are observing? What is the sequence you are comparing it to?
Exactly, and even when they bind to each other, the molecules don’t spend the entire time in a bound state; they are only loosely associated with each other as they jiggle around in Brownian motion. We have this tendency to see things as though they were macroscopic objects. When I put a magnet on my fridge it stays there and doesn’t move at all. It’s just not like that at the atomic scale. Molecules jiggle around each other much more loosely, even down to single electrons spending more time around some atoms than others.
It’s not the sequence, it’s the number of ways that the constituents (amino acids in proteins, water molecules in tornadoes) are constrained in order to fulfill a particular function (enzyme activity, or tornado formation). The sequence calculations ID proponents use are just this - the only difference is the nature of the constraints and the relationships between constituents.
Your point is well-taken, but I am not sure this is accurate. I believe that some work has been done with EAs over a slowly changing constraint landscape, but I will need to check the literature and get back to you. Perhaps someone else is more up-to-date on the literature and can help us.
Evolutionary biologists definitely do not make this assumption. I am puzzled as to why you would assert otherwise.
It has been approximated. No one in any branch of science is able to give perfectly precise mechanisms or algorithms.
What gpuccio was arguing is that the cause of the constraints could be just a few factors.
The molecular micro states are not static. They are not acting as information that can be repeatedly accessed.
As a result you cannot compare one micro state to another the way gpuccio compares protein sequences, and thus you cannot estimate FI from them.
All this being said, I think you have posted a very interesting challenge.
ok. i was not aware of that. i also hope that i understood your explanations. but in that case i have a question: why do they conclude that, according to their result, one out of 10^11 proteins will bind ATP, if any protein can bind to ATP? (or in their own words: "This selection yielded four new ATP-binding proteins") im pretty sure that the other proteins that gave them weak binding will not work in vivo. am i right?
“Hubert Yockey has done a careful study in which he calculated that there are a minimum of 2.3 x 10^93 possible functional cytochrome c protein sequences”-
so it is clearly about function. not about sequence.
yep. my mistake. i was referring to the number of possible cytochrome c sequences, of course. and since there are about 10^93 possible sequences that can perform that function out of 10^130 possible combinations, the chance to get that function again is 1 in 10^37.
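The ratio above can be checked directly. A quick sketch (just arithmetic on the numbers quoted in this exchange - ~10^93 functional sequences out of ~10^130 possible ones - not an independent estimate):

```python
import math

functional = 10 ** 93    # Yockey's estimate of functional cytochrome c sequences
possible = 10 ** 130     # all possible sequences of that length

chance = functional / possible   # probability of hitting the function at random
fi_bits = -math.log2(chance)     # the same ratio expressed as functional information

print(chance)    # 1e-37, i.e. 1 in 10^37
print(fi_bits)   # ≈ 123 bits
```

Worth noting: on these numbers the FI of the cytochrome c function comes out around 123 bits, well under the 500-bit threshold discussed elsewhere in the thread.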
i do remember that they are coded by the same genes. i don't think that it's so relevant, since even these 10^93 sequences of cytochrome c are not identical in their structure.
i do think that it's the norm. remember that with the few examples we have so far, we got these high numbers. so it's clearly not exceptional and represents the rule.
No, it is not assumed. It is demonstrated beyond reasonable doubt by the empirical evidence, according to the standards of science. That an “algorithm” of the sort you demand is required is just a rule you’ve made up to try to justify your denial of this fact.
I think he meant to say that the novelties currently known from the diversity of life are all thought to have evolved. Not that extant life has evolved all possible functions.
Of course, it’s also not assumed. Biologists have good reasons for thinking that.
This was originally a side question about EAs generating FI and what constitutes a sequence, which led to a discussion of Turing machines and sequences. This is an important topic in computer science theory and Information Theory, but not crucial to the current discussion so long as we all understand what is meant by a “sequence”.
If you are bored you can follow this link to the source and chase the comments full circle.
In the sense of computation, evolution is performing the same sort of calculations as any computer (formally: evolution is “Turing complete”). We don’t know what evolution is computing, other than “life” in a very abstract sense. Please don’t ask me to defend that because I wouldn’t know where to begin!
@colewd, I am not sure you really understand what @gpuccio does when he estimates FI. “Cause” does not enter into things. @gpuccio counts targets and calculates ratios - this is just what I did with the water molecules in a tornado. Or, @gpuccio counts states, which is also what I did with the water molecules in a tornado. You are in essence claiming that FI cannot be calculated in the ways @gpuccio does.
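For what it's worth, "counting targets" and "counting states" give the same number, whatever the system. A toy sketch (a 4-letter alphabet and a made-up "function", nothing biological): FI computed by enumerating targets matches FI estimated by sampling random states.

```python
import math
import random

random.seed(0)
ALPHABET = "ACDE"   # toy 4-letter alphabet, not real amino acids
LENGTH = 8

def is_functional(s):
    # Toy "function": the sequence carries an arbitrary two-letter motif up front.
    return s.startswith("AC")

# Counting targets: functional sequences over all possible sequences.
targets = 4 ** (LENGTH - 2)      # sequences fixed to start with "AC"
total = 4 ** LENGTH
exact_fi = -math.log2(targets / total)

# Counting states: sample random states and measure the hit rate.
n = 200_000
hits = sum(is_functional("".join(random.choices(ALPHABET, k=LENGTH)))
           for _ in range(n))
sampled_fi = -math.log2(hits / n)

print(exact_fi, sampled_fi)   # both ≈ 4 bits for this toy function
```

The point of the sketch is only that the arithmetic is indifferent to whether the "states" are protein sequences or water-molecule micro states; whether the counting is *meaningful* for tornadoes is the question the thread is arguing about.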
I think you have valid argument relative to the strict sense of functional information.
One of the differences here is that the amino acid sequence is directly tied to function, and is therefore a direct cause of it.
The various micro states of atoms may not matter and are caused by known environmental conditions.
Gpuccio could simply modify his argument to 500 linear sequential bits. Enabling that correction is very valuable.
Some good suggestions here, @colewd.
Recall that this example came to mind because of a claim that no example of FI > 500 bits has been seen outside of living things. Your suggestion would make that a moot point, since something like “linear sequential bits” is really only relevant to genomes. That may seem to be confounding to @gpuccio, but actually is a positive development, since it removes from this discussion the weak argument (only intelligence is known to create FI > 500 bits) that brought us here. This isn’t a positive case for design, it is a very weak appeal to ignorance. ID proponents are well-served to cast it aside.
There is an additional “calculation” of @gpuccio’s that needs to be discussed. This is the one where he estimates the probabilistic resources of the biosphere. I criticized this for other reasons, but @gpuccio’s calculation is pretty similar to the one I used for the states of water molecules in tornadoes. If, at the end of all this, @gpuccio discards this additional calculation (which, in the bigger scheme of things, is not really necessary), then all of this will have had a positive outcome, and helped to direct ID arguments in more productive directions.
It’s also relevant to human design artifacts like computers, etc. We know minds can create 500 bits of linear sequential information.
Your point was very valuable in thinking about the overall design argument. The idea of the forces of nature creating FI is intriguing.
Please have a look. As far as I can determine, a lot of theoretical work has been done on open-ended evolution, but no one has “cracked” it yet when it comes to making an EA that is open-ended.
People on this forum are exaggerating what an EA can achieve.
If evolution describes how life moved from a single cell to what it is currently, then obviously all the novelty observed in living organisms is brought about by evolution.
Why do you think biologists are not claiming this?
The way I understand it, an algorithm models a theory. If the algorithm can accurately achieve or predict what happens in real life, the theory is good. The difference between what the algorithm can do and what happens in real life is the gap between the theory and reality.
In evolutionary science, novelty is brought about through variation and natural selection. EAs work on the same principle. It’s significant that EAs cannot achieve the same creative power that is observed in nature.
This could be simply because computers do not have the kind of calculating power that nature has… or there is more going on in nature than what is modelled as variation + NS.
However, I am surprised that people here can’t even accept the possibility. It shows a kind of dogmatic commitment.
To be frank, I don’t get how anything can be “calculated” without a mind. Even a computer, which is mindless, calculates stuff because human beings programme it to. The better word would be “accidenting”… or perhaps “mindless random God knows what”.
It takes a mind to calculate. So the analogy of “computing life” doesn’t really work unless someone is doing the programming IMO.