What A Darwinian Algorithm Designs

Back in 2016, @Eddie gave a challenge, and I answered in a post I want to recover here, continuing the conversation with @EricMH, one of Dr Robert Marks’ PhD students (of the ID movement). @Dan_Eastwood you might want to see this too.


[original post here: Intelligent Design - #131 by Swamidass - Faith & Science Conversation - The BioLogos Forum]

What A Darwinian Algorithm Designs

Before diving into this, I should emphasize that in Biology we do NOT think that randomization + selection is enough to explain the full diversity of life. That simple version of Darwinism was falsified a long time ago. It turns out other mechanisms are quantitatively more important.

[see The Neutral Theory of Evolution and Cancer and Evolution for more information on non-Darwinian evolution]

Still, this is a very strange request. Isn’t the answer to these questions well known?

Is there anyone out there who has, in a professional capacity, actually written an evolutionary algorithm or Darwinian algorithm, who will take the personal responsibility (sans Wikipedia or other internet links) of defining or explaining these terms? And of providing a very clear example where, say, a toaster designer, or a designer of a new hypodermic needle, was totally stumped until he/she programmed a computer to use an evolutionary algorithm, and thus came up with a better toaster or needle?

So this is exactly what I have my PhD in, @eddie. One example we talked a lot about during my undergrad was the use of genetic algorithms to solve difficult design problems (e.g. aircraft wing design http://flab.eng.isas.jaxa.jp/member/oyama/papers/SMC99.pdf, which has since been updated: http://enu.kz/repository/2011/AIAA-2011-5881.pdf). You can read about this on wiki: Wing-shape optimization - Wikipedia.

Closer to home, one of the most widely used drug design software packages, DOCK from UCSF, uses a genetic algorithm to fit molecules into protein pockets: http://dock.compbio.ucsf.edu/. This program wasted about two years of my life in graduate school (for reasons unrelated to its genetic algorithm).

Now, of course, I looked on the internet to get those URLs, but I knew of these papers before searching. Remember, I am a professor and this is squarely within my domain of expertise: machine learning and artificial intelligence applied to biology.

Right now, genetic algorithms (i.e. Darwinian algorithms) are not used frequently because we have better algorithms, like stochastic gradient descent and simplex optimization. GAs are most useful when these other methods do not work well (because analytic gradients are unavailable or give bad hints) and/or when massively parallel resources are available (because genetic algorithms are trivially parallelizable). That being said, in machine learning it is generally accepted that genetic algorithms are extremely effective on most problems (especially when there are multiple solutions), but they have two limitations, illustrated by the sketch after this list:

  1. They are too slow to be preferred over gradient-descent-based methods when only one processor is used. A true “evolutionary” algorithm would automatically scale processing power with population size, handling a population of a million as quickly as a population of one hundred (generation times are the same no matter how large the population). In simulation, however, generation time scales linearly with population size, which makes GAs very slow in practice.

  2. They are good at finding good solutions, but sometimes struggle in the last steps to reach a very high-precision solution. This is one way simplex search does better than a GA: it automatically scales down its steps as it closes in on the right answer.
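To make that concrete, here is a minimal sketch of a generational GA in Python. Everything in it is an illustrative assumption (the one-max fitness function, truncation selection, the parameter values), not taken from the wing-design or DOCK work above. The reproduction loop visits every individual, which is exactly why simulated generation time scales linearly with population size (limitation 1).

```python
import random

def fitness(genome):
    # Illustrative "one-max" objective: count the 1-bits in the genome.
    # A real design problem (wing shapes, docking poses) would replace this.
    return sum(genome)

def evolve(pop_size=100, genome_len=64, generations=200, mut_rate=0.01):
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Reproduction: one-point crossover plus point mutation.
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(genome_len)
            child = [bit ^ (random.random() < mut_rate)
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        # This loop touched every individual, so simulated generation
        # time grows linearly with pop_size.
        pop = children
    return max(pop, key=fitness)

print(fitness(evolve()), "of 64 bits set")
```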

In the context of evolution, neither of these limitations is relevant. In biology, generation time is independent of population size, approximate solutions are fine, large populations improve the search well beyond what is possible with a computer, there are multiple solutions, and the fitness landscape can be rough, with little gradient information. The biological design problem is different from the human design problem. GAs are much better suited to biology than they will ever be to human design problems.

Of course, nothing I’ve written here is new knowledge. It is obvious to anyone in the field. I thought it was well known, even in the ID community. Even Dembski’s work references this at times.

I know scores of ID proponents who are in the software business, many of them having Ph.D.s in electrical engineering, computer programming, etc. They are all skeptical that any genuinely Darwinian programming can produce anything useful.

Of course they will be skeptical. That doesn’t make them right. And I have no idea what “genuinely Darwinian” means. The algorithm is randomization + selection → design. If that isn’t Darwinian, I do not know what is. It is not an elegant solution, and it can be slow in a computer (because it scales linearly with population size), but it is very, very effective. And biology does not even rely exclusively on this strategy.


[The follow-up clarification: Intelligent Design - #165 - Faith & Science Conversation - The BioLogos Forum]

Eddie’s Clarification

Clearly the ID people acknowledge the existence of genetic algorithms, and they also acknowledge that these can be useful. So where I expressed doubt that genetic algorithms could do anything useful at all, I was misstating the ID folks’ position. However, I was correct in saying (in a clumsy, unclear way) that they have doubts whether the most successful genetic algorithms are “purely” Darwinian, i.e., they have doubts that the successful genetic algorithms completely eschew “tuning” of any kind. But I don’t want to make that point myself, lest I stumble in the expression. I will cite one article as an example of the ID critique of genetic algorithms, in which the virtues of the algorithms are granted, but cautions regarding some claims for them are expressed:

http://www.bio-complexity.org/ojs/index.php/main/article/view/BIO-C.2012.1/BIO-C.2012.1

I do not expect you to agree with the conclusions of the article, but at least the article should make clear to you some of the ID remarks on genetic algorithms that I was conveying in an inadequate and unclear manner.


Continue the Conversation?

The paper quoted here is by our friend @Winston_Ewert, a friend of @EricMH. Also, @EricMH has argued that mutual information cannot be generated by randomness + determinism.

As I understand @EricMH’s usage of these terms (which may be idiosyncratic), the examples of evolutionary algorithms I put forward shouldn’t be able to work. I’d like to hear how their theory makes sense of these results, where Darwinian algorithms are able to design things too difficult for humans to design on their own.

@eddie, for the record, thanks for issuing the challenge. I still remember this all these years later.


Of course, here, GA = Genetic Algorithm, not Genealogical Adam. :stuck_out_tongue_winking_eye:


It’s simply that the engineer adds the necessary mutual information to the genetic algorithm.

So you are going to have to explain that more. It’s clear you mean something different than we do, and this is merely a simulation of natural selection, with no apparent cheating. It seems the “requirement” is specified at the level of selection, without a specific sequence target. That means these experiments demonstrate that selection is one way to generate your version of “MI” (whatever that is).

The selection function is certainly an example of the engineer adding mutual information to the search process. Some selection functions help find good solutions, others hinder finding good solutions.
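As a toy illustration of that point (the target string, both fitness functions, and the stripped-down mutation + selection loop below are all made-up assumptions, not anyone’s model of biology): the same search succeeds under a graded selection function and stalls under an all-or-nothing one.

```python
import random

TARGET = [1] * 32  # a hypothetical "design" the search must find

def graded(genome):
    # Partial credit for every matching bit: selection gets a hint
    # from each mutation that moves closer to the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def all_or_nothing(genome):
    # Needle in a haystack: zero signal until the exact target appears.
    return 1 if genome == TARGET else 0

def search(fit, steps=5000):
    genome = [random.randint(0, 1) for _ in TARGET]
    for _ in range(steps):
        mutant = list(genome)
        mutant[random.randrange(len(mutant))] ^= 1  # flip one bit
        if fit(mutant) >= fit(genome):  # selection acts on the fitness signal
            genome = mutant
    return fit(genome)

print("graded selection:", search(graded))                  # typically 32
print("all-or-nothing selection:", search(all_or_nothing))  # typically 0
```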

In this case the selection algorithm mimics something that happens in nature. You have to demonstrate precisely where the analogy breaks down if you want to limit this logic. Essentially I’m claiming that selection can mimic intelligence, and I’m presenting direct evidence to this effect with simulation.

We see this in nature too, with cancer.


Can I ask for a clarification of the usage of Mutual Information in this context? MI isn’t a thing to be generated, but a measurement of the common information between (in this setting) a parent population and the subsequent generation’s population.
What exactly is meant by “generating mutual information”?

PS: Perhaps this should be split out for side comments.

I’m thinking of correlation between DNA and some animal functionality, in this context. Randomly generated DNA isn’t going to turn into anything, just like randomly generated bits won’t turn into a useful program. It takes a whole lot of careful work to craft a program from bits, and this is creation of mutual information: correlation between a string of bits and some useful computation.


There certainly can be residual mutual information in nature, but evolution cannot increase this amount, per the law of information non-growth. So the question becomes: what created the initial mutual information?

Only a halting oracle can overcome the non-growth law, so that must be where the initial mutual information came from.

As far as we know, all physical processes are Turing reducible, so this means the originator of the mutual information must be non physical (or our understanding of physics is radically wrong).

How did you determine that random DNA sequences cannot produce function?

On top of that, much of evolution involves the modification of already functional DNA, not random strings of nucleotides.

If you scramble some organism’s DNA, what happens?

You tell me. You are the one who seems to be saying that random DNA can’t have function. I was wondering if there was any science to back that up.


It dies.

Thank you, I think I understand you now. I prefer to think of this in terms of Kullback-Leibler divergence, or the difference between (from your example) DNA and some [animal] functionality, but these are two sides of the same coin. Almost.

But now we have some disagreement …

I agree random bits won’t turn into a useful program, but neither will a blank page (all 0 bits). I could also rearrange the bits of functional code in a way that destroys its function without substantially changing its information content. Likewise, I could increase the information content of code by adding random bits, also destroying function.
“Function” can only exist somewhere between the extremes of maximum and minimum information. Therefore I suggest K-L divergence, or maybe Hamming distance, would be a better measure of “distance to function.” I think Mutual Information is the wrong measure to use, because it is not a matter of adding information, but of decreasing the distance between the current information and “function.”
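For what it’s worth, here is a worked toy contrast between the two measures (all distributions below are made up for illustration). Mutual information is computed from a joint distribution over two variables, while K-L divergence compares one distribution against a reference, which is closer to the “distance to function” idea.

```python
import numpy as np

def kl(p, q):
    # D_KL(p || q) in bits; assumes p and q are strictly positive and sum to 1.
    return float(np.sum(p * np.log2(p / q)))

def mutual_info(joint):
    # I(X;Y) = D_KL(joint || product of marginals), also in bits.
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    return kl(joint.ravel(), (px * py).ravel())

# Toy joint distribution of two binary variables (rows: X, columns: Y).
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
print("I(X;Y) =", mutual_info(joint))  # ~0.278 bits: X and Y are correlated

# Toy "distance to function": a current distribution vs. a target one.
current = np.array([0.7, 0.3])
target = np.array([0.5, 0.5])
print("D_KL(current || target) =", kl(current, target))  # ~0.119 bits
```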


Mutual information is a K-L divergence.
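To spell out the identity in standard Shannon form (the algorithmic version @EricMH uses elsewhere in the thread is the analogue with Kolmogorov complexities):

I(X;Y) = D_{KL}[\, p(x,y) \,||\, p(x)\,p(y) \,] = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)}

It is zero exactly when X and Y are independent.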

MI is significant because it cannot be generated by randomness + determinism (see section 1.2 of this paper by Leonid Levin), which is the ID argument.

The bigger point is the ID argument is mathematically proven. The controversy is artificial.

Sounds like random DNA doesn’t have a function, then.

It is equivalent, but it does not have the same directionality in terms of increasing or decreasing information in a population. It appears you are conflating Mutual Information and Relative Information. If you insist on MI, then my counter-examples above hold, and your interpretation cannot be correct.

For parent/child populations X and Y, given NO CHANGE between generations, I[X] = I[Y] = I[X;Y]. That is, the information in X and in Y is of equal measure to the MI. But for some change T(Y), Information Non-Growth implies …

I[X; Y] \geq I[X; T(Y)]

… and equality only for T(Y) = Y.

But this does not hold for Relative Information. Let I[F] be the information enabling some functionality F, then with NO CHANGE …

D_{KL}[X || F] = D_{KL}[Y || F]

but with some change T(Y)

D_{KL}[X || F] \neq D_{KL}[T(Y) || F]

… except for the trivial case of T(Y) = Y ***, and we cannot say whether I[T(Y)] has increased or decreased relative to I[X].

*** [EDIT: Not quite correct. There will be a set of changes T^* equidistant to F such that …
D_{KL}[X || F] = D_{KL}[T^*(Y) || F]
… including T(Y) = Y. We still cannot say whether I[T^*(Y)] has increased or decreased.]
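As a toy numeric check of the inequality I[X; Y] \geq I[X; T(Y)] (a sketch using Shannon MI with an arbitrary binary noise channel for T, not a proof):

```python
import numpy as np

def mi(joint):
    # I(X;Y) in bits from a joint probability table (rows: X, columns: Y).
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (px * py)[mask])))

# Correlated binary X and Y.
joint_xy = np.array([[0.4, 0.1],
                     [0.1, 0.4]])

# T: a noisy map applied to Y alone (flips Y with probability 0.2).
T = np.array([[0.8, 0.2],
              [0.2, 0.8]])
joint_xty = joint_xy @ T  # joint distribution of X and T(Y)

print("I(X;Y)    =", mi(joint_xy))   # ~0.278 bits
print("I(X;T(Y)) =", mi(joint_xty))  # ~0.096 bits: processing Y lost information
```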

I agree this is well established mathematics, but the interpretation you are assigning to it is not correct.

addendum: Typos corrected, and sorry if my notation is non-standard, but I’m new to LaTeX.


Perhaps it would if the random DNA has a sequence in it that codes for a protein that the cell uses for some function.

I’m referring to your first example. If I(X:Y) is the algorithmic mutual information, F describes some complex functionality, X is the current state, and E is the evolutionary process, then I(X,E:F) \geq I(E(X):F). So evolution cannot generate any mutual information that is not already implicit at the beginning. Perhaps everyone agrees with this point, but then we have an infinite regress of trying to explain where the initial mutual information came from. This initial source cannot itself be chance + determinism, and thus must be something like a halting oracle, which per our current understanding of physics must be a non-physical entity. Evolution supposedly solves this problem, but according to AIT all it does is push the problem up a level.

Not true, @Patrick. In every sperm and egg, DNA is scrambled. This usually does not result in death. In fact, it functions just fine.
