A Complexity Analysis of Evolutionary Algorithms


(Dan Eastwood) #1

I’m out of my depth in the biochemistry, but combinatorial problems I can address. There is a computer science paper showing that Genetic Algorithms are very efficient, O[n \log(n)], which is about as efficient as it gets. I will start a new topic and add a link here.


Gauger and Mercer: Bifunctional Proteins and Protein Sequence Space
Examining "Signature in the Cell"
(Dr. Patrick Trischitta) #2

This is a great topic to discuss and to tie in with entropy and information theory. I am sure the genetic code optimizes some type of energy conservation. I am not remotely knowledgeable in this area but would love to hear what has been done here.


(S. Joshua Swamidass) #3

Give us the story.


(Dan Eastwood) #4

You guys are running ahead of me! Here is the paper:

Explaining Adaptation in Genetic Algorithms With Uniform Crossover: The Hyperclimbing Hypothesis

Abstract:

The hyperclimbing hypothesis is a hypothetical explanation for adaptation in genetic algorithms with uniform crossover (UGAs). Hyperclimbing is an intuitive, general-purpose, non-local search heuristic applicable to discrete product spaces with rugged or stochastic cost functions. The strength of this heuristic lies in its insusceptibility to local optima when the cost function is deterministic, and its tolerance for noise when the cost function is stochastic. Hyperclimbing works by decimating a search space, i.e. by iteratively fixing the values of small numbers of variables. The hyperclimbing hypothesis holds that UGAs work by implementing efficient hyperclimbing. Proof of concept for this hypothesis comes from the use of a novel analytic technique involving the exploitation of algorithmic symmetry. We have also obtained experimental results that show that a simple tweak inspired by the hyperclimbing hypothesis dramatically improves the performance of a UGA on large, random instances of MAX-3SAT and the Sherrington Kirkpatrick Spin Glasses problem.

Basic summary:

Genetic Algorithms are efficient at solving complex problems. The notation O[n \log(n)] reads “Big Oh of n log(n)”, which describes the number of operations needed to complete a task, to an order of magnitude. For example, a simple bubble sort is O[n^2], and the time needed to sort n items grows “on the order of” n squared.
The time needed for GAs to solve tasks grows much more slowly: on the order of n times log(n), which is just a bit more than linear growth.
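
To make that concrete, here is a tiny Python sketch (mine, not from the paper) that counts the comparisons a bubble sort makes. The count grows roughly as n squared, which is what the O[n^2] label is telling you:

```python
import random

# Count comparisons in a bubble sort to see the ~n^2 growth.
def bubble_sort_comparisons(items):
    items = list(items)
    comparisons = 0
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            comparisons += 1
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return comparisons

for n in (10, 100, 1000):
    print(n, bubble_sort_comparisons(random.sample(range(10 * n), n)))
# Prints roughly n*(n-1)/2 comparisons: 45, 4950, 499500.
```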

Combinatorial problems like the number of arrangements needed to discover new proteins tend to be O[n!] (n factorial) when we try combinations at random, but GAs can complete these tasks orders of magnitude more efficiently than random search.

Disclaimer: This is a computer science paper, not a biology paper. Some of the things the author does to tune the algorithm may not apply in biology. This affects final optimization, but doesn’t have much effect on efficiency (to an order of magnitude). I assert that biological systems do not need to be optimally efficient in order to function.

edit: typo in formula corrected.


(Dan Eastwood) #5

For a simple example, consider a combinatorial problem on 42 items; 42! is about 1.4 \times 10^{51}. A computer would take a long time to explore all these combinations randomly or iteratively.

We expect GAs to be able to solve this much more efficiently; 42 \times \log_{10}(42) is about 68.
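
For anyone who wants to check those numbers, a quick Python computation (my own, just arithmetic):

```python
import math

n = 42
print(math.factorial(n))      # about 1.4 x 10^51 possible arrangements
print(n * math.log10(n))      # about 68.2
```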

The efficiency of GAs also depends on the effective size of the population. I wasn’t planning to go into those details, but it may come up in discussion.


(Retired Minister) #6

I wasn’t aware of that! It makes sense but it is still kind of amazing.

That Omega-notation really brings back memories. Long ago I taught analysis of algorithms and I still recall how I had to prepare a lot of “remedial math” handouts for the students who had no basic grasp of the relative “speed of growth” of various mathematical functions. [How can one analyze algorithms if one, for example, doesn’t immediately recognize that N^3 grows more rapidly than 2N^2? And Omega-notations made many of the undergrad students space out, especially when a logarithm was involved. The Bell Labs engineers in my classes tended to do fine but so many of the others lacked solid high school Algebra II proficiency.]

Mod edit: quoted typo corrected too.


(Dan Eastwood) #7

I should probably state this assumes sufficient computational resources to explore the parameter space. A GA running with a population size of one isn’t going to be very efficient, while a population size of 1000 is more than enough for most problems.

Stray thought: I bet there is a connection between the efficiency of GA search and the entropy of the population!

It’s definitely amazing. I am automatically suspicious when people start throwing around arguments about the impossibility of evolution, because there is rarely any effort to actually characterize how evolution functions. It’s also more math than most people are familiar with, and they may not even recognize when they are making assumptions, much less incorrect assumptions.

When did you teach at Bell Labs? My father worked there for a time in the 1960’s. He was a physicist working on computer science problems.

Brag: My Dad and Doug McIlroy invented macros.


(S. Joshua Swamidass) #8

I have a couple questions @Dan_Eastwood.

  1. What is the complexity in time if we note that individuals can be processed in parallel? Does it reduce to time O(log N) if we allow processing of individuals in parallel?

  2. What are the assumptions or conditions required to make this proof? What class of problems does it apply to and fail on?

  3. Can you produce demonstration code?


(Retired Minister) #9

Yes. Many people react from uninformed intuition (which they nevertheless trust). I once denied evolutionary processes. I didn’t understand them. And if one has limited understanding, the idea of such “basic” processes producing the amazing things we observe seems incredible. Yet we live in an incredible universe which often amazes us. My lifetime educational experience has brought me to a degree of empathy for the scholars of past centuries who had to grapple with the idea that natural processes like gravity explained the motion of heavenly bodies instead of simply assuming “God commands angels to push the planets in their circuits.” That change of mindset was a difficult one for many. Yet, eventually theologians realized that there was no conflict between a sovereign creator and an orderly world of natural processes.

I hope we never lose our amazement as we observe natural processes. Evolution is spell-binding to observe! If I didn’t have personal experience with GA/EA (Genetic Algorithms/Evolutionary Algorithms), I think I might have had a more difficult time grasping how evolutionary processes in nature could build such amazing structures. There’s something almost “magical” about writing an algorithm with just a few basic “rules” within it and sitting back and watching it build efficient solutions to complex problems that I otherwise could not imagine solving by my own conscious “intelligent design”!

If I were to return to the classroom and had a chance to teach evolutionary biology, I think I would make intensive use of some of the great interactive animations where students can experiment with their own evolution scenarios.


(Retired Minister) #10

I don’t think I’m understanding that question. Obviously, if we take an O(log N) algorithm and multiply it by N, it is still an O(log N) algorithm.

So I assume that I’m not correctly understanding your meaning.


(Dan Eastwood) #11

Please accept that I can’t answer these questions in full today. Apologies for the hand-waving.

A1) I think parallel processing is inherent in any population size greater than one, and my first thought is NOT to expect further reduction. However, if we allow the population to grow into all positive fitness space, and allow complete sexual mixing (Uniform Crossover in this paper), then we might see improvement towards O(log(n)). At this scale of parallel processing, multiple mutations become a significant influence too. Best guess: given reasonable limits on total computation for each generation, it would be hard to reach O(log(n)), but O[k \times \log(n)] for some constant k may be possible.

A2) I will try to pull those assumptions out of the paper. In general, GAs are particularly good at problems that may have local maxima. GAs will be inefficient for problems where derivatives can be evaluated, because they cannot use that information to “jump” towards optimal solutions.

A3) Not quickly, but it would be great to have a working demonstration available here. Let me think on it. Meanwhile: Boxcar2d.com and rednuht.org/genetic_cars_2/ offer simple demonstrations, and http://www.itatsi.net/rules.htm has a GA-based word game you can play.
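
In the meantime, here is a rough sketch of what a bare-bones demonstration could look like: a GA with uniform crossover maximizing the number of 1-bits in a string (the “OneMax” toy problem). This is my own illustration, not code from the paper, and none of the parameters are tuned.

```python
import random

def fitness(bits):
    """OneMax toy fitness: just count the 1-bits."""
    return sum(bits)

def uniform_crossover(a, b):
    """Each bit is taken from either parent with probability 1/2."""
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def mutate(bits, rate):
    """Flip each bit independently with a small probability."""
    return [1 - b if random.random() < rate else b for b in bits]

def run_ga(n_bits=100, pop_size=200, generations=200, mutation_rate=0.005):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n_bits:
            return gen, pop[0]
        parents = pop[: pop_size // 2]      # simple truncation selection
        children = [pop[0]]                 # keep the current best (elitism)
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            children.append(mutate(uniform_crossover(a, b), mutation_rate))
        pop = children
    return generations, max(pop, key=fitness)

gen, best = run_ga()
print(f"best fitness {fitness(best)} after {gen} generations")
```

With a population of a couple hundred this typically finds the all-ones string in a few dozen generations, a tiny fraction of the 2^100 strings a blind search would have to wade through.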


(Dr. Patrick Trischitta) #12

The Bell Labs engineers were always smarter. :sunglasses:


(Retired Minister) #13

Those are indeed classics. However, do you know of any “ecological” simulations which do a good job of illustrating evolutionary processes? (I’ve even wondered about a series of simulations that start with something extremely basic like Conway’s classic Game of Life and work their way up. A lot of students need that kind of slow start and gradual adding of complexity to really grasp how powerful simple rules can be.)

I just felt bad for them because many had worked a 9 to 12 hour day before coming to my night class.

Those were interesting times. That was long before deregulation in the telephone industry brought the collapse of stock values and the huge cash flow which had long funded so much technology at those labs.


#14

I also got confused about whether parallel processing affects big-O time complexity. I think about it in terms of number of operations, not time taken per se. Or I have seen it defined relative to some baseline operation, to remove, e.g., any dependency on processor speed.

But there is the issue of whether the particular GA algorithm is parallelizable; one could then look at how that affected time complexity. Some googling found this but I have not read in detail.
http://geco.mines.edu/workshop/aug2011/05fri/parallelGA.pdf

In general, doesn’t the complexity depend on the GA algorithm, eg how does complexity of fitness function factor in?


(Dan Eastwood) #15

That works too. I was thinking of the biological equivalent, where every member of the population is evaluating its own fitness function in real time; an inherently parallel process.

I’ll take a look at that link later.

The topic title is a bit misleading. A basic EA is dead simple. You can make it more complex by adding constraints, multiple goals, and varying the elimination rules, but it’s not complex all by itself.
As you note, the complexity that matters is in the fitness function.

I will look into the examples used in the paper - I have no idea what MAX-3SAT and Spin Glasses problems are, other than they are supposed to be “hard”, and possibly “NP-Hard”. I can’t do that right now, so if someone else wants to jump in and do this, be my guest.


(Dan Eastwood) #16

Links to Wikis on the fitness problem examples used in the paper. Putting these here for my reference, and if anyone wants to follow along. It looks like both problems have solutions that can be represented as bit strings.

MAX-3SAT is a canonical complete problem for the complexity class MAXSNP

Sherrington Kirkpatrick Spin Glasses problem
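
To give a flavor of what those fitness functions look like on bit strings, here is a rough sketch (my own illustration, not code from the paper; the clauses and couplings below are made-up toy inputs, and the usual SK normalization is left out):

```python
import random

def max3sat_fitness(bits, clauses):
    """Count satisfied clauses. Each clause is three (variable index, negated?) literals."""
    return sum(1 for clause in clauses
               if any(bits[i] != negated for i, negated in clause))

def sk_fitness(bits, couplings):
    """Sherrington-Kirkpatrick: spins are +1/-1; fitness is the negated energy."""
    spins = [1 if b else -1 for b in bits]
    n = len(spins)
    energy = -sum(couplings[i][j] * spins[i] * spins[j]
                  for i in range(n) for j in range(i + 1, n))
    return -energy   # higher fitness = lower energy

# Toy example: 5 variables, 4 random clauses, random Gaussian couplings.
n = 5
clauses = [[(random.randrange(n), random.random() < 0.5) for _ in range(3)] for _ in range(4)]
couplings = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
bits = [random.randint(0, 1) for _ in range(n)]
print(max3sat_fitness(bits, clauses), sk_fitness(bits, couplings))
```

Either one could be dropped in as the fitness function of the GA sketch above.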