Eastwood's Evolutionary Simulation

(S. Joshua Swamidass) #1

Continuing the discussion from Hunt's 2007 Critique of Axe:

Sounds interesting.

@Dan_Eastwood, can you please give us the full details of your simulation? Ideally, provide the code too. Even better, put it on http://repl.it so it's easy to fork and use. I ask everyone to refrain from commenting until after he explains it.

1 Like
(Dan Eastwood) #2

@swamidass You misunderstand: I did not create a simulation of evolution. I used a genetic algorithm to find a type of optimum solution in a complex setting having nothing to do with biology. Solutions evolve, but this was a "for fun" exercise in learning about genetic algorithms.
Having come this far, I’ll describe it a bit. You can decide whether it warrants discussion. I can’t present the code here (ask me privately).

The setting is a game, written by a friend of mine who occasionally consults me on mathematical/statistical questions. In this game, players can design their own ships and use them in play. Without going into details, the ship design process is very complex, spreadsheet-assisted, and deliberately made difficult so that players cannot easily create a single game-winning design. For anyone reading this who might be familiar with the game Star Fleet Battles: this game is not entirely different, but it tries to capture any sci-fi setting rather than a single one.

http://www.adastragames.com/squadron-strike/

This is getting long and I have a busy day, so here is the short version: I wrote code to implement the combat damage allocation in this game, simulated survival time for ships under a baseline "attack", looked at the resulting survival distributions, and calculated an approximate distribution of how much total combat damage each ship could do in "return fire". This is a sort of utility function, and it turns out to be a good measure of effectiveness in the actual game. It is related to a mathematical relationship used in real military simulations (Lanchester's Laws). That's the fitness function, and it was the most difficult part of the coding.
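To give a rough sketch of the idea (the function names, damage model, and numbers here are my own illustration, not the game's actual rules or my original code): survival time under a baseline attack can be estimated by simulation, and a Lanchester-flavored utility is then that survival time multiplied by the ship's firepower.

```python
import random

def survival_turns(hit_points, attack_per_turn, trials=2000, rng=None):
    """Monte Carlo estimate of mean turns survived under a baseline attack.

    Per-turn damage is drawn uniformly around attack_per_turn as a crude
    stand-in for the randomness of the real game's damage allocation.
    """
    rng = rng or random.Random(42)
    total = 0
    for _ in range(trials):
        hp, turns = hit_points, 0
        while hp > 0:
            hp -= rng.uniform(0.5, 1.5) * attack_per_turn
            turns += 1
        total += turns
    return total / trials

def expected_return_fire(hit_points, firepower, attack_per_turn):
    """Utility: total damage a ship is expected to deal before it dies.

    This mirrors the intuition behind Lanchester's square law, where a
    unit's value scales with both its staying power and its firepower.
    """
    return firepower * survival_turns(hit_points, attack_per_turn)
```

A tougher ship (more hit points) or a better-armed one scores higher, and the two multiply rather than add, which is the Lanchester-like behavior.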


A ship design requires a string of roughly 60-100 numbers and MANY choices on the part of the player. The "genetic" part of my program did not implement the whole design process, but only the ways players could rearrange a set of "parts" within a fixed design, some arrangements leading to greater survival than others.
I coded random variation into the arrangement process, and later added a second step for a sort of sexual mixing: child ships could receive complete sections from either parent. The way I did this does NOT match how sexual mixing and mutation actually work in biology (that was not my intent).
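A sketch of what those two operators might look like, treating a design as a list of sections, each holding a list of parts (the representation and names are my invention for illustration, not the original code):

```python
import random

def mutate(design, rate=0.1, rng=random):
    """Random variation in the arrangement: swap two parts within a
    section. Only the arrangement changes; the set of parts is fixed,
    matching the 'rearrange within a fixed design' constraint."""
    child = [section[:] for section in design]
    for section in child:
        if len(section) > 1 and rng.random() < rate:
            i, j = rng.sample(range(len(section)), 2)
            section[i], section[j] = section[j], section[i]
    return child

def crossover(parent_a, parent_b, rng=random):
    """Sexual mixing of a sort: the child receives each complete
    section from one parent or the other."""
    return [rng.choice([sa, sb])[:] for sa, sb in zip(parent_a, parent_b)]
```

Note that crossover here passes whole sections intact, which is the "complete sections from either parent" behavior, not biological recombination.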

The end result: the GA found arrangements that preserved key functions of the ship designs, optimizing their utility for the game. What humans perceived to be optimal arrangements weren't always so, and the point-value system within the game was changed to more closely reflect actual in-game utility (fitness) and to give a better way to set up "fair" scenarios. In the process I observed a GA in action, which was interesting and fun (which was my intent). I can't quickly describe all the different ways I looked into this data, so I won't try.

This is intentionally a "toy": I created it to learn about GAs, but I see no barriers to applying this simple optimization method to much larger, more complex problems and fitness functions. The GA itself is almost trivially easy to code; it's the fitness function that is difficult.
Doing this in the setting of a game gave me a completely known function, so all I had to do was figure out how to code it. Coding a "real" biological fitness function is a non-trivial problem.
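To make the "trivially easy" claim concrete, here is a bare-bones GA loop against a deliberately simple stand-in fitness function (maximize the number of 1-bits in a bit string). This is a generic textbook sketch, not my ship program; swapping in a harder fitness function is the only change a real problem would need.

```python
import random

def run_ga(fitness, length=20, pop_size=30, generations=100, seed=0):
    """Minimal genetic algorithm: tournament selection, one-point
    crossover, per-bit mutation. The fitness function is the only
    part that changes for a new problem."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament selection of size 2
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            pa, pb = pick(), pick()
            cut = rng.randrange(1, length)          # one-point crossover
            child = pa[:cut] + pb[cut:]
            child = [g ^ (rng.random() < 0.02) for g in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = run_ga(sum)  # toy fitness: count of 1-bits
```

The whole search engine is about twenty lines; everything hard about a real application lives inside `fitness`.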

2 Likes
(Dan Eastwood) #3

For anyone interested in playing with a genetic algorithm, I recommend you start at http://boxcar2d.com/. WARNING: You may end up spending the rest of your day playing with this! :slight_smile:

This site: http://rednuht.org/genetic_cars_2/ is similar to Boxcar2d; it runs much faster but does not offer nearly as many options and settings.

To recap my conclusion above, genetic algorithms are quite simple to code and use, but fitness functions can be arbitrarily hard to implement.

I did not discuss the difficulty of optimizing a given fitness function, which is a fair question, but not one I can address today.

2 Likes
(S. Joshua Swamidass) #4

I was about to post the same.

Nope, this is exactly along the lines of what I imagined you were doing. The key point for @colewd is that @Dan_Eastwood was correct. You did not encode the right answer into the simulation. Instead, the simulation found an answer that you did not know a priori.

1 Like
(S. Joshua Swamidass) #5

Why not? It would be a good supplement to this. Is there any way to modify it a bit to make it easier to run and make sense of, and perhaps avoid any IP issues?

This part might be the most interesting part to see, so others can play with it too.

(Dan Eastwood) #6

There are reasons (detailed privately) that I cannot share this code, but it is really not code that anyone else could pick up and use.

Perhaps I can provide a description of the basic algorithm and how/why I did some things in certain ways. Not today, but I’ll try to make time for this.

1 Like
(Dan Eastwood) #7

One lesson I learned about GAs is very relevant here, I think.

I wanted to constrain solutions to limit the total size (or carrying capacity) of the ship, because "more is always better" under the fitness function. I penalized the score of solutions that exceeded the space limit, making those solutions "less fit". The GA search always found that it could circumvent the penalty by making the ships even bigger, so that the extra utility outweighed the penalty. I had to abandon the penalty approach and switch to limiting the reproduction of out-of-bounds solutions.
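The two constraint-handling approaches can be contrasted in a few lines (hypothetical names; the sizes and penalty values are illustrative, not from the game):

```python
def penalized_fitness(raw_utility, size, size_limit, penalty_per_unit=5.0):
    """Soft penalty: oversized ships lose score, but can still win if
    raw utility grows faster than the penalty -- the loophole the GA
    kept finding."""
    excess = max(0, size - size_limit)
    return raw_utility - penalty_per_unit * excess

def may_reproduce(size, size_limit):
    """Hard constraint on reproduction instead: out-of-bounds designs
    may exist in the population but leave no offspring."""
    return size <= size_limit
```

Under the soft penalty, a ship far over the limit can still outscore a legal one whenever its raw utility outgrows the penalty; gating reproduction closes that loophole while still letting the population explore near the boundary.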

Lesson learned:
GAs can jump a gap/valley/barrier in the fitness landscape; in fact, it may be hard to prevent them from doing so. In a high-order, multidimensional fitness function, there could easily be multiple paths around any given barrier. Someday I hope to express that thought more formally.

Note: I could have tried even bigger penalties, but this turned out not to be a good optimization approach. The GA did better when it was allowed to explore both sides of the boundary.

The other lesson learned: GAs are smarter than I am. :slight_smile:

1 Like