The population size is 1. I’m using the same (1+1) evolutionary algorithm used in chapter 5 of Compositional Evolution, because I thought this was a good excuse to play with some of the concepts in that book as well. I didn’t expect to be able to stick with that algorithm this long, but so far it has been fruitful.
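For concreteness, here’s a minimal sketch of what that loop looks like in Python. The `score` and `mutate` functions stand in for whatever the actual simulation does, and the generation count is arbitrary; this isn’t the real code, just the shape of a (1+1) EA.

```python
def one_plus_one_ea(parent, score, mutate, generations=10_000):
    """Minimal (1+1) EA: one parent, one mutant child per generation;
    the child replaces the parent only if it scores at least as well."""
    best = parent
    best_score = score(best)
    for _ in range(generations):
        child = mutate(best)
        child_score = score(child)
        if child_score >= best_score:  # accepting ties allows neutral drift
            best, best_score = child, child_score
    return best, best_score
```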
Only point mutations are allowed right now, no inserts, deletes, or larger-scale changes; I ultimately intend to include more variety here. Also, the genome is represented as a character string over the ACGT alphabet, and the probability of mutation varies with the type of mutation, e.g. “transitions” are more likely than “transversions.”
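Roughly, the point-mutation step works like the sketch below; the per-site rate and the transition/transversion split here are made-up numbers, not the values I actually use.

```python
import random

TRANSITION = {"A": "G", "G": "A", "C": "T", "T": "C"}        # purine<->purine, pyrimidine<->pyrimidine
TRANSVERSIONS = {"A": "CT", "G": "CT", "C": "AG", "T": "AG"}

def mutate(genome, per_site_rate=0.01, transition_prob=0.7):
    """Point-mutate an ACGT string; transitions are more likely than transversions."""
    out = []
    for base in genome:
        if random.random() < per_site_rate:
            if random.random() < transition_prob:
                out.append(TRANSITION[base])                  # A<->G, C<->T
            else:
                out.append(random.choice(TRANSVERSIONS[base]))
        else:
            out.append(base)
    return "".join(out)
```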
I have not yet implemented recombination, but intend to.
The first set of experiments I posted today was not about constructive neutral evolution. It explored what kinds of adaptive solutions are possible in this space, mainly just to understand the domain a little better. I used a particular evolutionary algorithm that is comparable to hill-climbing optimization.
The second set of experiments was about constructive neutral evolution. While the same evolutionary algorithm was employed, the starting population already had an optimal solution, so no “hill climbing” occurred. Instead, the evolutionary algorithm provides negative selection, preventing loss-of-function mutations from persisting. Here, “loss of function” means the inability to achieve a perfect score as a team. This selection does not favor any particular way of achieving that score, so the distributed solutions that emerge are not adaptive.
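Concretely, the run amounts to something like this sketch; `team_score`, `mutate`, and the perfect-score value are placeholders for the real simulation, not its actual interface.

```python
def neutral_drift(genome, team_score, mutate, perfect_score, generations=100_000):
    """Drift under purely negative selection: start from a genome that already
    achieves the perfect team score and keep only mutants that still achieve it.
    No preference is expressed among the genomes that pass the filter."""
    for _ in range(generations):
        candidate = mutate(genome)
        if team_score(candidate) == perfect_score:  # loss-of-function mutants are rejected
            genome = candidate
    return genome
```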
Here’s what the scores look like if the four identical players start in the same location with the same genome that encodes a full solution. The fact that all of them tend toward a score of 3 suggests that each specializes on a single enemy. I wanted to visualize that more explicitly using player location over time, but so far the aggregate data seems too noisy to make anything clear.
Haven’t read the right comics. Do I really have to go see some superhero movie? I hope not.
At The Skeptical Zone in February 2016 (http://theskepticalzone.com/wp/wright-fisher-and-the-weasel) I explored the connection between Dawkins’s Weasel and the Wright-Fisher model of theoretical population genetics. In the Weasel, one always moves uphill on the fitness surface; in effect, selection is infinitely strong. In a Wright-Fisher model that comes close to being a Weasel, one sometimes moves downhill.
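A toy contrast between the two, just to make the point about selection strength; the brood size, mutation rate, and the +1 fitness weighting are arbitrary choices for the sketch, not anything from either model’s canonical presentation.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in s)

def weasel_step(parent, brood=100):
    """Weasel: keep the best of the parent plus many mutant offspring,
    so fitness never decreases (selection is effectively infinitely strong)."""
    return max([parent] + [mutate(parent) for _ in range(brood)], key=fitness)

def wright_fisher_step(population):
    """Wright-Fisher: sample the next generation in proportion to fitness, then mutate.
    With a finite population the fittest genotype can fail to be sampled,
    so mean fitness sometimes goes down."""
    weights = [fitness(s) + 1 for s in population]   # +1 keeps zero-fitness strings samplable
    parents = random.choices(population, weights=weights, k=len(population))
    return [mutate(p) for p in parents]
```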
To that end, here’s an example session without the X-Men trappings. The 4 blue players/agents have to take out the purple ones, using their ‘genome’-encoded action sequences. All 4 players contribute partially to the overall success. They evolved via neutral (with respect to overall success) mutations from a ‘genome’ that allowed one player to succeed alone while the other 3 contributed nothing to the score.
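I haven’t spelled out the encoding here, but the decoding step is in the spirit of the sketch below; the triplet reading frame and the particular action table are invented for illustration, not the actual scheme.

```python
# Hypothetical decoding: read the ACGT string three bases at a time and map
# each codon (0..63 when read as a base-4 number) onto a small action set.
ACTIONS = ["move_up", "move_down", "move_left", "move_right", "attack", "wait"]
BASE_VALUE = {"A": 0, "C": 1, "G": 2, "T": 3}

def decode(genome):
    actions = []
    for i in range(0, len(genome) - 2, 3):
        codon = genome[i:i + 3]
        value = sum(BASE_VALUE[b] * 4 ** k for k, b in enumerate(codon))
        actions.append(ACTIONS[value % len(ACTIONS)])
    return actions
```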
After my sabbatical ended I didn’t have as much time to work with this as I had wanted. But I’ve been noodling around more recently, trying to get a better sense of how it behaves and what it might or might not be useful for.
It is possible to set up challenges which are very difficult or practically impossible to solve because there is no adaptive pathway. So in addition to illustrating constructive neutral evolution, it can also illustrate some of the limitations of RM + NS (or really just RM). Obviously there are other options in that regard (such as the problems discussed in Watson’s Compositional Evolution). But I like that this has a concrete and visually dynamic phenotype.
Happy to discuss possible applications & modifications if anyone thinks they might have a need.