I watched a video on YouTube called “I programmed some creatures. They Evolved.” about an algorithm that simulates evolution (I suppose behavioral evolution). This is very simple and par-for-the-course for many of you, but I was completely blown away by everything about this, starting with his first example where only natural selection is operating on his creatures—no mutation. The whole experience was a great visualization that gave me a nice sense of how evolution works and the problems it can solve.
At an hour long, this isn’t a short video, but for those who are willing to put in the time, I would love to hear what all of you think:
Did you like it?
Did you feel it had problems I should know about?
Does anybody know of software like this (with visuals—I like my picture books) I can get my hands on and play around with?
An old website that can only be accessed using an old browser that can still run Flash programs has a nice real-time evolutionary simulation that evolves cars to drive on increasingly rugged terrain: http://boxcar2d.com/
You can’t access it with modern browsers as they refuse to run Flash, but if you have a virtual machine running (for example) an old Windows XP installation with an old browser, you can still access the website.
The advantage of this new version is that it runs a lot of cars in parallel, rather than testing each car in the population individually (so you see results faster). But there are fewer options for customizing things like the number of possible wheels, population size, selection algorithm, and so on.
Avida-ED is a browser version of the evolution simulator Avida;
It’s much less visually stunning than BoxCar2D and things like that, because it’s just colored squares on a grid, but what goes on behind the scenes is very interesting. It actually simulates the capacity for self-replication, and the digital organisms use and compete for limited “resources” (CPU cycles and memory space), making it a true digital life simulator. Mutations can affect an organism’s ability to replicate and how that replication happens.
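If you want a feel for what that means in practice, here is a toy sketch of the idea (my own assumption-heavy illustration, not Avida’s actual machinery): organisms share a fixed budget of CPU cycles each update, “memory space” caps the population, and copying errors can change how many cycles a child needs to replicate. Run it and the average replication cost tends to drop, because faster replicators take over.

```python
import random

class Organism:
    def __init__(self, cycles_to_replicate):
        self.cycles_to_replicate = cycles_to_replicate  # cost of making one copy
        self.progress = 0

    def step(self):
        """Spend one CPU cycle; return a child if a copy is completed."""
        self.progress += 1
        if self.progress >= self.cycles_to_replicate:
            self.progress = 0
            # copying is imperfect: the child's replication cost can mutate up or down
            child_cost = max(1, self.cycles_to_replicate + random.choice([-1, 0, 1]))
            return Organism(child_cost)
        return None

MAX_POPULATION = 50       # limited "memory space"
CYCLES_PER_UPDATE = 100   # limited CPU budget shared each update

population = [Organism(cycles_to_replicate=20) for _ in range(5)]
for update in range(200):
    for _ in range(CYCLES_PER_UPDATE):
        org = random.choice(population)   # CPU cycles are a contested resource
        child = org.step()
        if child is not None:
            population.append(child)
            if len(population) > MAX_POPULATION:
                # no free memory: a random resident gets overwritten
                population.pop(random.randrange(len(population)))

avg = sum(o.cycles_to_replicate for o in population) / len(population)
print(f"average cycles needed per replication after 200 updates: {avg:.1f}")
```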
Here’s one that evolves a picture of your choosing:
Thanks for the link. I liked the video. It has a very mellow vibe. It’s demonstrating, not arguing. In addition to demonstrating evolution, it taught me a little about neural networks and programming.
Two things I think could have been emphasized more.
First, in the example with no mutation, I think a deeper explanation of the amount, nature, and source of the variation within the initial population might be helpful.
Second, there was no mention of genetic drift. That might be beyond the scope of this simple demonstration, though.
Hi Matt
-I thought the program was very interesting.
-The model and algorithms were different than in biology, but this is to be expected.
He starts from a more compact code (letters, and computes with binary code). Biology starts with a simpler 4-letter code (DNA) and converts it to a 20-letter code (proteins).
-This was more sophisticated than prior evolutionary algorithms I have seen.
@Rumraket beat me to BoxCar2D, but I can add a few comments.
Genetic Algorithms are a very simple idea:
1. Start with a population of "random" objects.
2. Evaluate the performance of each object by some criteria.
3. Discard some of the worst performing objects.
4. Create "mutants" of the remaining objects and add them to the population.
5. Go to #2.
It can be more complicated, with multiple performance objectives, complex rules for culling the population, and different modes of mutation, depending on the application. GAs are very efficient at solving problems, on the order (Big “O”) of N x log(N). Practically speaking, this means those “Impossible to evolve” sequences might be found in a few thousand cycles.
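To make those five steps concrete, here is a minimal sketch in Python. The toy objective (matching a fixed target string), the alphabet, and all parameter values are my own illustrative choices, not anything from the video or BoxCar2D.

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"     # assumed toy objective
ALPHABET = string.ascii_uppercase + " "
POP_SIZE = 100
KEEP = 20              # survivors per generation
MUTATION_RATE = 0.02   # per-character chance of a random change

def random_individual():
    # step 1: random starting objects
    return "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))

def fitness(ind):
    # step 2: score by some criterion (here, the number of matching characters)
    return sum(a == b for a, b in zip(ind, TARGET))

def mutate(ind):
    # step 4: copy with occasional random changes
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in ind
    )

population = [random_individual() for _ in range(POP_SIZE)]
generation = 0
while max(population, key=fitness) != TARGET:
    population.sort(key=fitness, reverse=True)
    survivors = population[:KEEP]      # step 3: discard the worst performers
    population = survivors + [         # step 4: refill with mutants of the survivors
        mutate(random.choice(survivors)) for _ in range(POP_SIZE - KEEP)
    ]
    generation += 1                    # step 5: back to step 2

print(f"reached the target after {generation} generations")
```

Run it a few times and you can watch selection plus mutation find a 28-character string that blind random search would essentially never stumble on, which is the point about “impossible to evolve” sequences.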
I agree that genetic algorithms are simple, but what I find fascinating (and non-trivial) is #2. Defining the criteria and what performance means may be very complex and require a lot of thought.
Yes and no. Defining the criteria and performance to accomplish a particular purpose can be quite difficult, but the GA will happily optimize whatever it is given. In my first experience tinkering with a GA, I set up a sort of constrained optimization. At least that’s what it was supposed to do - the GA found a way around my constraint and ignored the goal I was hoping for.
I ended up changing the code so that units past the constraint could no longer reproduce directly, but could still be one of the “parents” in mixed reproduction. This was just my tinkering, but I found it improved the speed of optimization so I kept it and excluded those results from the final results.
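In case it helps to see the shape of that rule, here is a hypothetical sketch (my guess at the structure, not the actual code): individuals that violate the constraint stay in the population and can serve as one parent in mixed reproduction, but never reproduce directly on their own.

```python
import random

def violates_constraint(ind):
    # placeholder constraint; the real one depends on the problem
    return sum(ind) > 10.0

def crossover(a, b):
    # simple uniform crossover between two parents
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def mutate(ind, rate=0.1, scale=0.5):
    return [x + random.gauss(0, scale) if random.random() < rate else x for x in ind]

def next_generation(population, n_offspring):
    legal = [ind for ind in population if not violates_constraint(ind)]
    offspring = []
    for _ in range(n_offspring):
        if legal and random.random() < 0.5:
            # direct reproduction: only constraint-respecting individuals qualify
            offspring.append(mutate(random.choice(legal)))
        else:
            # mixed reproduction: either parent may sit past the constraint
            a, b = random.sample(population, 2)
            offspring.append(mutate(crossover(a, b)))
    return offspring

# usage: individuals are just lists of numbers here
pop = [[random.uniform(0, 5) for _ in range(4)] for _ in range(20)]
pop = next_generation(pop, 20)
```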
I also found that #3 makes a big difference. Larger populations are generally better, but a varying population size with occasional bottlenecks seemed to improve convergence too.
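One simple way to schedule that kind of bottleneck, with arbitrary numbers just to show the shape of it:

```python
def population_size(generation, base=200, bottleneck_every=50, bottleneck_frac=0.1):
    """Shrink the population sharply at regular intervals, otherwise keep it at base."""
    if generation > 0 and generation % bottleneck_every == 0:
        return max(2, int(base * bottleneck_frac))
    return base
```

Each generation you would cull or refill the population to population_size(generation) before the mutation step.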