For a long while I’ve wanted to see a time-lapse view of how an image evolves. I imagined there were dynamics and nuances that were impossible to see watching the evolution second to second. So after writing my last post (and with a small nudge from my friend Daniel), I set the evolver to output an image after each 1% reduction of the remaining error. Starting from the initial state (the baseline counted as 0% fit, or 100% error), it would output a screenshot once error had dropped by 1%, then again when the remaining 99% had dropped by a further 1%, and so on: 98.01%, then 97.0299%, etc.
This meant that the system would continue outputting images over longer and longer timeframes (in theory forever, but in practice until it hit a wall or I got bored of waiting).
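The snapshot schedule above is a simple geometric decay: each threshold is 99% of the previous one. Here's a minimal sketch of that logic (my own reconstruction for illustration, not the actual evolver code):

```python
# Each snapshot threshold is 99% of the previous remaining error,
# so the thresholds form a geometric series: 0.99, 0.9801, 0.970299, ...

def snapshot_thresholds(initial_error, n):
    """Return the first n error thresholds at which a screenshot is taken."""
    thresholds = []
    remaining = initial_error
    for _ in range(n):
        remaining *= 0.99          # remaining error shrinks by 1% per milestone
        thresholds.append(remaining)
    return thresholds

def should_snapshot(current_error, next_threshold):
    """True once the evolver's error has crossed the next milestone."""
    return current_error <= next_threshold
```

Because the thresholds never reach zero, the gaps between snapshots grow ever longer in wall-clock terms, which is why the run could in principle keep emitting images forever.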
Here’s the result of 1.5 million generations of evolution and 2000 triangles, over about 10 hours of processing:
It’s lovely to see the dynamics of the early triangles jostling for position. If you follow a single triangle over the whole evolution period, you can see how it responds to the introduction of new elements, sometimes remaining stable in a given position for a few seconds, then adjusting as it interacts with other triangles.
I think evolution looks so methodical at the end (with all the sudden introduction of little squiggly beard bits*) due to the recently introduced technique of starting new triangles with colouration based on the background image. By the time you get to that late stage, large, randomly-coloured triangles aren’t going to cut it. My intuition is that starting with randomly-coloured vertices would just make evolution a lot slower, waiting for luck and happenstance. I shall test that theory tonight!
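The post only says the colouration is "based on the background image", so the exact scheme is unknown; one plausible way to do it is to sample the target image's pixel at the new triangle's centroid. A hypothetical sketch, with the image represented as rows of RGB tuples:

```python
# Hypothetical colour-seeding scheme: give a newly introduced triangle
# the target image's colour at its centroid, instead of a random colour.
# (The actual evolver's method isn't described in the post.)

def centroid(vertices):
    """Centroid of a triangle given as three (x, y) vertices."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    return (sum(xs) / 3, sum(ys) / 3)

def seed_colour(target, vertices):
    """Sample the target image's pixel colour at the triangle's centroid.

    `target` is a list of rows, each row a list of (r, g, b) tuples.
    """
    cx, cy = centroid(vertices)
    return target[int(cy)][int(cx)]
```

Seeding this way means a late-stage triangle starts out already blending with its surroundings, so evolution only has to refine its shape and alpha rather than stumble onto a plausible colour by chance.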