A First Stab at Evolution

I had two goals in reimplementing Hillis’ minimal sorting networks experiment: to start carving out a generalised framework for evolutionary experiments, and to try to improve on my previous attempts from university at finding optimal sorting networks.

While it would be all well and good* to sit down and try to construct a detailed design for an evolutionary framework that would be all things to all experiments, I thought a much better idea would be to consider the general format a framework might take, then hone it by implementing a series of varied and increasingly complicated experiments with it.

For a first pass I thought I’d stick to the format of a traditional genetic algorithm. There are other flavours of evolutionary computing with varying degrees of similarity to the biological evolutionary process (hill climbing and simulated annealing being common ones), and while I’d like to incorporate those ideas into the larger system eventually, focusing on genetic algorithms first definitely makes sense.

As it might be useful to separately link to, refine and crystallise my description of a genetic algorithm, you can read all abaaaaht it here.

Conflicting cohorts

To co-evolve the minimal sorting networks, I need two populations in opposition to each other: a population of sorters and a population of unsorters. Both are represented as an ordered list of pairs of indexes, e.g. ((4,2), (5,2), (3,1), (4,5)). Each pair refers to two indices into the list being sorted or unsorted. For instance, the pair (4,2) refers to the 4th and 2nd values in the list that’s being processed. In the case of the sorters, if the 4th value is less than the 2nd value then the two values are swapped into the correct order. In the case of the unsorters, the two values are swapped regardless.
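As a minimal sketch (in Python, with 0-based indices and function names of my own choosing rather than anything from the actual code), the two kinds of network might be applied like this:

```python
# A network is just an ordered list of index pairs, e.g. [(4, 2), (5, 2), (3, 1), (4, 5)].

def apply_sorter(network, values):
    """Sorter: for each pair, swap the two values only if they're out of order."""
    values = list(values)
    for i, j in network:
        # Following the prose convention for a pair like (4, 2): if the value at the
        # first index is less than the value at the second, swap them.
        if values[i] < values[j]:
            values[i], values[j] = values[j], values[i]
    return values

def apply_unsorter(network, values):
    """Unsorter: for each pair, swap the two values unconditionally."""
    values = list(values)
    for i, j in network:
        values[i], values[j] = values[j], values[i]
    return values
```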

Each individual can then be tested to see how well it does its job. How muddled does an unsorter leave a list of numbers? How sorted is that list again once a sorter has tried to process it? I chose a fairly straightforward metric for deciding how sorted a list of numbers was: the unsortedness of a list is the total distance of every number from where it would sit if the list were sorted. For instance, if the list is four long, then the sorted list is 1,2,3,4. The unsortedness of this list is zero.

If the list is unsorted so that it looks as follows: 1,2,4,3 then the 4 is one away from where it should be, as is the 3, so the unsortedness of this list is two. If the list looks like this: 4,3,2,1 then the 4 is three away from where it should be, the 3 is one away, the 2 is one away and the 1 is three away, for a total unsortedness of eight.
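In code, that metric might look something like this (a sketch assuming the lists are permutations of distinct values; `unsortedness` is my name for it here, not necessarily what the original code called it):

```python
def unsortedness(values):
    """Total distance of each value from where it would sit if the list were sorted."""
    sorted_values = sorted(values)
    return sum(abs(i - sorted_values.index(v)) for i, v in enumerate(values))

# The examples from the text:
assert unsortedness([1, 2, 3, 4]) == 0
assert unsortedness([1, 2, 4, 3]) == 2
assert unsortedness([4, 3, 2, 1]) == 8
```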

From this we can give the networks a fitness score based on the final unsortedness of the lists they were trying to sort or unsort. The unsorting networks have the unsortedness score added to their fitness, so the more unsorted, the higher their fitness (with a minimum of zero if all the lists are perfectly sorted). The sorting networks have the unsortedness score subtracted from their fitness, so the more unsorted, the lower their fitness (with a maximum of zero, and negative otherwise). As the sorting networks are only being compared to other sorting networks to decide which ones to use as parents, it doesn’t matter that their fitnesses are negative or, at best, zero. So long as we can compare them usefully, that’s all that matters.
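Reusing the helpers sketched above, the scoring might look roughly like this (hypothetical function names; how sorters and unsorters are paired up is covered further down):

```python
def unsorter_fitness(unsorter, sorted_lists):
    # The more muddled an unsorter leaves the lists, the fitter it is.
    return sum(unsortedness(apply_unsorter(unsorter, xs)) for xs in sorted_lists)

def sorter_fitness(sorter, unsorted_lists):
    # Any unsortedness left after sorting counts against the sorter; zero is perfect.
    return -sum(unsortedness(apply_sorter(sorter, xs)) for xs in unsorted_lists)
```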

Adding the mutant gene

For mutation operators, a sorting network could add a swap, remove a swap, exchange the positions of two swaps in the network, or change the value of one of the indices of a swap (for the previous example of (4,2), this could be mutated to (4,1) or (5,2), or some other value for one or other of the indices).
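A sketch of those four operators (hypothetical, and glossing over how often each one is chosen):

```python
import random

def mutate_sorter(network, list_length):
    network = list(network)
    op = random.randrange(4)
    if op == 0:
        # Add a swap at a random position.
        pair = (random.randrange(list_length), random.randrange(list_length))
        network.insert(random.randrange(len(network) + 1), pair)
    elif op == 1 and len(network) > 1:
        # Remove a swap.
        del network[random.randrange(len(network))]
    elif op == 2 and len(network) > 1:
        # Exchange the positions of two swaps.
        a, b = random.randrange(len(network)), random.randrange(len(network))
        network[a], network[b] = network[b], network[a]
    else:
        # Change one index of a randomly chosen swap.
        k = random.randrange(len(network))
        i, j = network[k]
        if random.random() < 0.5:
            i = random.randrange(list_length)
        else:
            j = random.randrange(list_length)
        network[k] = (i, j)
    return network
```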

Unsorting networks could only change one index of a single swap. I wanted them to be of fixed length, so adding and removing swaps didn’t apply. Looking at the code now, they also didn’t exchange the positions of two swaps as a mutation operator. I imagine that’s a bug that crept in because that operator was only added to the sorting networks at a later stage. A complete-lack-of-copy-and-paste error.
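So the unsorter mutation is just the last branch of the sketch above on its own: pick a swap, pick one of its indices, and point it somewhere else.

```python
def mutate_unsorter(network, list_length):
    # Fixed length: only ever change one index of a single swap.
    network = list(network)
    k = random.randrange(len(network))
    i, j = network[k]
    if random.random() < 0.5:
        i = random.randrange(list_length)
    else:
        j = random.randrange(list_length)
    network[k] = (i, j)
    return network
```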

One final important term was added to the sorter fitness: a value between zero and one depending on the length of the network, specifically 1 / number of swaps. So if there were 100 swaps, then 0.01 was added to the fitness; if there were only 10 swaps, then 0.1 was added. This meant that shorter networks of the same efficacy were given slightly higher fitness and were thus more likely to be bred from.
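In terms of the earlier sketch, that’s one extra term on top of the sorter’s score:

```python
def sorter_fitness_with_length_bonus(sorter, unsorted_lists):
    # Shorter networks of equal efficacy score slightly higher:
    # 100 swaps adds 0.01 to the fitness, 10 swaps adds 0.1.
    return sorter_fitness(sorter, unsorted_lists) + 1.0 / len(sorter)
```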

Testing, testing, testing

After finishing writing the system and removing as many bugs as possible (though obviously not all), I started testing against lists of length 8. The best known minimal sorting network for lists of this length uses 19 swaps and, as I set the sorting networks to contain 100 swaps initially, evolution quickly found a solution that perfectly sorted the length 8 lists and started reducing the sorting networks’ number of swaps. In fact the system found solutions of 19 swaps within about a minute. This was obviously quite heartening, and I started upping the list length to 9, then 10, then 11, each time fairly quickly finding a solution equal to the best known, though taking an increasing number of minutes with each increase in list size.

However, when I got to lists of length 12, things started to perform less well. The solutions got shorter and shorter, but because the sorters were only being tested against the best unsorter, not against every permutation, the pressure to stay robust wasn’t high enough and they would often drop below the number of swaps required to solve all permutations. These experiments repeatedly resulted in solutions that, having shrunk to a small number of swaps, were of no use and never grew back to a useful size.

After trying a few ideas around testing against all permutations before allowing the sorting networks to remove a swap as part of mutation (something that didn’t feel right at the time, but seemed like a valid line of enquiry), I ended up increasing the population size of the unsorters and testing each sorter against all of the unsorters, rather than just the best one. This improved things a bit, but still didn’t stop the sorters from eventually producing malformed solutions the population couldn’t recover from. As all of the unsorters were only marginally different in their genotypes, the unsorted lists they produced tended to be only marginally different as well, so they didn’t offer a wide range of tests for the sorting networks.

Populations of populations

Finally the idea occurred to me of creating multiple, unrelated populations of unsorters and testing the sorters against the best member of each of those populations. This resulted in much more varied tests for the sorters and much more robust sorting networks: ones that would still occasionally drop a swap before they could solve every permutation of unsorted lists, but would quickly be pulled back as the unsorting networks evolved and provided more effective lists to test against.
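A sketch of that evaluation step (again with names of my own; the framework’s real structure will differ): pick the champion of each independent unsorter population, let each one scramble a sorted list, and score every sorter against all of those scrambled lists.

```python
def evaluate_sorters(sorters, unsorter_populations, sorted_list):
    # The champion of each population is the unsorter that leaves the list most muddled.
    champions = [
        max(pop, key=lambda u: unsortedness(apply_unsorter(u, sorted_list)))
        for pop in unsorter_populations
    ]
    test_lists = [apply_unsorter(champ, sorted_list) for champ in champions]
    # Each sorter is tested against every champion's output.
    return [(sorter_fitness_with_length_bonus(s, test_lists), s) for s in sorters]
```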

Using this I managed to equal the best known networks for every list length up to 17 (as stated here at least), apart from length 15, which has remained stubbornly out of reach. As the search space increases exponentially with the length of the list, by the time I reached lists of length 16 it took days to find a solution, with millions of generations and billions of individuals tested.

Sadly, after several attempts at different list lengths and leaving the system running for up to a week at one point, I failed to better any of the best known networks (it was a long shot, but I was naively hopeful). I did at least beat my university efforts, albeit with a much more powerful machine (thanks to Moore’s Law). It was at this point that I left minimal sorting networks and looked for something more interesting to evolve, sorting networks not being the sexiest subject matter with which to explain to people what the hell you’re doing, or why.

The more interesting thing I found was this experiment in evolving images from triangles. You can see my results for that next time.

*for some terrible definition of good
