Genetic Algorithms in Ruby, Part I


Charles Darwin gave us the theory behind “survival of the fittest” (though the term itself was coined by Herbert Spencer): the organisms that are fittest and quickest to adapt to a given environment will prevail over the others. This makes a lot of sense; stronger organisms are more likely to survive, and therefore more likely to pass on their genetic code. What makes this interesting is that occasional mutations occur in the chromosomes of organisms. So, a mutation might produce an elephant with much better vision than the others, which would then be likely to survive longer, since it could avoid lions better than the rest, and pass on its special trait. This means that if we wanted to “optimize” the strength of a certain species of animal, we could probably just wait around for a couple million years while mutations occur and the fittest win out over the rest.

Computer scientists and mathematicians of the 1950s started to borrow ideas from biology and bring them to computer science. One of those ideas was genetic algorithms. With genetic algorithms (GAs), we replicate the process of evolution on a computer in order to optimize a function or perform a search. In plain English, we define a rule that tells us when one “organism” is better than another, mutate the organisms slightly, and have the fittest reproduce, so that (after some number of cycles) we end up with an optimal “organism”. I say “organism” (in quotes) because there aren’t really any organisms; we’re just using a simplified model of one. So, it turns out that genetic algorithms enable a computer to make “smart” decisions for optimization or searching (or a host of other applications).

If you didn’t really understand much of the above, that’s perfectly fine. We’ll be writing our own genetic algorithm code (in Ruby, of course) to solve an optimization problem, and every step of the way will be explained with much more depth than the above.

Details

Let’s start with an example problem, from which we will develop the idea of genetic algorithms.

Suppose we have a machine which, when given a binary number four bits long (i.e. it has four binary digits), performs an XOR operation on it with the number 1010 and outputs the result. For example, if the input is 1100, the output would be 0110 (or, in decimal, 6).
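You can check that example directly in Ruby, since Ruby’s ^ operator performs a bitwise XOR on integers:

    input  = 0b1100          # the 4-bit input
    output = input ^ 0b1010  # the machine XORs it with 1010
    puts output              # prints 6
    puts output.to_s(2)      # prints "110", i.e. 0110 as a four-bit number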

We want to find out which input makes the output of this machine as large as possible.

When using genetic algorithms, we follow some steps:

  1. An organism is represented by a chromosome, which is simply a string
  2. We randomly generate a bunch of organisms (i.e. strings)
  3. We determine the fitness of each organism by passing its string through the “fitness function” (i.e. the rule that the machine follows; in our case, the XOR operation) and recording the return value
  4. We assign a probability to each organism based on its fitness (higher fitness means higher probability). We then pick organisms to “reproduce”, and the organisms that aren’t picked die out
  5. The surviving organisms find mates, with which their strings are “crossed over”
  6. We perform many iterations of this cycle.

Again, don’t worry if that went by too quickly. We’ll walk through each step in detail.

So, our organisms in this case are very easily represented; they’re just bit strings. We’ll have a binary number 4 bits long; something like 1011, which represents the input of the machine.

Then, we have to randomly generate these bit strings, which is quite easily done.
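For instance (just a sketch of the idea), a random four-bit string can be built in one line:

    # rand(2) returns 0 or 1, so this builds a random bit string of the given length
    length = 4
    bitstring = Array.new(length) { rand(2) }.join
    puts bitstring   # e.g. "1011"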

We take all of the randomly generated bit strings (called the organism/chromosome pool) and measure their fitnesses by passing them through the fitness function, which is the XOR operation.

You might be wondering why we don’t just build a pool large enough to encompass all of the possible inputs. Then, we could plug each one into the fitness function and pick the organism with the highest fitness; there would be just 16 bit strings in total (since we’re considering four bits). But suppose that, instead of four bits, we were considering 40 bits. A pool of all the possible bit strings would then contain 2^40 = 1,099,511,627,776 strings; that’s a lot! In short, we don’t consider all possible inputs because of efficiency, and, in some cases, there may be infinitely many of them. In fact, the very point of using genetic algorithms is to make optimization more efficient; if efficiency isn’t a concern, brute-force methods are much easier to implement.

Back to the algorithm. Probabilities are assigned based on fitness, and a few organisms are picked to be “cloned”, i.e. a copy of each chosen chromosome is inserted into the chromosome pool.

Then, to produce new chromosomes (based on the old ones), chromosomes are mated with one another, with parts of their strings being interchanged.

So, that’s the basic idea of genetic algorithms. Don’t expect to understand all of it just yet, but a general idea should be starting to form. Just keep reading; it’ll all fall together.

Organism

The basic building block of a genetic algorithm is the organism, so it will be beneficial for us to define a class around it.

[gist id=”3146253″]

That gives us a data structure onto which we can attach methods, so that this component of the GA code will be entirely reusable. The constructor generates a random bit string of a given length when one isn’t provided, since that’s something we’ll be doing a lot.

If you test this class’s constructor out with arguments like 4 for length and 100 for so_many, it spits out a gigantic list of binary numbers (if you choose to print the bitstring of each Chromosome object).
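In case you’d like to follow along without the gist handy, here’s a minimal sketch of a Chromosome class along those lines (the generate class method name is just a placeholder; the original gist may build the pool inside the constructor instead):

    class Chromosome
      attr_accessor :bitstring

      # Generate a random bit string of the given length when one isn't provided
      def initialize(length, bitstring = nil)
        @bitstring = bitstring || Array.new(length) { rand(2) }.join
      end

      # Build a pool of so_many random chromosomes of the given length
      def self.generate(length, so_many)
        Array.new(so_many) { Chromosome.new(length) }
      end
    end

    # 100 random 4-bit chromosomes scrolling down the screen
    Chromosome.generate(4, 100).each { |c| puts c.bitstring }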

I don’t know about you, but I feel pretty cool when I have binary numbers scrolling down my screen!

Fitness

Now, we need to get our fitness function down; that’s what everything else hinges on. We XOR the input with 1010 and return the result.

Here it is in our beloved Ruby:

[gist id=”3055735″]

If you test it out with 1100, you should get a result of 6 (0110 in binary), which is correct.
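If you’re working from the sketch above, a fitness method in that spirit might look like this (assuming the bit string is treated as a binary integer):

    class Chromosome
      # Fitness: the machine's output, i.e. the chromosome's value XORed with 1010
      def fitness
        @bitstring.to_i(2) ^ 0b1010
      end
    end

    puts Chromosome.new(4, "1100").fitness   # prints 6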

Weighted Randoms

A special challenge we have to deal with is “weighted” random selection: we have to assign a probability to each organism and then pick one based on those probabilities.

This is often called roulette wheel selection, since it resembles a roulette wheel whose portions are of different sizes. It isn’t the fastest way to do it, but it’s one of the simplest and easiest to implement.

We can do this with an array in which a portion is reserved for each chromosome, its size depending on the fitness of that chromosome. Here’s the implementation added to our Chromosome class:

[gist id=”3146263″]

We’re doing the process in two parts: one in which the roulette “wheel” is set up, and another in which a random pick is made from the wheel. Of course, the wheel should be rebuilt just before it is used.
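As a rough sketch of that two-step idea (the run_roulette and pick method names here are just placeholders, not necessarily the ones used in the gist):

    class Chromosome
      # Step 1: build the "wheel" -- each chromosome gets a number of
      # slots proportional to its fitness
      def self.run_roulette(pool)
        @@roulette = []
        pool.each do |chromosome|
          chromosome.fitness.times { @@roulette << chromosome }
        end
      end

      # Step 2: spin the wheel -- pick a random slot, so fitter
      # chromosomes are more likely to be chosen
      def self.pick
        @@roulette.sample
      end
    end

    pool = Chromosome.generate(4, 100)
    Chromosome.run_roulette(pool)
    puts Chromosome.pick.bitstring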

In Part II of this article, I will finish our study by focusing on bits like reproduction, finding mates, and “crossing” our chromosomes.

