Cheng Xi Tsou

7 min read · May 18, 2021

In a recent lecture in my AI course (CS4100), I came across an interesting concept: the genetic algorithm. As described in “Artificial Intelligence: A Modern Approach” by Stuart Russell and Peter Norvig, evolutionary algorithms can be seen as variants of stochastic beam search that copy the metaphor of natural selection in biology. The textbook describes the algorithm as such: “there is a population of individuals (states), in which the fittest (highest value) individuals produce offspring (successor states) that populate the next generation, a process called recombination.” When I first heard of this concept, it reminded me of a video of a neural network learning to play Super Mario World using a genetic algorithm, something I still want to try to implement someday. The concept seemed so abstract, yet it made perfect sense: after all, you can think of an algorithm working towards a solution as evolution. In this blog post, I will apply a simple genetic algorithm to the classic 8 queens problem.

The 8 queens problem is simple. On an 8x8 chess board, a queen can move any number of squares horizontally, vertically, or diagonally. Normally, there is one queen per side on the board, but for this problem, there are 8. (This generalizes to N queens on an NxN board.) The goal is to place 8 queens on the board such that no pair of queens attacks each other. In the picture above, you can see that no queen is in the line of sight of another queen. Here’s an example of a board state that is not a solution:

The pair of queens on the 4th and 7th columns are attacking each other diagonally.

To apply the genetic algorithm framework, we need to convert a board state into a viable input. In genetic algorithms, each individual in a population is a string over a finite alphabet, similar to that of a DNA sequence. We can convert a chessboard into a string of numbers so that the index of each number is the column number and the number at each index is the row number, starting from the bottom left. The picture above would convert to 16257483. In this experiment, I represented each string as a list, so [1, 6, 2, 5, 7, 4, 8, 3].

The visual below is a rundown of how the genetic algorithm will work:

Starting off, I randomized a population of individuals with POPULATION_SIZE = 10. Then, I decided on a fitness function that would mimic the “natural selection” process of the algorithm. The simplest one is the number of non-attacking pairs of queens. A solution would score 8 choose 2 = 28, the total number of possible pairs of queens. In the code below, I define NUM_QUEENS = 8.

```python
def fitness_score(seq):
    score = 0

    for row in range(NUM_QUEENS):
        col = seq[row]

        for other_row in range(NUM_QUEENS):
            # a queen cannot pair with itself
            if other_row == row:
                continue
            if seq[other_row] == col:
                continue
            if other_row + seq[other_row] == row + col:
                continue
            if other_row - seq[other_row] == row - col:
                continue

            # score++ if the pair of queens is non-attacking
            score += 1

    # divide by 2 as pairs of queens are commutative
    return score / 2
```

Next is the selection process. We define a mixing number, ρ, which is the number of parents of an offspring. Normally, ρ = 2 (or ρ = 1 for asexual reproduction). However, as this is only a simulation, we can choose ρ > 2. One way to select parents is to choose n random individuals and have the top ρ individuals by fitness score be the parents. For this experiment, I capped the number of parents at half the population size and selected parents with probability proportional to their fitness score.
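A minimal sketch of that fitness-proportional selection step could look like this (my own illustration; the notebook’s implementation may differ in details):

```python
import random

# Sketch of fitness-proportional ("roulette wheel") selection. `population`
# is a list of sequences and `fitness_score` is the fitness function;
# random.choices draws with probability proportional to the given weights.
def select_parents(population, fitness_score, num_parents):
    scores = [fitness_score(seq) for seq in population]
    return random.choices(population, weights=scores, k=num_parents)
```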

Then, we have the crossover. When ρ = 2, I randomly chose a crossing point (index in the list). The first part of the first parent is crossed with the second part of the second parent to produce an offspring. The process is repeated for the second offspring. A visual below shows what happens:

For example, given the parents 1234 and 5678 with crossing point 2, the offspring produced are 1278 and 5634.

I also wanted to account for ρ > 2, so I implemented the crossover this way: generate ρ − 1 random crossover points in [0, NUM_QUEENS). Using these crossover points, we generate offspring from every permutation of ρ parents. Given x parents, there would be x choose ρ * ρ! offspring. In the code below, I defined MIXING_NUMBER = 2.

```python
import itertools
import random

def crossover(parents):
    # random indexes to cross states with (sorted so the slices line up)
    cross_points = sorted(random.sample(range(NUM_QUEENS), MIXING_NUMBER - 1))
    offsprings = []

    # all permutations of parents
    permutations = list(itertools.permutations(parents, MIXING_NUMBER))

    for perm in permutations:
        offspring = []

        # track starting index of sublist
        start_pt = 0

        for parent_idx, cross_point in enumerate(cross_points):
            # sublist of parent to be crossed
            parent_part = perm[parent_idx][start_pt:cross_point]
            offspring.append(parent_part)

            # update index pointer
            start_pt = cross_point

        # the loop doesn't account for the last parent, so handle it here
        last_parent = perm[-1]
        parent_part = last_parent[start_pt:]
        offspring.append(parent_part)

        # flatten the list of sublists into a single sequence
        offsprings.append(list(itertools.chain(*offspring)))

    return offsprings
```

Lastly is mutation. For each number in a sequence, there is an arbitrary probability MUTATION_RATE = 0.05 of it being replaced with a random number.

```python
def mutate(seq):
    for row in range(len(seq)):
        if random.random() < MUTATION_RATE:
            seq[row] = random.randrange(NUM_QUEENS)

    return seq
```
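Putting the pieces together, the main evolution loop looks roughly like the following. This is a condensed, self-contained sketch of my own, with ρ = 2, a single crossover point, and fitness counted directly as unordered pairs; the notebook linked at the end implements the details differently:

```python
import random

NUM_QUEENS = 8
POPULATION_SIZE = 10
MUTATION_RATE = 0.05
GOAL_SCORE = 28  # 8 choose 2 non-attacking pairs

def fitness_score(seq):
    # count unordered non-attacking pairs directly
    score = 0
    for row in range(NUM_QUEENS):
        for other_row in range(row + 1, NUM_QUEENS):
            if seq[other_row] == seq[row]:
                continue
            if other_row + seq[other_row] == row + seq[row]:
                continue
            if other_row - seq[other_row] == row - seq[row]:
                continue
            score += 1
    return score

def evolve(max_generations=20_000):
    population = [[random.randrange(NUM_QUEENS) for _ in range(NUM_QUEENS)]
                  for _ in range(POPULATION_SIZE)]
    for generation in range(max_generations):
        scores = [fitness_score(seq) for seq in population]
        if max(scores) == GOAL_SCORE:
            return generation, population[scores.index(max(scores))]
        # fitness-proportional selection (+1 avoids an all-zero weight total)
        weights = [s + 1 for s in scores]
        next_population = []
        for _ in range(POPULATION_SIZE):
            p1, p2 = random.choices(population, weights=weights, k=2)
            cross = random.randrange(1, NUM_QUEENS)  # single crossover point
            child = p1[:cross] + p2[cross:]
            child = [random.randrange(NUM_QUEENS) if random.random() < MUTATION_RATE
                     else gene for gene in child]
            next_population.append(child)
        population = next_population
    return max_generations, max(population, key=fitness_score)

generations, best_seq = evolve()
print(generations, best_seq)
```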

Using the functions we wrote, I tested the algorithm on the parameters POPULATION_SIZE = 10, NUM_QUEENS = 8, MIXING_NUMBER = 2, and MUTATION_RATE = 0.05. Here is the initial population from a test run:

When I was generating the random populations, I did not expect the fitness scores to be this high. The highest was a 22, only 6 away from the goal of 28. To check if I wasn’t just getting lucky, I generated 100,000 random board states and got a mean fitness score of 20.13518 and standard deviation of 2.3889, so 22 was normal.
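That empirical mean lines up with a quick back-of-the-envelope expectation (my own calculation, not from the post), assuming each column’s row is chosen uniformly and independently:

```python
# Two queens d columns apart attack with probability 1/8 (same row) plus
# 2*(8 - d)/64 (either diagonal). There are (8 - d) column pairs at
# distance d, so the expected number of attacking pairs is:
expected_attacking = sum((8 - d) * (1/8 + (8 - d)/32) for d in range(1, 8))
expected_fitness = 28 - expected_attacking
print(expected_fitness)  # 20.125, close to the empirical 20.13518
```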

After running the algorithm, the solution was found in the 162nd generation. This was unexpectedly fast, so I ran it again. This time, it took almost 3000 generations. My guess is that a lucky positive mutation or recombination along the way can accelerate the evolution.

Running the algorithm 200 times, I obtained the following stats: a mean of 3813.53 generations with a standard deviation of 27558.146. Looking at the standard deviation, obviously, something went wrong. The highest generation was 377241 and the minimum was 10. As for the 377241 outlier, I would guess that there were individuals in the population with a high fitness score but with repetitive positions on the board, resulting in low variation with recombination, so the algorithm had to rely on a mutation to get out of the deadlock.

If we look at the results between the first and third quartile, the mean and standard deviation are not too far off. When I adjusted the results with an outlier-removing function from Stack Overflow, which keeps values whose distance from the median (scaled by the median of all such distances) is less than m = 2, the stats look more promising.
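For reference, the commonly cited Stack Overflow recipe looks like this (the exact version I used may differ slightly):

```python
import numpy as np

# Median-based outlier rejection: keep values whose deviation from the
# median, scaled by the median of all deviations, is below m.
def reject_outliers(data, m=2.0):
    data = np.asarray(data, dtype=float)
    d = np.abs(data - np.median(data))
    mdev = np.median(d)
    s = d / mdev if mdev else np.zeros(len(d))
    return data[s < m]
```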

If we look at a pure brute-force method of guessing the correct solution, assuming a normal distribution when generating an individual, the probability of getting the solution is about P(solution) = 0.0005. It would take about 1386 generations for a 0.5 chance of guessing the solution, or 5990 generations for a 0.95 chance. Compared to the outlier-adjusted mean of 280 generations, the genetic algorithm is, on average, about 21 times faster. In conclusion, a genetic algorithm is a viable way to solve the 8 queens problem.
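Those generation counts follow from the geometric distribution; here is a quick check assuming the per-generation success probability p = 0.0005 quoted above:

```python
import math

# Chance of at least one success in n generations: 1 - (1 - p)**n.
# Solving 1 - (1 - p)**n = target for n gives n = log(1 - target) / log(1 - p).
p = 0.0005
for target in (0.5, 0.95):
    n = math.log(1 - target) / math.log(1 - p)
    print(f"{target}: ~{round(n)} generations")  # ~1386 and ~5990
```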

Overall, this experiment was a fun way to explore genetic algorithms. Objectively, I don’t think a genetic algorithm is the best way to solve the 8 queens problem, and I suspect that a stochastic beam search would be much more efficient. If we look at recombination in the context of mixing board states with queens placed, it seems like random offspring are produced rather than “stronger” offspring. Some other experiments I could explore further would be running the algorithm for general N queens or increasing the mixing number, which would give even more randomized offspring.

Code available at: https://github.com/chengxi600/RLStuff/blob/master/Genetic%20Algorithms/8Queens_GA.ipynb