Halloween Themed Math Puzzles


Happy Halloween From The Muse Garden!

In the tradition of my Holiday Math Puzzles, I’m here with an appropriately themed puzzle for this time of year.

Candy Distribution


It’s that time of year all right. You’re out and about, trick-or-treating with your friends or family, and when you come home, you decide to dump all the candy out on the floor to sort through it. But then, as siblings often do, you begin to bicker about who has “more” than the other. In fact, there are some candies that you really like, and some you don’t. You’d rather have a bunch of chocolate than a bunch of peppermints, for instance. But wait a minute! Of course you don’t like the same things: your sibling actually likes peppermints!

Here’s a table of the different candies you have. Each candy has a “value” to it; that is, how much you “want” it. Try to split up the candies so that you and your sibling each end up with totals as equal in value as possible. And no fighting!

Candy              | Quantity | Your Value (per piece) | Sibling’s Value (per piece)
Candy Corn         |      150 |                     25 |                          50
Peppermints        |       50 |                      5 |                          50
Peanut Butter Cups |       10 |                    100 |                          75
Hershey Bars       |       25 |                     50 |                          10
Kit Kat            |       20 |                     75 |                          30
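
If you’d like to check your answer by computer, here is a minimal Python sketch. The objective (minimize the absolute difference between your total perceived value and your sibling’s) is my reading of the puzzle, not something stated in a rulebook:

```python
candies = [
    # (name, quantity, your value per piece, sibling's value per piece)
    ("Candy Corn",         150,  25, 50),
    ("Peppermints",         50,   5, 50),
    ("Peanut Butter Cups",  10, 100, 75),
    ("Hershey Bars",        25,  50, 10),
    ("Kit Kat",             20,  75, 30),
]

# Keeping k pieces of a candy contributes k*yours - (q-k)*sib to the
# difference (your total minus sibling's total), so we only need to track
# reachable differences -- far cheaper than the ~46 million raw splits.
best_by_diff = {0: []}  # difference -> how many of each candy you keep
for name, q, yours, sib in candies:
    nxt = {}
    for d, kept in best_by_diff.items():
        for k in range(q + 1):
            nd = d + k * (yours + sib) - q * sib
            if nd not in nxt:
                nxt[nd] = kept + [k]
    best_by_diff = nxt

diff = min(best_by_diff, key=abs)
for k, (name, q, _, _) in zip(best_by_diff[diff], candies):
    print(f"You keep {k} of {q} {name}")
print(f"Difference in perceived value: {diff}")
```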

Chopsticks Game – A Combinatorial Challenge


So I don’t know if anyone else is familiar with this game, but I remember playing it with friends in middle school, and it occurred to me the other day that it would be an interesting game to analyze combinatorially, and perhaps to write a game-playing algorithm for. The game is described in more detail here: http://en.wikipedia.org/wiki/Chopsticks_(hand_game)

Players: 2+.

Rules of Play: Each player begins with two “piles” of points, and each pile starts with 1 point. We used fingers to represent this: one finger raised on each hand.

On each turn, a player can choose to do one of two things:

  1. Send points from one of the player’s piles to one of the opponent’s piles. So if Player 1 sends 1 point to Player 2’s left pile, then Player 2 now has 2 points in his left pile while Player 1 still has 1 point in his own left pile. Player 1 does not lose points; they are simply “cloned” over to the opponent’s pile.
  2. If the player has an even number of points in one pile and zero points in the other pile, the player may elect to split his points evenly between the two piles. This consumes the player’s turn. Example: if Player 1 has (0, 4), then he can use his turn to split his points, giving him (2, 2).

If a pile reaches exactly 5 points, that pile loses all of its points and reverts to 0. If the points applied would go over 5 (such as adding 2 points to a 4-point pile), then only the remainder is kept: 4 + 2 = 6, which becomes 1. In other words, the receiving pile is taken mod 5.

If a player gets to 0 points in both their piles, then they lose. The last person that has points remaining wins.
============================
Okay, so let’s break this down. Here’s an example game for those of you who are more visually oriented (follow the turns by reading left to right; moves are marked with red arrows):

A few things to point out about this game:

  • On turn 4, player 2 adds 3 points to player 1’s 2 points, making 5. The rules state that any pile with exactly 5 points reverts to zero.
  • On turn 7, player 2 adds 3 points to 3 points. 3+3=6 as we all know, but 6 \equiv 1 \mod 5, so player 1 now has one point in his pile.
  • On turn 13, player 2 decides to split his points, turning his one pile of 4 into two piles of 2. This consumes his turn.
  • On turn 15, player 2 adds 2 points to player 1’s 3, thus reverting his pile to zero. Player 1 now has no more points to play with, so player 2 wins.

We can think about this game as a combinatorial problem. What are the optimal positions to play? How would one program a computer to play this game? I plan to create an interactive web game where players can try this for themselves.
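
As a starting point for such an algorithm, here is a minimal Python sketch of the move generator implied by the rules above. Whether a dead (zero) pile can be attacked varies by house rules; it is disallowed here, matching the common convention:

```python
def successors(state, player):
    """All states `player` can move to. A state is ((l1, r1), (l2, r2)),
    one (left, right) pair of piles per player; piles wrap around mod 5."""
    opp = 1 - player
    mine, theirs = state[player], state[opp]
    out = set()
    # Move 1: "clone" one of my live piles onto one of the opponent's piles.
    for src in mine:
        if src == 0:
            continue
        for j in (0, 1):
            if theirs[j] == 0:
                continue  # assumption: dead piles can't be attacked (house rules vary)
            new = list(theirs)
            new[j] = (theirs[j] + src) % 5  # exactly 5 reverts the pile to 0
            nxt = [None, None]
            nxt[player], nxt[opp] = mine, tuple(new)
            out.add(tuple(nxt))
    # Move 2: split an even pile across an empty one (consumes the turn).
    a, b = mine
    if (a == 0) != (b == 0) and (a + b) % 2 == 0:
        half = (a + b) // 2
        nxt = [None, None]
        nxt[player], nxt[opp] = (half, half), theirs
        out.add(tuple(nxt))
    return out

def has_lost(state, player):
    return state[player] == (0, 0)

# From the opening position, player 0 has two moves (tap either of
# player 1's piles): ((1, 1), (2, 1)) and ((1, 1), (1, 2)).
print(successors(((1, 1), (1, 1)), 0))
```

With a successor function like this in hand, a game-playing program is just a search over the (small) state space, for example with minimax.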

A Genetic Algorithm for Computing Ramsey Numbers: Update


All 78 possible friends-strangers graphs with 6 nodes. For each graph, the red/blue nodes show a sample triplet of mutual friends/strangers.

In my last post on this topic, I discussed how I was working on a genetic algorithm to search mathematical graphs for elusive properties called Ramsey Numbers. (For a refresher on genetic algorithms, visit here, and for a refresher on Ramsey Numbers, visit here). I’ve been doing some work on it since then (check out the code here), and I thought I would describe some improvements and further progress I’ve made in this area.

New features:

  • colorings dumped to a file at the end of each run
  • ability to load data sets in from a file, so each run can further refine previous data rather than starting from scratch

The next problem I ran up against while working through this was that even if I am able to load in previously analyzed data, I still only have one fitness function that checks a static set of edges. As I see it, there are two ways to solve this:

  • Make the current fitness function dynamic; that is, have it test a different set of edges every time. This works against the program’s purpose of “eliminating” certain sets of edges in each “round”, but it would be easier to maintain than the other option, which is to
  • Make a “FitnessHandler” method that takes a value indicating which fitness method to run, and uses that to determine which set of edges to test. However, this would lead to a lot of extra code and overhead. I’m thinking of having a static variable at the beginning of each run indicating which “fitness method” to start on, so that it doesn’t have to start from round one each time.

I haven’t fully decided which of these I will go with. I feel like the second one better fulfills my purpose of methodically “weeding out” the improbable graphs, but it’s going to be a lot of extra work. Oh well, nothing worthwhile ever came easy…

Leave a note here or on my github if you have suggestions!

Intractable Problems — Part Two: Data Storage

This post continues my series on intractable problems. In this installment, I will talk about problems relating to Data Storage. As a refresher, remember that an intractable problem is one that is very computationally complex and very difficult to solve using a computer without some sort of novel thinking. I will discuss two famous problems related to Data Storage below, as well as provide a few examples and references.

Part Two — Data Storage


Knapsack — given a set of items with weights w and values v, and a knapsack with capacity C, maximize the value of the items in the knapsack without going over capacity.

To start with, here is a small example that I referred to in this previous post. If you’re trying to place ornaments on a tree, you want to get as much coverage as possible. However, the tree can only hold so much weight before it falls over. What is the best way to pick ornaments and decorations such that you cover as much of the tree as possible without it falling over?

I actually came across a really interesting example of this problem in this video (watch the first few minutes). The professor sets up a situation of Indiana Jones trying to grab treasures from a temple before it collapses: he wants to take the most valuable treasures he can, but he can only carry so much.

Class: There are several different types of knapsack problems, but the most common (the one discussed above) is the one-dimensional 0/1 knapsack. The decision problem (can we reach a value V without exceeding weight W?) is NP-Complete, while the optimization problem (what is the most value we can carry without exceeding the capacity?) is NP-Hard.
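
To make the optimization version concrete, here is a sketch of the classic dynamic-programming solution for the 0/1 variant, with weights and values invented for illustration:

```python
def knapsack(items, capacity):
    """Maximum value achievable within the capacity.
    items: list of (weight, value); each item used at most once."""
    best = [0] * (capacity + 1)
    for weight, value in items:
        # Iterate capacities downward so each item is counted at most once.
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

# Hypothetical items: (weight, value) pairs.
print(knapsack([(2, 3), (3, 4), (4, 5), (5, 8)], capacity=8))  # 12: take weights 3 and 5
```

Note that this runs in O(nC) time, which is pseudo-polynomial: the capacity C can be exponential in the number of bits needed to write it down, so this doesn’t contradict the problem’s NP-Completeness.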

References: 

Wikipedia Page – General discussion of the Knapsack problem, different types, complexity, and a high level view of several algorithms for solving

Coursera Course on Discrete Optimization – The source for the above video and a great discussion of not only Knapsack but quite a few of these problems

Knapsack Problem at Rosetta Code – a good example data set and a variety of implementations in different languages

============================


Bin Packing — given a set of items with weights w and a supply of bins with capacity c each, place the items into the bins such that the minimum number of bins is used.

This problem is very similar to the knapsack problem found above, but this time we don’t care how much the items are worth. We just want to pack them into as little space as possible. Solving this problem is invaluable for things like shipping and logistics; obviously, companies want to be able to ship more with less space.

A more commonplace example: think of this problem as Tetris in real life. Say you’re moving to a new place, and you have your car and your friend’s car in which to move things. How can you place all the items in the cars such that you use the space most efficiently?

Some progress has been made on reasonably large data sets by using what is called the “first fit decreasing” algorithm. “First fit” means that you pick up an item and place it into the first bin it will fit in; if it can’t fit in any of the current bins, you make a new bin for it. “Decreasing” means that before you start placing items, you sort them all from biggest to smallest. You probably do this in your everyday life: if you want to pack a box, you start with the big items, right? No need to put lots of small items on the bottom. By getting the big items out of the way first, you can be more flexible with the remaining space, because the items left over are smaller.
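
Here is a minimal sketch of first fit decreasing, with made-up weights:

```python
def first_fit_decreasing(weights, capacity):
    """Place each item (largest first) into the first bin with room,
    opening a new bin when nothing fits."""
    bins = []
    for w in sorted(weights, reverse=True):
        for b in bins:
            if sum(b) + w <= capacity:
                b.append(w)
                break
        else:  # no existing bin had room
            bins.append([w])
    return bins

# Hypothetical example weights:
print(first_fit_decreasing([5, 7, 5, 2, 4, 2, 5], capacity=10))
# [[7, 2], [5, 5], [5, 4], [2]] -- 4 bins
```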

Class: This problem is NP-Hard.

References:

Wikipedia Page – a high level description of the problem

First Fit Decreasing Paper (pdf) – this is a technical paper describing computational bounds for using the first fit decreasing algorithm. Not for beginners.

3D Bin Packing Simulation – looks like a resource for companies to use to pack boxes and such

============================

I hope this provided a little taste of why these problems are so important. If you know any other good resources please let me know.

Intractable Problems — Part One: Set Problems

My professor and advisor Dr. Alice McRae provided a list of intractable problems for us to ponder in our genetic algorithms class, and I thought I would expand on some of them here for reference. All of these problems are intractable, which means that they are very, very difficult to solve precisely with a computer.

Most of the problems on this list are what are known as NP-Complete problems. If the complexity classes P and NP are not equal (as is widely believed by many researchers, but not proven), then NP-Complete problems cannot be solved by a computer in a reasonable time frame. In theory, with an infinite amount of time we could produce answers to these problems, but time and computing power are finite, no matter how many technological advances we make.

We have seen some of these difficult problems before in previous posts: Genetic Algorithms for Ramsey Theory and The Travelling Santa Problem, as well as Introduction to Genetic Algorithms all have good examples of these types of problems.

I will be presenting these problems in multiple parts, with my comments and references on each one.

Part One — Set Problems

Maximum 3-Dimensional Matching — given a set S of ordered triples of the form (x,y,z), find the largest possible subset of the triples such that no two triples in the subset share the same x, y, or z coordinate.

Here is a small example: Consider the set \{(1,4,5),(3,4,9),(6,7,8),(1,2,5)\}. We need to find the largest subset we can such that no x, y, or z value is repeated. In this example, the set \{(3,4,9),(6,7,8),(1,2,5)\} would be a solution: it contains 3 triples, and no coordinate repeats within a position. As you can imagine, once the set gets larger, this problem becomes much more difficult.

In layman’s terms, consider a group of boys, girls, and pets. We want to make happy “triplets” of girls, boys, and pets, but no girl, boy or pet can be in more than one group. What is the best way to match these people and pets up so that we have the largest number of groups?
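
For toy instances like the one above, a brute-force search makes the definition concrete (the running time is exponential, which is exactly why the problem is hard at scale):

```python
from itertools import combinations

def max_3d_matching(triples):
    """Try subsets from largest to smallest; return the first valid matching.
    Exponential time -- fine for toy inputs, hopeless at scale."""
    for k in range(len(triples), 0, -1):
        for subset in combinations(triples, k):
            xs, ys, zs = zip(*subset)
            if len(set(xs)) == len(set(ys)) == len(set(zs)) == k:
                return list(subset)
    return []

print(max_3d_matching([(1, 4, 5), (3, 4, 9), (6, 7, 8), (1, 2, 5)]))
# [(3, 4, 9), (6, 7, 8), (1, 2, 5)] -- the solution from the example above
```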

Class: The decision version of 3-Dimensional Matching (is there a matching of size k?) is NP-Complete. The optimization problem (finding the largest matching) is NP-Hard. [1]

References: 

More NP-Completeness Results (pdf) – lecture notes from CMU; a good explanation of 3DM as well as some other problems, with a proof of the class of 3DM.

NP-Complete Problems (pdf, pg 267)

============================

Subset Sum/Subset Product — given a set S of integers and a goal sum/product P, find a subset of S that sums/multiplies as close as possible to the goal P.

This one doesn’t sound as hard, but at larger quantities, this can become very difficult.

Example: Consider the set S = \{ n |1\leq n \leq 100 \} and a goal sum G = 531. Now we need to find a set of numbers between 1 and 100 that we can add together to get 531.

A practical application: say you’re a kid and your parents gave you $10 of allowance for the week. Naturally, you want to get as much out of that $10 as you can. What is the best combination of things you can buy so that the total comes as close to $10 as possible?
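
Here is a sketch of the standard dynamic-programming approach, targeting the closest sum that does not exceed the goal (one reading of “as close as possible”). The table grows with the goal, which is why this is only pseudo-polynomial:

```python
def closest_subset_sum(numbers, goal):
    """Closest achievable sum <= goal, plus one subset achieving it."""
    reachable = {0: []}  # sum -> a subset producing it
    for x in numbers:
        # Snapshot the items so each number is used at most once.
        for s, subset in list(reachable.items()):
            t = s + x
            if t <= goal and t not in reachable:
                reachable[t] = subset + [x]
    best = max(reachable)
    return best, reachable[best]

# The example above: the numbers 1..100 with goal sum 531.
total, subset = closest_subset_sum(range(1, 101), 531)
print(total, subset)  # 531 is hit exactly; many subsets work
```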

Class: Subset sum and subset product are NP-Complete. Proof can be found at [2]

References:

Subset Sum NP-Completeness (pdf) – scroll down a bit to see the proof that subset sum is NP-Complete

Dynamic Programming Subset Sum — A description of a dynamic programming technique for this problem

SubsetSum@Home — A distributed crowdsourced BOINC-type initiative to solve subset sum problems

An Improved Low-Density Subset Sum Algorithm — a paper concerning algorithms to solve this problem

============================

Minimum Set Covering — given a set S and a collection C of subsets of S, find the smallest number of subsets from C such that every element of S is covered.

Okay, so let’s break this down. We have a set S, let’s say for example S=\{1,2,3,4,5\}. Now we have a collection C of subsets of S; for our example, C = \{\{1\},\{2,5\},\{3,4\},\{2,3,4\},\{5\}\}. In this case, the smallest collection we can choose that includes all the elements of S is \{\{1\},\{2,3,4\},\{5\}\}, which contains 3 subsets.

Practical example: You’re a kid again, looking at a group of video games that are on sale. However, they all come in combo packs. How can you get all the games you want while spending as little money as possible? In other words, what is the smallest number of combo packs you need to buy in order to get all of the games you want?
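
The classic greedy heuristic (repeatedly take whichever subset covers the most uncovered elements), the subject of one of the references below, is easy to sketch:

```python
def greedy_set_cover(universe, subsets):
    """Repeatedly take the subset covering the most uncovered elements.
    A ln(n)-approximation -- not guaranteed optimal."""
    universe, covered, chosen = set(universe), set(), []
    while covered != universe:
        best = max(subsets, key=lambda s: len(set(s) - covered))
        if not set(best) - covered:
            return None  # the subsets can't cover the universe
        chosen.append(best)
        covered |= set(best)
    return chosen

# The example above:
S = {1, 2, 3, 4, 5}
C = [{1}, {2, 5}, {3, 4}, {2, 3, 4}, {5}]
print(greedy_set_cover(S, C))  # [{2, 3, 4}, {1}, {2, 5}] -- 3 subsets, matching the example
```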

Class: The decision problem (is there a cover using at most k of the subsets?) is NP-Complete. The optimization problem (finding the smallest such cover) is NP-Hard. See [3].

References:

A Probabilistic Heuristic for a Computationally Difficult Set Covering Problem – (pdf) Journal article on this topic detailing a heuristic for finding set coverings

A Genetic Algorithm for the Set Covering Problem – (pdf) Journal article about a genetic programming approach to set cover

A Greedy Heuristic for the Set-Covering Problem – (pdf) Yet another approach

Set Cover Problem — Wikipedia

An Example: Set Cover – (pdf) scroll down to see a proof that set cover is NP-Complete

============================

These are only a few intractable set problems, but there are many more variations of these out there. Stay tuned for the next segment in this series, problems about Data Storage (Bin Packing and Knapsack).

A Genetic Algorithm Approach to Ramsey Theory

Background and Introduction

Ramsey Theory is the study of a family of combinatorial problems proposed by Frank Ramsey in 1930. The version of his problem as applied to graphs asks, “what is the smallest complete graph such that any two-coloring of its edges is guaranteed to contain a clique of a given size in a given color?” This is written R(x,y), where x and y are the sizes of the cliques to find in red and blue, respectively.

R(3,3) asks for the smallest complete graph in which we are guaranteed either a red 3-clique (3 vertices all connected to one another, i.e., a complete graph on 3 vertices) or a blue 3-clique. This has been shown to be 6. We can show this by exhibiting a coloring of the complete graph on 5 vertices (K_5) that contains no red or blue 3-clique, and then showing that every coloring of K_6 must necessarily contain a red or a blue triangle. This can be verified computationally or combinatorially. (See the nice example here: Theorem on friends and strangers)
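
The computational check is cheap, since K_6 has only 2^{15} = 32768 two-colorings. Here is a minimal brute-force sketch:

```python
from itertools import combinations, product

def has_mono_triangle(n, color):
    """color maps each edge (u, v), u < v, to 0 (blue) or 1 (red)."""
    return any(color[(a, b)] == color[(a, c)] == color[(b, c)]
               for a, b, c in combinations(range(n), 3))

def every_coloring_has_triangle(n):
    """Check all 2-colorings of K_n for a monochromatic triangle."""
    edges = list(combinations(range(n), 2))
    return all(has_mono_triangle(n, dict(zip(edges, bits)))
               for bits in product((0, 1), repeat=len(edges)))

print(every_coloring_has_triangle(5))  # False: K_5 has triangle-free colorings
print(every_coloring_has_triangle(6))  # True: hence R(3,3) = 6
```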

While the problem is relatively easy to solve for a small value like R(3,3), the complexity of the problem increases greatly when considering larger clique sizes. For instance, R(4,4) is 18, and we only know that the value of R(5,5) is somewhere between 43 and 49 inclusive. Because the number of cases one must check increases exponentially with each increase in clique size, it becomes impractical for traditional computing very quickly.

Therefore, we will utilize a genetic algorithm in an attempt to verify and perhaps improve the lower bound of R(5,5). To do this, the program must complete the following:

  • Use the genetic algorithm to test for graphs that have low numbers of cliques

  • Perform exhaustive testing on these graphs, and if we can find even one coloring where a clique does not exist, we will have shown that R(5,5) > 43.


Implementation

My algorithm is implemented in Java. Graphs are represented using an adjacency matrix, and a coloring of the graph (ColorMatrix) is a 2D boolean array. In this scheme, coloring[x][y]==0 represents a blue edge from x to y and coloring[x][y]==1 represents a red edge. The colorings are symmetric; that is, coloring[x][y] == coloring[y][x]. These colorings form the basis of my Chromosome class, which the genetic algorithm uses to represent a “piece” of information that can be mated with other chromosomes, mutated, and scored based on the number of cliques found. A population stores multiple chromosomes.
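
The project’s code is Java, but the representation is easy to re-express in Python as a sketch (the population size below is arbitrary, not taken from the actual code):

```python
import random

N = 43  # vertices of K_43, the graph relevant to the R(5,5) lower bound

def random_coloring(n=N):
    """A symmetric 0/1 edge coloring: 0 = blue, 1 = red."""
    c = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(x + 1, n):
            c[x][y] = c[y][x] = random.randint(0, 1)
    return c

population = [random_coloring() for _ in range(50)]  # step 1 of the algorithm below
```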

My basic algorithm works as follows:

  1. Create a population of random colorings of K_{43}
  2. Check a set of edges for 5-cliques (how these sets are determined will be discussed later). Assign a fitness score based on the number of cliques found

  3. Crossover/mutate the chromosomes

  4. Run this many times until every member of the population has fitness 0 (no cliques found in that set of edges)

  5. Take these “possible zeroes” and begin to test more edges to find cliques

    1. if we find any new cliques, this coloring is no longer any good to us (we are looking ultimately for a coloring with no cliques whatsoever)

    2. pare down the population to a set of graphs that have no cliques in set 1 and set 2

    3. eventually this leads to an exhaustive search but by that point there will be only a few “likely” candidates that passed every test set (this saves computation time rather than checking every clique right off the bat)


Calculating Fitness

The fitness function for our genetic algorithm checks a set of edges in the graph to see if there are any cliques. We test from “test sets” instead of checking every clique every time; the reasoning is that if a clique is found in an early test, we need not go any further. The multiple-stage fitness testing allows us to prune out “bad” data and keep the more likely solutions to our problem. The first test set is computed as follows (each 5-tuple is a set of five vertices whose connecting edges we check for a clique — that is, all the same color):

(0,1,2,3,4), (0,2,3,4,5), (0,3,4,5,6), \dots, (0,39,40,41,42), (1,2,3,4,5), (1,3,4,5,6), \dots, (38,39,40,41,42)


We use a simple method: count up iteratively through the first two items in the tuple, then use increasing consecutive numbers for the rest of the clique. This way, lots of cliques are tested, but the test set leaves out cliques that are not formed from consecutive vertices (e.g., (0,2,4,6,8) would not be tested in this list). Once the list of vertex sets to check is generated, we simply test the edges among the nodes in each set. If the colors are all the same, then we have a clique and add to our fitness.

The second test set would be applied after the population has converged to zeroes for the first time, and this time counts up by multiples of 2 (so (0, 2, 4, 6, 8) would be a viable choice in this set).
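
Here is a sketch (in Python, though the project itself is Java) of how these test sets and the resulting fitness score could be generated. A step of 1 reproduces the first set above; step=2 is one plausible reading of the second set:

```python
from itertools import combinations

def test_set(n=43, step=1):
    """5-tuples (a, b, b+step, b+2*step, b+3*step): the first two entries
    count up, the rest follow at a fixed stride."""
    return [(a, b, b + step, b + 2 * step, b + 3 * step)
            for a in range(n)
            for b in range(a + 1, n - 3 * step)]

def is_clique(coloring, vertices):
    """True when all 10 edges among the 5 vertices share one color."""
    return len({coloring[u][v] for u, v in combinations(vertices, 2)}) == 1

def fitness(coloring, tuples):
    """Number of monochromatic 5-cliques found in the tested tuples; 0 is best."""
    return sum(is_clique(coloring, t) for t in tuples)

first_set = test_set(step=1)  # begins (0,1,2,3,4), ends (38,39,40,41,42)
```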

An alternative fitness function was considered that involves inserting vertices into a binary search tree based on their coloring values. To do this, one first creates a random permutation of the vertices (values 0-42). Then, using the colorings (0 and 1 instead of positive and negative), we can insert the nodes either left or right in a binary search tree, with the first node in the list as the root. By looking at the shape of this tree, we get a visual representation of color patterns: a long “path” in one of these trees means the same color has been seen many times in a row, suggesting a clique may exist.

In the below example, the set (0,1,2,3,4) forms a long “path”, meaning that each edge is the same color.

Figure 1. Binary search tree built from a sample coloring of (0,1,2,3,4,5)

Unfortunately, this implementation is not currently functional in my program.


Parent Selection and Crossover

The backbone of any genetic algorithm is the crossover. This models the mating process in biology and is used to make the Chromosomes (solutions) improve over many generations by keeping successful traits and discarding others. In my model, parents are selected randomly from the population, but there is a small chance (about 5%) that the best member of the population is chosen as a parent instead. This provides a slight bias toward better traits while still allowing for plenty of randomness and new data.

The crossover is a simple one. Given two Chromosomes (colorings of the graph), we use a random value for each x, y in the colorMatrix to determine whether the “mom” or the “pop” contributes their gene (coloring[x][y]); there is a 50-50 chance of inserting mom(x,y) into the baby as opposed to pop(x,y). The “baby” Chromosome is thus a mixture of data from both parents, and if the new Chromosome’s fitness (number of cliques) is lower than that of the current worst member of the population, we replace that member with our new baby. This allows the population to improve over time.

Our mutation method simply flips a bit in the coloring based on a random value. The mutation rate was 0.08, or 8%.
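
Again in Python rather than the project’s Java, here is a rough sketch of the selection, crossover, and mutation just described. The 5% elitism chance and 8% mutation rate come from the text; whether the mutation rate applies per edge or per chromosome isn’t specified, so the per-edge version below is an assumption:

```python
import random

def pick_parent(population, fitness_of, elite_chance=0.05):
    """Random parent, with a small chance of taking the best member instead."""
    if random.random() < elite_chance:
        return min(population, key=fitness_of)  # fewest cliques = best
    return random.choice(population)

def crossover(mom, pop_, n=43):
    """Uniform crossover: each edge comes from either parent with equal
    probability; symmetry is preserved by setting both entries at once."""
    baby = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(x + 1, n):
            bit = mom[x][y] if random.random() < 0.5 else pop_[x][y]
            baby[x][y] = baby[y][x] = bit
    return baby

def mutate(coloring, rate=0.08, n=43):
    """Flip each edge's color with probability `rate` (assumed per-edge)."""
    for x in range(n):
        for y in range(x + 1, n):
            if random.random() < rate:
                coloring[x][y] = coloring[y][x] = 1 - coloring[x][y]
```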


Problems and Solutions

One of the things I quickly noticed when building this project was that the fitness function, although it does not brute-force every possible clique check, can still take a prohibitively long time at higher population sizes. The getFitness() function is called a bit too liberally, and caching the fitness value for later retrieval would improve efficiency in this area.

Another downfall is that since we are only testing a limited set of cliques each time, it is possible (although desirable, given the current approach) to “learn the data”, and evolve a solution that contains no cliques in the tested area, but may have cliques elsewhere. While this seems like a problem at first glance, we can use this to our advantage because having the algorithm “learn” each test data set is what allows us to prune the search space and find possible colorings that do not contain cliques. If we decided this was an undesirable behavior however, we could change the fitness function to include x random permutations of edges each time. The problem with this method is that we could never be sure that a clique certainly did not exist in a certain area over multiple rounds, as the sets tested would be randomized each time. Finally, it is possible to go with the old-fashioned approach and simply check every clique after all.


Future Work

While this program currently provides a proof of concept and implementation up to the first round of test data, there is much room for expanding this project. First of all, multiple rounds need to be completed within the algorithm, so that we can generate colorings that are more likely to have no cliques. Other things that would be useful are outputting data to a text file for further reading/analysis, and performing general optimizations throughout my code for performance.

In addition, I would really like to get the alternate fitness function implemented and working. I feel like this is an interesting direction to take the project, and it would be helpful to have a more visual representation of where cliques are occurring.

While the population initialization currently creates random colorings, it would be interesting to see what happens when we change the ratio of red edges to blue edges in the chromosomes at initialization. What if each coloring were required to contain a certain proportion of red and a certain proportion of blue? This is definitely something to look into as well.


Conclusion

Currently, we are able to generate lists of colorings that have no cliques within the first test set of edges. As we add more edge sets to test, we will produce more constrained lists. Further testing will reveal whether any of these “possible” solutions truly contain no cliques at all; for that, an exhaustive search is the only way. Although there are nearly a million 5-vertex subsets to check ({43 \choose 5} = 962598), this is time-consuming but doable, especially if we have a relatively small set of graphs to check.

Finally, the code is available at https://github.com/nelsonam/ramsey if anyone is interested. Share your comments in the box below, and if you have any questions feel free to ask!

Holiday Maths Problem #2 – Solution

So back in my previous post I discussed the problem of the Traveling Santa. He wants to deliver gifts as quickly as possible to seven different towns. At its essence, this is a Traveling Salesman Problem, a well known and very difficult problem in graph theory and combinatorial optimization. I decided to take my own crack at the problem and see if I could devise a solution.

I wrote a program to find the shortest Hamiltonian cycle through the 7 towns. I used the networkx library to set up the graph and then generated a list of all the Hamiltonian cycles. Then, using the weights on each edge, I calculated the total distance for each of the paths, which let me narrow them down to the shortest ones.
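
The original post’s town graph and edge weights aren’t reproduced here, but the core of such a program is small. Here is a sketch in the same spirit: networkx for the graph, brute force over the orderings, which is fine at this size (7 towns give only 6!/2 = 360 distinct cycles):

```python
from itertools import permutations
import networkx as nx

def shortest_hamiltonian_cycle(G):
    """Brute-force the cheapest cycle visiting every node exactly once."""
    nodes = list(G.nodes)
    start, rest = nodes[0], nodes[1:]
    best = (float("inf"), None)
    for perm in permutations(rest):
        cycle = (start, *perm, start)
        legs = list(zip(cycle, cycle[1:]))
        if all(G.has_edge(u, v) for u, v in legs):  # skip orderings with missing roads
            dist = sum(G[u][v]["weight"] for u, v in legs)
            best = min(best, (dist, cycle))
    return best

# Hypothetical usage -- the real edge weights are in the original post's code:
# G = nx.Graph()
# G.add_weighted_edges_from([(1, 2, 4), (2, 3, 5), ...])
# print(shortest_hamiltonian_cycle(G))
```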

One solution that my program found was the cycle that visits [7, 4, 2, 1, 5, 6, 3, 7] with a distance of 39. You can check out the code here, and I’d be interested to hear about your ideas or alternative implementations.