## Beeminder and Self-Tracking: Five Months In

So back in January, I started using a little service called Beeminder to track my goals and encourage me to do more of the things I've been putting off. I also experimented with several other Quantified Self tools, and learned a lot about how simply tracking what we do day to day can open our eyes to things we need to improve at and things we perhaps do too much of (for me, it's instant messaging with my friends). It's been five months since I started this journey of learning about myself, and I thought I'd give an update on how it's been going.

My findings: I couldn’t be happier with the results.

I know it's easy to get started on a new diet or workout program and feel all motivated and enthusiastic at first. That's how New Year's resolutions start, after all. We get all excited about changing our lives, and then a few days or weeks later, it's back to the same old same old. When I started tracking myself and using Beeminder for my goals, I was afraid a similar thing would happen. It's just a fad, I thought. My newest obsession. It will subside in a few weeks. I'm happy to say that five months later, I am still going strong and I have made a lot of progress.

### Wait, what is all this Beeminder/Quantified Self stuff?

Glad you asked. I did a post a few months ago called Quantifying the Self that covers the nuts and bolts, but here's a quick review:

Beeminder is a web service that lets you track your goals by plotting progress points on a graph each day. You try to stay on "the yellow brick road" (that is, the goal you set for yourself, say, doing 20 pushups a day). If you fall off the road, you pledge money to recommit and get back on (you're basically betting that you'll keep up with your goal). If you fall off the road again, you pay up. Here's a picture:

As you can see, the data points are staying above the yellow line, meaning I am on track for my goal. This screenshot was taken back in February, after I had been tracking my reading habits for only a month. Let’s take a look at my graph now:

There are a lot more data points, and I've racked up over 2,000 minutes read (that's 9 books) since January! There are a few flat parts (in March I did a lot of traveling and didn't find much time to read), and because of this I actually derailed once. But I pledged my $5, and so far I've been doing well once again. I really enjoy watching the graph and the data points grow, and the Android app has been absolutely essential to my success. It doesn't get much easier than entering a few numbers into the app every day.

Of course, this is just my most successful graph. I have others that I have utterly failed at, like flossing my teeth. That one has a lot more flatlining, a product of good old akrasia. As you can see, I've derailed on it and have stayed derailed for quite some time. I think I'm about ready to pony up the cash and try again, though (for real this time!).

Some people have told me that paying the service money when you fail your goals seems cruel or manipulative on their part, but I don't see it that way at all. This is something I want to do, and if I don't follow through with it, there has to be some sort of pain associated with that to deter me from failing again. Besides, this company has done so much for me, and I use the service so much, that I'm happy to give them a few bucks here and there. The staff is great and responsive, and the website is always getting updates. They deserve it.

In addition, I've found from the pedometer app on my phone that I've walked over 600,000 steps this year! That's over 300 miles. The day-to-day walking may not seem like much, but adding it all up like that certainly has a big impact on me and motivates me to keep going and do more.

My most recent project involves taming the beast that is my email inbox. I'll admit it: I'm an email hoarder.
While other people keep their inboxes neat, tidy, and organized, I have a giant deluge of thousands of emails just sitting there, taking up space and making it impossible to find anything. Usually, if I don't find an email interesting (such as an advertisement or a newsletter) I won't even click on it, so there are hundreds of unread emails as well. This is a product of my laziness over time, and it just keeps getting worse. I've decided it's finally time to do something about it.

I set up a Beeminder goal to track my inbox size, and I'm going to either file away or delete 500 emails per week until my inbox reaches the fabled Inbox Zero. Once I reach that milestone, the challenge will be to keep it there and keep my inbox manageable instead of overflowing like before. An empty inbox means less stress, easier location of important emails, and a simple rule: if something is in the inbox, I need to deal with it right away. I feel like this will bring great gains in productivity and overall happiness.

And in the interest of transparency, my goals are all publicly viewable on my Beeminder page. Displaying your successes and failures this way makes you accountable not just to one person, but to the internet at large. There's something strangely motivational about that.

### What have I learned?

As for some of the other services, they didn't stick quite as well. But the trial period of testing out these new things definitely taught me a lot about how my mind works and how we can battle this beast of distraction and bad judgement that rears its ugly head daily. Humans aren't necessarily rational creatures by nature, but we can learn what mistakes we make and how our minds try to trick us. Then we can trick them right back. I just bought the book Thinking, Fast and Slow by Daniel Kahneman, which discusses these cognitive biases and ways to get around them at length. The result? A happier, healthier, more productive life.
### Final Thoughts

If you're curious about what self-tracking can do for you, I urge you to try tracking one simple thing in your life, something you already do, for just a week. It becomes like a game to try to "beat your best score," and it's really fulfilling to see your progress over time. I recommend this to anyone interested in achieving their goals and improving their life. It's surely changed mine.

## Introduction to Genetic Algorithms

In response to my previous article about genetic algorithms for Ramsey theory, a few readers asked me to give a bit more of an introduction to genetic algorithms. As such, here you will find a beginner's look at what a genetic algorithm is, what it is useful for, and how you can use one in your own work.

### What is a genetic algorithm (GA)?

To begin with, let's talk about what a genetic algorithm is on its most basic level. The word genetic suggests something to do with biology, and if you reach way back to your high school science classes you might remember the basic genetic process: plants and animals are born, mate with one another, and create new generations. Over time, these generations tend to emphasize certain traits essential to survival, while downplaying the weaker (recessive) traits.

A genetic algorithm works much the same way. We come up with a population of possible solutions to a problem, "mate" them together, and look at our new solutions to see if they are any better. Over time, we can create solutions that converge to better and better values. This is useful when a problem is too complex to search all possibilities. Below you will see an image describing how a simple GA works.

### Crossovers, fitness, mutation, oh my!

Before moving on, it would be useful to define some of the terms used above.

- chromosome: a proposed "solution" to the problem at hand. This is usually represented by a bitstring, that is to say, a list of 0s and 1s.
It can also hold any other information that is crucial to our problem.
- population: a collection of these "chromosomes" that we combine together to make new generations. The population represents the set of all the ideas we currently have about the problem.
- fitness: the fitness of a chromosome is a number representing how good it is. That is, the fitness represents how good this solution is at solving our problem. Glaring errors in a chromosome's design will show through in its fitness calculation, while a really good solution will earn a good fitness. For example, say you were trying to find the shortest route to get somewhere. Fitness for a problem like that would be the distance of each path you could choose. In the end, you want the path with the shortest distance (and the lowest fitness).
- mutation: the random entering of new data into the gene pool. Just as in biology, mutations sometimes occur and create things that were never intended. However, this inadvertent change can be to our advantage if we're getting stuck. A common example of a mutation is to change a small part of the chromosome, and move on.
- crossover: the "crossover function" represents the operations we perform to "mate" two (or sometimes more) of our chromosomes. There are many different crossover techniques, some better for certain situations than others, but at heart a crossover is just a way to combine the "traits" of two or more chromosomes into a new "baby" chromosome to be inserted into our population. This is the bulk of a genetic algorithm's work: as the population evolves and new generations of solutions are created, the goal is to keep around the solutions with good fitness and get rid of the chromosomes that aren't doing so well.
- termination condition: all good things must come to an end. While we could in theory leave our algorithm running forever, that would not be very helpful because, as I will discuss later in this post, solutions can't keep getting better forever. In addition, sometimes you won't actually reach the "best" solution and instead will converge on what's called a local minimum (or maximum). When this happens, it means you've found an "okay" solution, but the population got flooded with many similar chromosomes and couldn't really improve itself after that point. Think of it this way: if everyone in the world were exactly the same, would you expect anything different from their children? Common terminating conditions are:
  - finding the best solution (ideal)
  - running a preset number of generations and using that as a cutoff point
  - quitting after every member of the population falls within a certain similarity range (meaning no new or better solutions are likely to be produced)

### But why would I want to do all this?

I know this seems like a pretty complicated process. "Why not just use a computer to figure out the real solution instead of dancing around it in this complicated manner?" I hear you saying. Well, it turns out that's not always possible. Enter the Traveling Salesman Problem (TSP). While there are many problems that are still very hard to solve with computers, this is one of the best known and most studied, and it turns out that a genetic algorithm is actually a pretty good approach, much quicker than an exhaustive search of all possible paths. Remember when I used the example of finding the shortest distance to go somewhere? That's pretty much what this is. I've done a blog post on this before (see here). In the traveling salesman problem, there is a salesman who needs to visit a list of different cities on his route, but he wants to get there and back as quickly as possible.
Therefore, you need to find the shortest route that hits all the cities and returns home. This isn't always as easy as it sounds, and as the number of cities grows, so does the time it takes to find the right route. Very quickly it becomes implausible to check every possible path, so we use a genetic algorithm to help us weed out the bad ones. Another place we would use a genetic algorithm is on graph theory problems that also have a huge number of possible solutions. You can see how I applied a genetic algorithm to the problem of Ramsey numbers here.

### Okay, so how do I use this in my project?

It's pretty simple. I know, I know, it looks complicated, but once you get everything apportioned out correctly it's not that bad. Things to think about when applying a GA to a hard problem (we will use the Traveling Salesman Problem as an example here):

- First decide what your "chromosomes" will look like. These are the meat of your population, and they are what will be mutated and crossed over to create better solutions. For the TSP, we will use a sequence of numbers denoting which cities to visit and in what order. (For example, "1 5 4 2 3" describes a way you could make a circuit through 5 cities.)
- Decide how fitness will be evaluated. This is important, because members with better fitness will be the ones that stick around in your population and (hopefully) make your solutions better. In our problem, fitness refers to how far our chosen path takes us. The lower the fitness score, the shorter our travel is. We want to minimize this number with our algorithm.
- Next we need to figure out what our crossover will be. This is very important to consider, because we want something that will take parts of both "parents" and combine them in some way to make a "baby". For the traveling salesman, we can't just grab some cities from one parent and some from the other, because we run the risk of duplicates.
(We don't want a path like "1 2 2 3 4", where city 2 is visited twice and we never get to city 5.) Therefore, we have to use more sophisticated methods. I won't go into them here, but if you're interested, check out this Wikipedia page for more info on crossover techniques.

### What are the downsides?

"Every rose has its thorn," as they say, and genetic algorithms are no different. What seems like a cure-all for all our intractable problems can have hindrances as well. As I mentioned before, GAs have a problem of converging too early to a "not quite optimal" value. Avoiding this often takes many repetitions of running the program and fine-tuning things such as population size and mutation rate. Sometimes happening upon the "best" solution is a product of randomness, but in a GA we use this randomness to our advantage as much as we can.

Another inherent problem is the fact that we have to program all this framework around the problem. Using a GA is only viable when a problem is so complex that it cannot be solved more efficiently. For example, using a GA to solve a 5-city TSP would be foolish; we can run through all those possibilities very quickly. On the other hand, bumping that up to 50 or 500 cities is a much harder challenge. In addition, using a genetic algorithm means you have to find a type of chromosome, fitness function, population size, and crossover that works for you. Pick the wrong values for these, and the program will behave less than optimally. In some genetic algorithms, the fitness function for even one chromosome can take quite a while to compute, which makes a GA very slow on highly computational problems.

### Where can I learn more?

I hope this has covered the basics of genetic algorithms and interested you in learning more.
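To make the pieces above concrete — chromosome, fitness, crossover, mutation, termination — here is a minimal sketch of a GA for a toy TSP. The city coordinates and parameters are made-up values for illustration, not tuned recommendations, and the duplicate-avoiding crossover shown is a simple order crossover:

```python
import random

# Hypothetical coordinates for a toy 8-city TSP.
CITIES = [(0, 0), (1, 5), (2, 3), (5, 1), (6, 6), (7, 2), (8, 8), (3, 7)]

def fitness(tour):
    """Total round-trip distance of the tour; lower is better."""
    total = 0.0
    for i in range(len(tour)):
        (x1, y1) = CITIES[tour[i]]
        (x2, y2) = CITIES[tour[(i + 1) % len(tour)]]
        total += ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return total

def crossover(mom, pop):
    """Order crossover: copy a slice from mom, fill the rest in pop's order,
    so no city is duplicated or dropped."""
    a, b = sorted(random.sample(range(len(mom)), 2))
    child = [None] * len(mom)
    child[a:b] = mom[a:b]
    rest = [c for c in pop if c not in child]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def mutate(tour, rate=0.08):
    """Occasionally swap two cities (a small random change)."""
    if random.random() < rate:
        i, j = random.sample(range(len(tour)), 2)
        tour[i], tour[j] = tour[j], tour[i]
    return tour

def run_ga(pop_size=50, generations=200):
    population = [random.sample(range(len(CITIES)), len(CITIES))
                  for _ in range(pop_size)]
    for _ in range(generations):        # termination: fixed generation count
        population.sort(key=fitness)
        mom, dad = random.sample(population[:10], 2)  # favor the fittest
        baby = mutate(crossover(mom, dad))
        # replace the worst member only if the baby is an improvement
        if fitness(baby) < fitness(population[-1]):
            population[-1] = baby
    return min(population, key=fitness)

best = run_ga()
print(best, fitness(best))
```

The result is a (probably good, not provably optimal) circuit through all eight cities; bigger populations and more generations trade time for quality.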
If you would like to see a project I have done involving GAs, you can read A Genetic Algorithm Approach to Ramsey Theory, and for a broader discussion of the theory and applications of GAs, check out the book An Introduction to Genetic Algorithms by Melanie Mitchell. I hope you enjoyed this introduction to the wide world of genetic programming; if you have questions or suggestions please leave them in the comments below!

## Kinect Face Tracking — Results: Thesis Update #4

For background, see my three previous posts in this series.

My thesis has now been successfully completed and defended, and I am currently on break for the summer. I thought I would make a post to wrap up some loose ends and talk about some things I didn't have a chance to cover before.

In my last post, I discussed the significance of animation units to my facial tracking algorithm. The way I use these units is pretty simple, but can lead to some complex classifications. The first thing to do is to consider what the desired face would look like. Picture it in the mind's eye. Say, for example, we wanted to recognize a surprised face. How would this look? Chances are, when thinking of a surprised face, the mind visualizes raised eyebrows, wide open eyes, and a gaping mouth.

The next task (and the bulk of my algorithm) asks: how do we translate these facial movements into the points tracked on our facial mesh? We use animation units as "wrappers" for sets of points on the face plane. Instead of having to track and check multiple different places on the eyebrows or mouth, for example, the animation units allow us to track the movements of those features as a whole. Since the animation unit values lie between -1 and 1, we assign "bounds" for each expression such that if the user's features fall within that range, we can assume the user is making that expression.
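As a rough illustration, the bounds idea might look like the sketch below. The AU names and threshold values here are hypothetical placeholders chosen to match the AU semantics in Microsoft's table, not the values my application actually uses:

```python
# Sketch of classifying an expression from animation-unit (AU) values.
# AU values lie in [-1, 1]. All bound values below are hypothetical
# placeholders for illustration, not tested values.
def classify(au):
    """au: dict with keys 'jaw', 'brow', 'lip_corner', 'lip_stretch'."""
    if au['brow'] < -0.3 and au['jaw'] > 0.4:
        return 'surprised'      # raised brows plus an open mouth
    if au['lip_stretch'] < -0.5:
        return 'kissing'        # lip stretcher: -1 is fully rounded
    if au['lip_corner'] < -0.3:
        return 'smiling'        # lip corner depressor: -1 is a happy smile
    if au['lip_corner'] > 0.3:
        return 'sad'            # lip corner depressor: 1 is a sad frown
    return 'neutral'

print(classify({'jaw': 0.6, 'brow': -0.5,
                'lip_corner': 0.0, 'lip_stretch': 0.0}))  # → surprised
```

The real algorithm checks more features per expression, but the shape is the same: a cascade of range checks over the AU vector.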
These values are at present determined by extensive testing, seeing which values are frequently displayed when a certain expression is made. It would not be difficult to build a classifier for these expression bounds and use it to train the program over many different faces and expressions in order to get the most accurate data for each type of face.

In my application, we track six different types of expressions:

- surprised
- sad
- kissing
- smiling

In addition, we look for two different types of angry faces: angry with a closed mouth (glaring at someone) and angry with an open mouth (as in yelling):

- angry, closed mouth
- angry, open mouth

To see a simple flowchart detailing some preliminary bounds for each expression (not exhaustive), check out the chart here (click for larger view).

There is a bit of a lag in my application when recognizing these expressions, because the live video stream captures many frames each second, and there is a tiny delay in figuring out which expression each frame's data fits into. As such, the output of my program is still a bit inaccurate. Because it prints the detected expression every frame, there can be a buildup, and after a while it will start showing expressions at a delay. For example, if the user acts surprised, sometimes the program will not actually print "surprised" until a fraction of a second afterwards, because it's busy trying to run through the frames as they come in. A simple remedy would be to create a "buffer" of tracked points and use the average of the data over a few seconds to determine the facial expression. Because the camera is very sensitive, the data changes at the slightest movement of the face. Indeed, even trying to sit as still as possible still results in some small changes in the data.
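The buffer remedy could be as simple as a sliding window of recent frames, averaged before classification. A sketch (the window size here is a guess, not a tested value):

```python
from collections import deque

class AUBuffer:
    """Average the last n frames of AU readings to smooth out camera jitter.
    The window size is a guess: at roughly 30 frames/sec, 30 frames is
    about one second of data."""
    def __init__(self, size=30):
        self.frames = deque(maxlen=size)  # old frames fall off automatically

    def add(self, au_values):
        """au_values: a list of AU floats for one frame."""
        self.frames.append(au_values)

    def smoothed(self):
        """Per-AU average over the buffered frames."""
        n = len(self.frames)
        return [sum(frame[i] for frame in self.frames) / n
                for i in range(len(self.frames[0]))]

buf = AUBuffer(size=3)
for frame in ([0.0, 0.5], [0.5, 0.5], [1.0, 0.5]):
    buf.add(frame)
print(buf.smoothed())  # → [0.5, 0.5]
```

Classifying `buf.smoothed()` instead of each raw frame would trade a little responsiveness for much steadier output.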
Another thing I noticed that a buffer of data could help solve: when the camera loses track of the face for even a moment, it begins to spit out garbage data as it attempts to relocate the face. Overall, we have a good proof of concept of the capabilities of the Kinect Face Tracking API, and there is a lot of room for improvement in the future. Possible future additions and enhancements include:

- tracking a wider range of expressions
- a wider range of accessibility (glasses/hats, children, elderly people)
- more specific bounds for facial expressions, perhaps using neural networks
- a more interactive application
- using facial expression recognition to interface with other environments (e.g., calling a command by smiling)

## A Genetic Algorithm Approach to Ramsey Theory

### Background and Introduction

Ramsey theory is the study of a class of combinatorial problems proposed by Frank Ramsey in 1930. The version of his problem as applied to graphs asks: what is the smallest complete graph that is guaranteed, under every red/blue edge coloring, to contain a monochromatic clique of a given size? This is written $R(x,y)$, where x and y are the sizes of the cliques to find in red and blue, respectively. $R(3,3)$ asks for the smallest graph such that we are guaranteed either a red 3-clique (3 vertices connected to each other at every point, i.e., a complete graph on 3 vertices) or a blue 3-clique. This has been shown to be 6. We show this by providing an example coloring of the complete graph on 5 vertices ($K_5$) that contains no red or blue 3-clique. Then, we show that every coloring of $K_6$ must necessarily contain a red triangle or a blue triangle. This can be verified computationally or combinatorially. (See the nice example here: Theorem on friends and strangers.) While the problem is relatively easy to solve for a small value like $R(3,3)$, the complexity of the problem increases greatly when considering larger clique sizes.
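The $R(3,3)=6$ case is small enough to verify by brute force. A quick Python sketch (separate from the project's Java code):

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """coloring maps each edge (i, j), i < j, to 0 (blue) or 1 (red)."""
    return any(coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
               for a, b, c in combinations(range(n), 3))

def all_colorings_have_triangle(n):
    """Check every 2-coloring of the edges of K_n for a mono triangle."""
    edges = list(combinations(range(n), 2))
    return all(has_mono_triangle(n, dict(zip(edges, colors)))
               for colors in product((0, 1), repeat=len(edges)))

print(all_colorings_have_triangle(5))  # False: K5 has a triangle-free coloring
print(all_colorings_have_triangle(6))  # True: every coloring of K6 has one
```

$K_6$ has only $2^{15} = 32768$ colorings, so this finishes instantly; that exhaustive approach is exactly what stops working at $K_{43}$.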
For instance, $R(4,4)$ is 18, and we only know that the value of $R(5,5)$ lies somewhere between 43 and 49 inclusive. Because the number of cases one must check increases exponentially with each increase in clique size, it becomes impractical for traditional computing very quickly. Therefore, we will utilize a genetic algorithm in an attempt to verify, and perhaps improve, the lower bound of $R(5,5)$. To do this, the program must:

- use the genetic algorithm to search for graphs that have low numbers of cliques
- perform exhaustive testing on these graphs; if we can find even one coloring where no clique exists, we will have shown that $R(5,5) > 43$

### Implementation

My algorithm is implemented in Java. Graphs are represented using an adjacency matrix, and a coloring of the graph (ColorMatrix) is a 2D boolean array. In this scheme, coloring[x][y]==0 represents a blue edge from x to y and coloring[x][y]==1 represents a red edge. The colorings are symmetric; that is, coloring[x][y] == coloring[y][x]. These colorings form the base of my Chromosome class, which the genetic algorithm uses to represent a "piece" of information that can be mated with other chromosomes, mutated, and scored based on the number of cliques found. A Population stores multiple Chromosomes.

My basic algorithm works as follows:

1. Create a population of random colorings of $K_{43}$.
2. Check a set of edges for 5-cliques (how these sets are determined is discussed later). Assign a fitness score based on the number of cliques found.
3. Crossover/mutate the chromosomes.
4. Repeat until every member of the population has fitness 0 (no cliques found in that set of edges).
5. Take these "possible zeroes" and begin to test more edges to find cliques:
   1. If we find any new cliques, this coloring is no longer any good to us (we are ultimately looking for a coloring with no cliques whatsoever).
   2. Pare down the population to a set of graphs that have no cliques in set 1 and set 2.
   3. Eventually this leads to an exhaustive search, but by that point there will be only a few "likely" candidates that passed every test set (this saves computation time compared to checking every clique right off the bat).

### Calculating Fitness

The fitness function for our genetic algorithm checks a set of edges in the graph to see if there are any cliques. We work from "test sets" instead of checking every clique every time. The reasoning is that if a clique is found on the first test, we need not go any further, and the multiple-stage fitness testing allows us to prune out "bad" data and keep more likely solutions to our problem. The first test set is computed as follows (each 5-tuple lists the five vertices whose connecting edges we check for a monochromatic clique):

$(0,1,2,3,4), (0,2,3,4,5), (0,3,4,5,6),$ $\dots, (0,39,40,41,42), (1,2,3,4,5), (1,3,4,5,6),$ $\dots, (38,39,40,41,42)$

We use a simple method of iteratively counting up the first two items in the tuple, then using increasing consecutive numbers for the rest of the clique. This way, many cliques are tested, but the test set leaves out cliques that are not formed from consecutive vertices (e.g., $(0,2,4,6,8)$ would not be tested in this list). Once the list is generated, we simply test the edges between each pair of vertices in the set. If the colors are all the same, then we have a clique and add to the fitness. The second test set is applied after the population has converged to zeroes for the first time, and this time counts up by multiples of 2 (so $(0,2,4,6,8)$ would be a viable choice in this set).

An alternative fitness function was considered that involves inserting vertices into a binary search tree based on their coloring values. To do this, first one creates a random permutation of the vertices (values 0-42).
Then, using the colorings (0 and 1 instead of positive and negative), we insert the nodes either left or right in a binary search tree, with the first node in the list as the root. By looking at the output of this tree, we get a visual representation of color patterns. The idea is that a long "path" in one of these trees means the same color has been seen many times, suggesting a clique exists. In the example below, the set $(0,1,2,3,4)$ forms a long "path", meaning that each edge is the same color. Unfortunately, this implementation is not currently functional in my program.

### Parent Selection and Crossover

The backbone of any genetic algorithm is the crossover. This models the mating process in biology and is what makes the Chromosomes (solutions) improve over many generations, keeping successful traits and discarding others. Parents in my model are selected randomly from the population, but there is a small chance (about 5%) that the best member of the population is chosen as a parent instead. This provides a slight bias toward better traits while still allowing plenty of randomness and new data.

The crossover itself is a simple one. Given two Chromosomes (colorings of the graph), for each x, y in the ColorMatrix we use a random value to decide whether the "mom" or the "pop" contributes their gene (coloring[x][y]); there is a 50-50 chance of inserting mom(x,y) into the baby as opposed to pop(x,y). The "baby" Chromosome is thus a mixture of data from both parents, and if its fitness (number of cliques) is lower than the current worst value in the population, we replace that worst member with the new baby. This allows the population to improve over time. Our mutation method simply flips a bit in the coloring based on a random value. The mutation rate was 0.08, or 8%.
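The project itself is written in Java, but the selection and crossover scheme just described fits in a few lines of Python. This is a paraphrase of the approach, not the project's actual code; for brevity, mutation is applied per edge during the crossover here:

```python
import random

N = 43               # vertices of K_43
MUTATION_RATE = 0.08

def crossover(mom, pop):
    """50-50 per-edge mix of two parent colorings (NxN 0/1 matrices)."""
    baby = [[0] * N for _ in range(N)]
    for x in range(N):
        for y in range(x + 1, N):
            color = mom[x][y] if random.random() < 0.5 else pop[x][y]
            if random.random() < MUTATION_RATE:  # mutation: flip the bit
                color = 1 - color
            baby[x][y] = baby[y][x] = color      # keep the coloring symmetric
    return baby

def pick_parent(population, fitness):
    """Mostly random, but ~5% of the time take the best member."""
    if random.random() < 0.05:
        return min(population, key=fitness)
    return random.choice(population)
```

Each generation would then pick two parents with `pick_parent`, cross them, and swap the baby in for the worst member whenever its clique count is lower.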
### Problems and Solutions

One of the things I quickly noticed when building this project was that the fitness function, although it does not brute-force every possible clique check, can still take a prohibitively long time at higher population sizes. The getFitness() function is called a bit too liberally, and caching the fitness value to retrieve later would improve efficiency in this area.

Another downfall is that since we only test a limited set of cliques each time, it is possible to "learn the data" and evolve a solution that contains no cliques in the tested area but may have cliques elsewhere. While this seems like a problem at first glance, we can use it to our advantage: having the algorithm "learn" each test data set is exactly what allows us to prune the search space and find possible colorings that do not contain cliques. If we decided this was undesirable behavior, however, we could change the fitness function to include x random permutations of edges each time. The problem with that method is that we could never be sure a clique did not exist in a certain area over multiple rounds, as the sets tested would be randomized each time. Finally, it is possible to go with the old-fashioned approach and simply check every clique after all.

### Future Work

While this program currently provides a proof of concept and an implementation up to the first round of test data, there is much room for expanding this project. First of all, multiple rounds need to be completed within the algorithm, so that we can generate colorings that are more likely to have no cliques. Other useful additions would be outputting data to a text file for further reading and analysis, and general performance optimizations throughout my code. In addition, I would really like to get a working implementation of the alternate fitness function.
I feel like this is an interesting direction to take the project, and it would be helpful to have a more visual representation of where cliques are occurring. While the population initialization currently creates random colorings, it would be interesting to see what happens when we change the ratio of red edges to blue edges in the chromosomes, starting at initialization. What if each coloring were required to be a certain amount red and a certain amount blue? This is definitely something to look into as well.

### Conclusion

Currently, we are able to generate lists of colorings that have no cliques within the first test set of edges. As we add more edge sets to test, we will create more constrained lists. Further testing will reveal whether any of these "possible" solutions truly contain no such cliques. For that, an exhaustive search is the only way. Although there are on the order of 900,000 five-vertex subsets (${43 \choose 5} = 962598$) to check, this is time-consuming but doable, especially with a relatively small set of candidate graphs. Finally, the code is available at https://github.com/nelsonam/ramsey if anyone is interested. Share your comments in the box below, and if you have any questions feel free to ask!

## Animation Units for Facial Expression Tracking – Thesis Update #3

See the previous posts in this series.

For the past few months, I've been hard at work on my thesis concerning facial expression analysis with the Microsoft Kinect. Unfortunately, my blog has suffered quite a bit in the wake of everything I've had going on, but I'm trying to post a few new things as the school semester winds down. My ongoing project on facial expression analysis with Kinect is making progress, and I am preparing a final product to present. As I described in my last thesis update, I was able to test a sample application which overlays a 3D mesh on the user's face and tracks the different points on the face.
In particular, a subset of these points are called animation units, or AUs, and they are essential to the expression recognition algorithm. The points are defined based on the Candide3 model, and the different values are delineated on Microsoft's API page for the Kinect Face Tracking SDK. There are six values that describe the movements and placement of the basic facial features such as the eyes, eyebrows, and mouth. The table of AU values is reproduced below (from Microsoft's Kinect API page):

| AU | AU Value Interpretation |
| --- | --- |
| AU0 – Upper Lip Raiser | 0=neutral, covering teeth; 1=showing teeth fully; -1=maximal possible pushed-down lip |
| AU1 – Jaw Lowerer | 0=closed; 1=fully open; -1=closed, same as 0 |
| AU2 – Lip Stretcher | 0=neutral; 1=fully stretched (joker's smile); -0.5=rounded (pout); -1=fully rounded (kissing mouth) |
| AU3 – Brow Lowerer | 0=neutral; -1=raised almost all the way; 1=fully lowered (to the limit of the eyes) |
| AU4 – Lip Corner Depressor | 0=neutral; -1=very happy smile; 1=very sad frown |
| AU5 – Outer Brow Raiser | 0=neutral; -1=fully lowered as in a very sad face; 1=raised as in an expression of deep surprise |

Considering these values, I was able to determine a set of bounds for what each expression should look like in my application. However, I quickly learned that the extremal values of -1 and 1 are just that: extremal. Even sitting in front of the camera making the most exaggerated faces I could, it was very difficult to get above 0.5 or 0.6 in some areas. In addition, the eyebrow data was almost always inaccurate in my testing, because I wear glasses and this confused the camera. The Kinect saw the top of my glasses as my eyebrows and thus showed very little movement at all. When I took my glasses off, placement and tracking returned to normal, but then it was impossible for me to see the data being processed.
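One hypothetical way to compensate for that compressed range would be a per-user calibration step that linearly rescales each AU's observed extremes onto the nominal [-1, 1] interval. This is only an idea sketched here, not something the application implements:

```python
def make_rescaler(observed_min, observed_max):
    """Map a user's observed AU range onto the nominal [-1, 1] interval.
    E.g., if a user's jaw AU never exceeds 0.6 even with an exaggerated
    open mouth, rescale so 0.6 maps to 1.0. The sample range values
    below are hypothetical."""
    def rescale(value):
        # linear map observed_min..observed_max -> -1..1, clamped
        t = (value - observed_min) / (observed_max - observed_min)
        return max(-1.0, min(1.0, 2 * t - 1))
    return rescale

jaw = make_rescaler(-0.4, 0.6)
print(jaw(0.6))   # → 1.0 (this user's widest open mouth)
print(jaw(0.1))   # → 0.0 (midpoint of the observed range)
```

With a short calibration session per user, the same expression bounds could then be reused across people whose raw AU ranges differ.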
Another unforeseen problem I ran into is that the camera is perhaps a bit too sensitive. Since it tracks many frames every second, even if you sit as still as possible there will still be some variation in the tracked points, and if the camera loses track of the face entirely, the points can really go haywire. Therefore, keeping the points within specific bounds is more difficult than expected: if the camera loses track of the face, the application can say you went from surprised to sad to angry in less than a second. It will probably not get done in this version of the project, but in the future it would be useful to store a buffer of points, perhaps one every second or so, to stabilize the constantly changing data and provide more time for processing before changing frames completely.

Overall, I’ve certainly learned a lot throughout the course of this project, and I would like to continue working on it over the summer. I will post a few more thoughts on the process and results as I wrap things up, and I’ll have a copy of my thesis online at some point too. In other news, I’m going to be starting the Master’s program in Computer Science at Appalachian State University in the fall, and I’m pretty excited about it. I plan to do research on algorithms and graph theory. Stay tuned…

UPDATE: See the next post in this series here: Kinect Face Tracking — Results : Thesis Update #4

## Adding Primes to Produce a Prime

Inspiration for this problem comes from my good friend Johnny Aceti. Consider numbers $x,y \in \mathcal{P}$, where $\mathcal{P}$ is the set of all prime numbers $\in \mathbb{N}$.

Conjecture 1. There exist consecutive $x,y \in \mathcal{P}$ such that $x+y \in \mathcal{P}$.

Is this true? Well, let’s see. Every prime except 2 is odd. Let’s first look at adding together two odd primes. Odd numbers are equivalent to $1 \mod 2$, and if one adds together two numbers that are $1 \mod 2$, the sum is $0 \mod 2$.
This means the sum is even, and since the sum of two odd primes is greater than 2, it cannot be prime. (More generally, adding any even number of odd primes produces an even sum, never a prime.) This takes out just about every possibility for this conjecture to be true, but we haven’t considered 2 yet, since so far we were only working with odd primes. Let’s try the case where we use 2 as $x$. We have $2 \equiv 0 \mod 2$, and every odd prime is $1 \mod 2$; therefore, if we add them, we get a sum that is $1 \mod 2$. So far so good: this means our sum is odd. But is it prime? If $x, y$ have to be consecutive, the only instance involving 2 is $2+3=5$. 5 is prime, and that becomes the only pair $x,y$ satisfying Conjecture 1. But what if $x,y$ didn’t have to be consecutive? Does the use of 2 still work? Well, the short answer is sometimes.

$\boldsymbol{2+3=5}$
$\boldsymbol{2+5=7}$
$2+7=9$

Uh oh. 9 isn’t prime. Therefore, the use of 2 as $x$ or $y$ only works sometimes. This problem isn’t very interesting with only two variables, though. Let’s add more…

***

Consider consecutive numbers $x,y,z \in \mathcal{P}$, where $\mathcal{P}$ is the set of all prime numbers $\in \mathbb{N}$.

$2+3+5=10$
$3+5+7=15$
$\boldsymbol{5+7+11=23}$
$\boldsymbol{7+11+13=31}$
$\boldsymbol{11+13+17=41}$
$13+17+19=49$
$\boldsymbol{17+19+23=59}$
$\vdots$

If we continue adding numbers in this way, how many times will the resulting sum be prime? Is there any way to predict which triples of numbers produce primes and which do not?
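The enumeration above is easy to automate. Here is a quick sketch (not from the original post) that lists sums of three consecutive primes and keeps only the prime sums:

```python
# Enumerate sums of three consecutive primes and keep the ones whose
# sum is itself prime, reproducing the bolded entries above.

def is_prime(n):
    """Trial division; plenty fast for numbers this small."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# All primes up to 200, in order.
primes = [n for n in range(2, 201) if is_prime(n)]

# (x, y, z, x+y+z) for consecutive prime triples with a prime sum.
prime_sum_triples = [(x, y, z, x + y + z)
                     for x, y, z in zip(primes, primes[1:], primes[2:])
                     if is_prime(x + y + z)]

for x, y, z, s in prime_sum_triples:
    print(f"{x} + {y} + {z} = {s}")
```

The first lines printed are $5+7+11=23$ and $7+11+13=31$, matching the table; $2+3+5=10$ and $13+17+19=49$ are correctly skipped.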
Here are some more examples (all of which have prime sums):

$19 + 23 + 29 = 71$
$23 + 29 + 31 = 83$
$29 + 31 + 37 = 97$
$31 + 37 + 41 = 109$
$41 + 43 + 47 = 131$
$53 + 59 + 61 = 173$
$61 + 67 + 71 = 199$
$67 + 71 + 73 = 211$
$71 + 73 + 79 = 223$
$79 + 83 + 89 = 251$
$83 + 89 + 97 = 269$
$101 + 103 + 107 = 311$
$109 + 113 + 127 = 349$
$139 + 149 + 151 = 439$
$149 + 151 + 157 = 457$
$157 + 163 + 167 = 487$
$163 + 167 + 173 = 503$
$197 + 199 + 211 = 607$

Can we derive a pattern or a formula such that $x + y + z \in \mathcal{P}$ always holds?

Published on February 20, 2013 at 19:20.

## The ABCs of Computer Science

So over the next few months, I’d like to write a series of educational posts on “The ABCs of Computer Science”. In each post, I will talk about a topic that begins with the letter of the alphabet I am on. This will serve not only as an educational resource for people wanting to learn new things or get an overview of some of the big topics in computer science, but also as a learning experiment for me. I plan to make an ABCs post at least every two weeks, and my regular posts about thesis updates and mathematical curiosities will continue. I have a tentative list of topics, but I’d like your input. What topics would you like to see me write about? What’s something you are interested to learn? What can I use for elusive letters such as Q, X, and Z? Leave your ideas in the comments below and I will do my best to incorporate them.

Published on February 20, 2013 at 12:00.

## Some Faces: Thesis Update #2

See the previous post in this series: Facial Expression Analysis With Microsoft Kinect : Thesis Update #1

I’ve been doing some more work on my emotion recognition project, and I have a few preliminary pictures to show. The current status of the project is that I can detect faces and see the wire mesh over them, but it’s not exactly as accurate as I would like.
For instance, I have noticed that wearing glasses seems to confuse the eye placement. My eyes in the wire mesh consistently hover above the top rim of my glasses; if I take them off, this effect is lessened. Another limitation I noticed is that if someone is wearing a hat, the face will often not be detected at all. I brought the equipment to my parents’ house last weekend and let them try it out. My dad always wears a baseball cap, and the Kinect could not recognize his face until he removed it.

Below you will see some facial expression “archetypes” that I have developed using my own face. You will notice that “sad” is not included in the list. Because of the inaccurate eye and eyebrow placement, I cannot get it to show the upturned “sad eyebrows” that I wanted. In addition, as much as I frown, it looks like the wire mesh is making a kissing face. There are definitely some things to work out, but check out the faces I’ve worked on so far. I plan to get multiple people to make faces at the Kinect and see if I can sort the faces into groups that the program can further “learn” with.

- “Normal” face – a baseline
- Angry face – note the eyebrows
- Happy face – smile!
- Surprised face – heightened eyebrows and open mouth

As you can see, the eyes are much better placed without glasses…

UPDATE: See the next posts in this series:

## Quantifying the Self

So I’ve been doing a bit of an experiment this year. Sure, everyone says they want to do this or do that: lose weight, eat better, exercise more, etc., but how do we hold ourselves to these goals? As you may be aware, the you of the future is always a bit more conscientious than the you of today: “I’ll eat ice cream today and go on the diet tomorrow”; “Just one more day of sleeping in and I’ll get up early tomorrow.” The list goes on. Now, being the geek that I am, I began to wonder if there were a more scientific way to go about all this.
Something I’ve found that is pretty easy to do and has had a big impact is simply tracking the things you want to do with your time and seeing how they stack up over time. I started when I found a website called Beeminder. You can start as many “goals” as you like, such as “go to the gym twice a week” or “floss every day”: those things we want to do but have a hard time actually doing. You go in and plot a data point each day, and you can see your progress towards the goal. It gives you a “yellow brick road” to follow, and if you do more than average one day, you earn “safe days” where you don’t have to work as hard. It’s really engaging to see my graphs grow.

Another main point of Beeminder is the concept of commitment contracts. As far as I know, this is optional, but I can see how it would definitely improve motivation. Have you ever given $20 to a friend and said, “I’m going to try to do <insert thing here>. If I succeed, give me my money back, but if I don’t, you can keep it”? Basically, with Beeminder you can “bet” that you will achieve your goal. You go along and plot your points, and if at any point you fall below the “yellow brick road” of success, you have to pay up. Stay on the road? No payment. Another feature to help you stick with it is that each time you fall “off the road”, the penalty increases. The idea is that at some point you think, “Wow, I really don’t want to lose x amount of money; I’d better go to the gym/eat healthier/read more.”

Now, that may seem like a pretty negative motivational technique, but think about it this way: no one is forcing you to do anything. These are things that you claim you want to do. The phenomenon that causes us to put off things we want to do is called akrasia: acting against your own better judgment. It happens when you go to the store intending to buy vegetables and then you see your favorite ice cream on sale.
It happens to all of us, and one of the best ways to stop it is to consciously track what you do and hold yourself accountable for it.

As an example, here’s one of my Beeminder graphs for reading more often. I’ve been saying for years that I’d like to read more, but I always seem to find other things to do instead. By tracking my reading time each day, I can see my progress over time, and it’s really helped me stick with it. I started out with a goal of reading 15 minutes each day, but soon bumped it up to 20, and now I’m at 25. I’ve been reading nearly every day and it feels great. I have finished The Alchemist and I’m almost done with The Hobbit, which is more than I can usually say I’ve read two months into the year!

But after using Beeminder for a while, I started to think: what else can I track? I started using a pedometer to see how many steps I walk at my university each day, and boy, was I surprised! I usually walk 3 miles or more in a day just walking around to classes, to eat, to meetings, and to work. The steps add up quickly, and it’s really neat to see my walking trends as well. I haven’t been tracking this one for as long, but here’s a graph I created using Google Spreadsheets (guess which data points are the weekends… heh). Of course, the pedometer is on my phone, so it only tracks wherever I carry it around, which is not usually within my apartment.
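To put the step counts in perspective, here is a tiny sketch. The step data and the steps-per-mile constant are illustrative assumptions, not my actual pedometer readings:

```python
# Back-of-the-envelope sketch: convert daily pedometer counts to miles.
# The 2,000-steps-per-mile figure is a common rule of thumb, not a
# measured value; it varies with stride length.

STEPS_PER_MILE = 2000

def miles(steps):
    return steps / STEPS_PER_MILE

# A hypothetical week of step counts (Mon..Sun); the low weekend
# numbers are what make the weekends easy to spot on the graph.
week = {'Mon': 6500, 'Tue': 7200, 'Wed': 6800, 'Thu': 7000,
        'Fri': 6100, 'Sat': 2300, 'Sun': 1900}
for day, steps in week.items():
    print(f"{day}: {steps} steps = {miles(steps):.1f} miles")
```

With this rule of thumb, a 6,000-step campus day works out to about 3 miles, which matches what I was seeing.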

If you’re more interested in tracking your mental fluctuations rather than your physical activities, I uncovered a site called Quantified Mind. It has a series of experiments where you can track your reaction time, memory, focus, and other basic mental skills. You simply log in and play a few simple games, and it gives you scores. There is a wide variety of activities you can do, and I find it pretty fun. I have just started playing around with this site, but I imagine that if you kept with it and gathered enough data, you could determine trends of when your brain is at its best and use that to your advantage. They also have experiments that ask questions like “Does coffee improve cognitive performance?” (tested by doing the games after drinking coffee one day and no coffee the next). Another experiment tests the age-old motto of “Never skip breakfast” and asks users to test themselves on days when they have eaten breakfast and days when they have not. They even have one to test the effect that sex has on mental functioning! Finally, if you’re so inclined, you can make your own experiments to test out whatever you want.

But why stop there? Some ideas I have for tracking myself in the future include plotting my sleep and wake-up times and the time I spend working on my thesis (if you’re curious about that, see here). For me, incorporating self-tracking into my life has really opened my eyes to a lot of things I do (and don’t do!), and if you’re tired of not meeting your goals, or you’re just a huge data nerd like I am, I highly recommend you give this a try. If you have any questions or ideas, please leave them in the comments!

## Facial Expression Analysis with Kinect: Thesis Update #1

UPDATE: Some things in this project have changed since writing this post; please see the series of posts I wrote on this below:

It’s my final semester of undergraduate work in computer science, and as such I will be completing an undergraduate honors thesis on a topic of my choosing. I plan to post updates here on my progress and the things I am working on, both for my own benefit and in the hopes that someone else may find my work interesting. The project I am working on is an extension of a project started by a former graduate student at my university. Using a Kinect sensor and a Bioloid humanoid robot, he was able to have the Kinect track the movements of a person standing in front of the robot, send that information to the robot, and have the robot imitate those movements. If you are interested in what this looks like, you can find a video here.

What I plan to do with my project is use the Kinect Face Tracking API to analyze facial expressions using the Kinect sensor. The face tracking API tracks a number of points on the face as shown here: (image from msdn)

We can track the angles and positions of each of these points on the face and use them to determine whether the subject has a “happy” face or a “sad” face, among other expressions. Afterwards, I plan to send this data to the Bioloid robot, which will then be able to react in a manner appropriate to the expression he sees. For example, if the robot detects a happy face, he may make a clapping motion, while if he sees a sad face, he will react in a different manner. Applications of emotion recognition are springing up across the board. With this project, I hope both to learn some new things and to have a fun, interactive project at the end.
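The planned expression-to-reaction step can be sketched as a simple lookup. Everything here is hypothetical: the action names are placeholders, not real Bioloid commands, and only the clapping reaction comes from the plan described above:

```python
# Hypothetical sketch of the planned expression -> reaction mapping.
# The action names are placeholders, not actual Bioloid commands.

REACTIONS = {
    'happy': 'clap',          # the post suggests clapping for a happy face
    'sad': 'hang_head',       # illustrative reaction
    'surprised': 'raise_arms',
    'neutral': 'idle',
}

def react(expression):
    # Fall back to idling for anything the recognizer can't label.
    return REACTIONS.get(expression, 'idle')

print(react('happy'))  # clap
```

Keeping the mapping in a table like this would make it easy to add new expressions and reactions later without touching the recognition code.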