Adding Primes to Produce a Prime

Inspiration for this problem comes from my good friend Johnny Aceti.

Consider numbers x,y \in \mathcal{P} where \mathcal{P} is the set of all prime numbers \in \mathbb{N}.

Conjecture 1. There exist consecutive x,y \in \mathcal{P} such that x+y \in \mathcal{P}.

Is this true? Well, let’s see.

Every prime except 2 is odd. Let’s first look at adding together two odd primes. Odd numbers are congruent to 1 \mod 2, and adding two numbers that are each 1 \mod 2 gives a sum that is 0 \mod 2. The sum is therefore even, and since two odd primes add up to more than 2 (the only even prime), the sum cannot be prime. The same argument shows that adding any even number of odd primes can never produce a prime. This rules out just about every possibility for the conjecture to be true, but we haven’t considered 2 yet, since so far we have only been working with odd primes.
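Written out, the parity argument is just this: any two odd primes can be expressed as x = 2a + 1 and y = 2b + 1 for integers a, b \geq 1, so

x + y = (2a + 1) + (2b + 1) = 2(a + b + 1) \equiv 0 \mod 2,

which is even and at least 6, and therefore not prime.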

Let’s try the case where we use 2 as x.

2 \equiv 0 \mod 2, while every odd prime is 1 \mod 2. Therefore, if we add 2 to an odd prime, we get a sum that is 1 \mod 2. So far so good: our sum is odd. But is it prime? If x, y have to be consecutive, the only pair involving 2 is 2 + 3 = 5. 5 is prime, and (2, 3) becomes the only pair x, y that satisfies Conjecture 1.

But what if x,y didn’t have to be consecutive? Does the use of 2 still work?

Well, the short answer is sometimes.

\boldsymbol{2+3=5}
\boldsymbol{2+5=7}
2+7=9

Uh oh. 9 isn’t prime. Therefore, the use of 2 as x or y only works sometimes. This problem isn’t very interesting with only two variables though. Let’s add more…

***************************

Consider consecutive numbers x,y,z \in \mathcal{P} where \mathcal{P} is the set of all prime numbers \in \mathbb{N}.

2+3+5=10
3+5+7=15
\boldsymbol{5+7+11=23}
\boldsymbol{7+11+13=31}
\boldsymbol{11+13+17=41}
13+17+19=49
\boldsymbol{17+19+23=59}
\vdots

If we continue adding numbers in this way, how many times will the resulting sum be a prime? Is there any way to predict which triples of numbers produce a prime sum and which do not?

Here are some more examples (all of which have prime sums):

19 + 23 + 29 = 71
23 + 29 + 31 = 83
29 + 31 + 37 = 97
31 + 37 + 41 = 109
41 + 43 + 47 = 131
53 + 59 + 61 = 173
61 + 67 + 71 = 199
67 + 71 + 73 = 211
71 + 73 + 79 = 223
79 + 83 + 89 = 251
83 + 89 + 97 = 269
101 + 103 + 107 = 311
109 + 113 + 127 = 349
139 + 149 + 151 = 439
149 + 151 + 157 = 457
157 + 163 + 167 = 487
163 + 167 + 173 = 503
197 + 199 + 211 = 607

Can we derive a pattern or a formula that tells us when x + y + z \in \mathcal{P} holds?
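One way to start gathering data is simply to brute-force it. Below is a minimal sketch in Python (the helper functions here are my own, using plain trial division) that walks through triples of consecutive primes and counts how often the sum is itself prime:

def is_prime(n):
    # Trial division; fine for small searches like this one.
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def primes():
    # Yield the primes 2, 3, 5, 7, ... in order.
    n = 2
    while True:
        if is_prime(n):
            yield n
        n += 1

gen = primes()
x, y = next(gen), next(gen)
hits = total = 0
for _ in range(500):              # first 500 triples of consecutive primes
    z = next(gen)
    total += 1
    if is_prime(x + y + z):       # e.g. 5 + 7 + 11 = 23
        hits += 1
    x, y = y, z

print(hits, "of", total, "triples have a prime sum")

Counting like this will not explain why certain triples work, but it does show how often prime sums keep turning up as the triples grow.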

The ABCs of Computer Science

Over the next few months, I’d like to write a series of educational posts on “The ABCs of Computer Science”. In each post, I will talk about a topic that begins with the letter of the alphabet I am on. This will serve not only as an educational resource for people who want to learn new things or get an overview of some of the big topics in computer science, but also as a learning experiment for me.

I plan to make an ABCs post at least every two weeks, and my regular posts about thesis updates and mathematical curiosities will continue. I have a tentative list of topics but I’d like your input. What topics would you like to see me write about? What’s something you are interested to learn? What can I use for elusive letters such as Q, X, and Z? Leave your ideas in the comments below and I will do my best to incorporate them.

Some Face Tracking With Kinect: Thesis Update #2

See the previous post in this series: Facial Expression Analysis With Microsoft Kinect : Thesis Update #1

I’ve been doing some more work on my emotion recognition project and I have a few preliminary pictures to show. The current status of the project is that I can detect faces and see the wire mesh over them, but it’s not exactly as accurate as I would like. For instance, I have noticed that wearing glasses seems to confuse the eye placement. My eyes in the wire mesh consistently hover above the top rim of my glasses; if I take them off, this effect is lessened. Another limitation I noticed is that if someone is wearing a hat, the Kinect will often not detect the face at all. I brought the equipment to my parents’ house last weekend and let them try it out. My dad always wears a baseball cap, and the Kinect could not recognize his face until he removed it.

Below you will see some facial expression “archetypes” that I have developed using my own face. You will notice that “sad” is not included in the list. Because of the inaccurate eye and eyebrow placement, I cannot get it to show the upturned “sad eyebrows” that I wanted. In addition, as much as I frown, it looks like the wire mesh is making a kissing face.

Definitely some things to work out, but check out the faces I’ve worked on so far. I plan to get multiple people to make faces at the Kinect and see if I can get them into groups that the program can further “learn” with.

“Normal face” – a baseline

Angry face – note the eyebrows

Happy Face – smile!

Surprised face – heightened eyebrows and open mouth

As you can see, eyes are much better placed without glasses…

UPDATE: See the next posts in this series:

  1. Animation Units for Facial Expression Tracking : Thesis Update #3
  2. Kinect Face Tracking — Results : Thesis Update #4

Quantifying the Self

So I’ve been doing a bit of an experiment this year. Sure, everyone says they want to do this or do that, lose weight, eat better, exercise more, etc., but how do we hold ourselves to these goals? As you may be aware, the you of the future is always a bit more conscientious than the you of today: “I’ll eat ice cream today and go on the diet tomorrow”, “Just one more day of sleeping in and I’ll get up early tomorrow.” The list goes on. Now, being the geek that I am, I began to wonder if there were a more scientific way to go about all this.

Something I’ve found that is pretty easy to do and has had a big impact is simply tracking the things you want to do with your time and seeing how they stack up over time. I started when I found a website called Beeminder. You can start as many “goals” as you like, such as “go to the gym twice a week” or “floss every day”. You know, those things we want to do but have a hard time actually doing. You go in and plot a data point each day and you can see your progress towards the goal. It has a “yellow brick road” for you to follow, and if you do more than average one day you get “safe days” where you don’t have to work as hard. It’s really engaging to me to see my graphs grow.

Another main point of Beeminder is the concept of commitment contracts. As far as I know, this is optional, but I can see how it would definitely improve motivation. Have you ever given $20 to a friend and said “I’m going to try to do <insert thing here>. If I succeed give me my money back, but if I don’t you can keep it.” Basically what you can do with Beeminder is “bet” that you will achieve your goal. You go along and plot your points and if at any point you fall below the “yellow brick road” of success, then you have to pay up. Stay on the road? No payment. Another feature to help you stick to it is that for each time you get “off the road”, the penalty increases. The idea is that at some point you think, “wow, I really don’t want to lose x amount of money, I better go to the gym/eat healthier/read more.” Now that may seem like a pretty negative motivational technique, but think about it this way. No one is forcing you to do anything. These are things that you claim you want to do. The phenomenon that causes us to put off things we want to do is called akrasia. It happens when you go to the store intending to buy vegetables and then you see your favorite ice cream on sale. It happens to all of us, and one of the best ways to stop it is to consciously track what you do and hold yourself accountable for it.

As an example, here’s one of my Beeminder graphs for reading more often. I’ve been saying for years I’d like to read more, but I always seem to find other things to do instead. By tracking my reading time each day, I can see my progress over time and it’s really helped me to stick with it. I started out with a goal of reading 15 minutes each day, but soon bumped it up to 20, and now I’m at 25. I’ve been reading nearly every day and it feels great. I have finished The Alchemist and I’m almost done with The Hobbit, which is more than I can usually say I’ve read 2 months into the year!

Beeminder graph for my “read more” goal

But I started to think, after using Beeminder for a while, what other things can I track? I started using a pedometer to see how many steps I walk at my university daily, and boy was I surprised! I usually walk 3 miles or more in a day just walking around to classes, to eat, to meetings, and to work. The steps add up quickly, and it’s really neat to see what my trends are for walking as well. I haven’t been tracking this one for as long, but here’s a graph I created using Google Spreadsheets (guess which data points are the weekends…heh. Of course, the pedometer is on my phone, so it only tracks wherever I carry it around, which is not usually within my apartment).

Walking Graph

If you’re more interested in tracking your mental fluctuations rather than your physical activities, I uncovered the site Quantified Mind. It has a series of experiments where you can track your reaction time, memory, focus, and other basic mental skills. You simply log in and play a few simple games and it gives you scores. There is a wide variety of different activities you can do and I find it pretty fun. I have just started playing around with this site, but I imagine if you kept with it and gathered enough data you could determine trends of when your brain is at its best and use that to your advantage. They also have experiments that ask questions like “Does coffee improve cognitive performance?” (tested by doing the games after drinking coffee one day and no coffee the next). Another experiment tests the age-old motto of “Never skip breakfast” and asks users to test themselves on days when they have eaten breakfast and days when they have not. They even have one to test the effect that sex has on mental functioning! Finally, if you’re so inclined, you can make your own experiments to test out whatever you want.

But why stop there? Some ideas I have for tracking myself in the future include plotting my sleep and wake-up times and the time I spend working on my thesis (if you’re curious about that, see here). I know that for me, implementing self-tracking into my life has really opened my eyes to a lot of things I do (and don’t do!), and if you’re tired of not meeting your goals or you’re just a huge data nerd like I am, I highly recommend you give this a try. If you have any questions or ideas, please leave them in the comments!

Facial Expression Analysis with Kinect: Thesis Update #1

UPDATE: Some things in this project have changed since writing this post; please see the series of posts I wrote on this below:

  1. Some Faces : Thesis Update #2
  2. Animation Units for Facial Expression Tracking : Thesis Update #3
  3. Kinect Face Tracking — Results : Thesis Update #4

It’s my final semester of undergraduate work in computer science, and as such I will be completing an undergraduate honors thesis on a topic of my choosing. I plan to post updates here on my progress and the things I am working on, both for my own benefit and tracking, and in the hope that someone else may find my work interesting. The project I am working on is an extension of a project started by a former graduate student at my university. Using a Kinect sensor and a Bioloid humanoid robot, he was able to get the Kinect to track the movements of a person standing in front of the robot, send that information to the robot, and have it imitate those movements. If you are interested in what this looks like, you can find a video here.

What I plan to do with my project is use the Kinect Face Tracking API to analyze facial expressions using the Kinect sensor. The face tracking API tracks a number of points on the face, as shown here (image from MSDN).

We can track the angles and positions of each of these points on the face and use them to determine whether the subject has a “happy” face or a “sad” face, among other expressions. What I plan to do afterwards is to send this data to the Bioloid robot. The robot will then be able to react in a manner appropriate to the expression he sees. For example, if the robot detects a happy face, he may make a clapping motion, while if he sees a sad face, he will react in a different manner. Applications of emotion recognition are springing up across the board. With this project I hope to both learn some new things and have a fun, interactive project at the end.
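To give a rough idea of the kind of rule-based check I have in mind, here is a minimal sketch in Python. To be clear, this is not the Kinect Face Tracking API: the landmark names, coordinates, and thresholds below are hypothetical placeholders, and the real project would read the tracked points from the Kinect SDK and calibrate per subject.

# Hypothetical sketch: classify an expression from a handful of tracked
# face points. All landmark names and pixel thresholds are placeholders,
# not values from the Kinect Face Tracking API.

def classify_expression(points):
    """points: dict mapping landmark names to (x, y) image coordinates,
    with y increasing downward as in typical image coordinates."""
    mouth_open = points["lower_lip"][1] - points["upper_lip"][1]
    mouth_width = points["mouth_right"][0] - points["mouth_left"][0]
    brow_raise = points["eye_top"][1] - points["brow_center"][1]

    if mouth_open > 20 and brow_raise > 15:
        return "surprised"   # open mouth plus raised eyebrows
    if mouth_width > 60:
        return "happy"       # widened mouth suggests a smile
    if brow_raise < 5:
        return "angry"       # eyebrows pulled down toward the eyes
    return "normal"

# Made-up example frame, just to show the shape of the input.
example = {
    "upper_lip": (160, 200), "lower_lip": (160, 230),
    "mouth_left": (130, 215), "mouth_right": (195, 215),
    "brow_center": (160, 120), "eye_top": (160, 140),
}
print(classify_expression(example))   # prints "surprised" for these numbers

In practice I expect to use distances normalized by overall face size rather than raw pixel values, and eventually to learn these groupings from many people’s faces instead of hand-tuning thresholds.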