
The Secrets of Nim (Dice Nim 1)

Published on Thursday, June 30, 2011

(NOTE: Check out the other posts in The Secrets of Nim series.)

Did you think I had run out of versions of Nim yet? Today we look at Nim played with dice.

In versions of Nim with dice, the dice aren't rolled. Instead, the number on a given player's turn is consciously chosen by that player. Also, a running total is kept, starting with the first number chosen by the first player, to which the numbers chosen on each following turn are added.

For example, if the first player starts with a 3, then the running total is 3. If the second player were to then choose a 6, the running total would now be 9, because 3 (first turn) + 6 (second turn) = 9. If the first player now chooses a 1, the running total would now be 10, and so on.

Let's start with the simplest of the dice Nim games, which we'll simply call 50. The object, naturally, is to be the first person to make the running total reach 50. If you think of this as a more traditional Nim game with objects like coins or matches, it's like you're allowed to put anywhere from 1 to 6 objects on the table on each turn, and the winner is the person who puts the 50th object down on the table (only this way, the players don't have to keep counting the objects on the table).

So, this would be a standard Nim game, instead of a Misère Nim game, as discussed in part 1 of this series. But what is the strategy here?

If you either try the game out, understand the principles in the part 1 post, or simply cheat and run right to the Single-Pile Nim tab of the Nim Strategy Calculator, it's not hard to see that you should play to reach one of the following key numbers as soon as possible, and then keep landing on them for the rest of the game: 1, 8, 15, 22, 29, 36, 43, and 50.

Why these numbers? First, they're all 7 apart, which means one player can't move from one key number to the next in a single move. Going from one key number to the next takes at least 2 moves, so the player who keeps landing on the key numbers can lock the other player out of them.

Note that all the key numbers are also 1 greater than a multiple of 7 (36 is 1 greater than 35, which is 7 times 5, etc.). Why is this? It's because the winning total, 50, is 1 greater than a multiple of 7. If the desired total were 51, all your key numbers would be 2 above multiples of 7 (2, 9, 16, 23, 30, 37, 44, and 51).

To see how this changes for a different game, check out this 100 version on Scam School. You can choose numbers from 1 to 10, so how far apart are the key numbers going to be? Correct: they'll be 11 apart. Since 100 is 1 above a multiple of 11, all the key numbers will be 1 above a multiple of 11: 1, 12, 23, 34, 45, 56, 67, 78, 89, and 100. If you like, you can think of this as the same as our first game, but with 10-sided dice (Didn't need to click to see what a 10-sided die looks like? HA! RPG Geek test!).
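
Since both games follow the same pattern, here's a quick Python sketch of the key numbers and the move to make (my own illustration, not from the original post; the function names are made up for the example):

```python
def key_numbers(goal, max_pick):
    """Key numbers: every total sharing the goal's remainder, modulo (max_pick + 1)."""
    step = max_pick + 1                       # 7 for a six-sided die, 11 for a d10
    return [n for n in range(1, goal + 1) if n % step == goal % step]

print(key_numbers(50, 6))    # [1, 8, 15, 22, 29, 36, 43, 50]
print(key_numbers(100, 10))  # [1, 12, 23, 34, 45, 56, 67, 78, 89, 100]

def best_move(total, goal=50, max_pick=6):
    """Amount to add this turn, or None if the opponent already holds a key number."""
    for pick in range(1, max_pick + 1):
        if (total + pick) in key_numbers(goal, max_pick):
            return pick
    return None

print(best_move(9))   # 6, taking the running total to the key number 15
```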

Getting back to 6-sided dice, let's throw a wrench into the works. This time, we'll play to a total of 31 (or to be the first person to force the other player to go OVER 31). However, in this version, you can't just choose any number. The first player rolls their die (or can choose the starting number), but after that, each player can only rotate the die a quarter turn (in other words, 90 degrees in any direction)! What's the best strategy now?

This presents 2 challenges. If there's a 1 on top, for example, you can't choose a 1 for yourself. Also, since opposite sides of a die always total 7, there's a 6 on the bottom, and you can't choose that, either. On any given turn, you can only choose from 4 of the 6 numbers, and the available numbers change with the number already on top!

To examine the strategy more closely, you'll need to be clear about the concept of digital roots. Here's a video that explains them clearly:



Surprisingly, our 31 dice game is based entirely on digital roots! The digital root of 31 is 4 (31 → 3 + 1 = 4), so your key numbers for this game are the numbers up to 31 with a digital root of 4: 4, 13, 22, and 31.
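
If you'd rather check digital roots with a few lines of code than by hand, here's a tiny sketch (my own, just for illustration):

```python
def digital_root(n):
    """Repeatedly sum the digits of a positive integer until one digit remains."""
    while n > 9:
        n = sum(int(digit) for digit in str(n))
    return n

print(digital_root(31))                                    # 4, since 3 + 1 = 4
print([n for n in range(1, 32) if digital_root(n) == 4])   # [4, 13, 22, 31]
```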

At this point, you're probably a little confused. All those numbers are 9 apart, and you can only choose from numbers in the range of 1 to 6. Worse yet, you can only choose from 4 numbers in that range, and which 4 you can choose keep changing!

To compensate for this, you're going to need to deal with a secondary set of key numbers. This is something new and unusual for Nim, but it's necessary here.

What are these secondary key numbers? They're 1, 5, and 9, as well as every number with one of those digital roots: 10, 14, 18, 19, 23, and so on. There's a caveat, however. You should only move the total to one of these if you can do so by adding a 3 or a 4.

Why is that? If you get to, say, 9 by turning to a 4, you've taken both the 4 AND the 3 out of play for the next person. Even better, when the other player makes their move, they have to put the 3 and the 4 back into play.

Even this won't cover every possible move, so there's one other secondary key you need to remember. As a last resort when no other moves are possible, get to a number with a digital root of 8, but only if you can do so with a 2 or a 5.

In short, here's the winning strategy for 31:

  • Ask yourself: “Can I rotate the die so as to get to a running total with a digital root of 4 (4, 13, 22, or 31)?” If so, do it! If not...
  • ...Ask yourself, “Could I rotate the die to a 3 or a 4, so as to get to a running total with a digital root of 1, 5, or 9 (1, 5, 9, 10, 14, 18, 19, 23, 27, or 28)?” If so, do it! If not...
  • ...rotate the die to a 2 or a 5, so as to obtain a running total with a digital root of 8 (8, 17, or 26). Assuming the other player hasn't purposely or accidentally made a good move, this one will still be possible.
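
Here's how those three rules look as a quick Python sketch (my own illustration, not from the post; I've also added a check so that a suggested move never pushes the total past 31):

```python
def digital_root(n):
    """Digital root of a positive integer, via the shortcut 1 + (n - 1) % 9."""
    return 1 + (n - 1) % 9

def quarter_turn_choice(total, top):
    """Suggest a face for the quarter-turn game to 31, applying the three
    rules above in order. `total` is the running total and `top` is the face
    currently showing; a quarter turn can reach any face except the current
    top and the bottom (which is always 7 - top)."""
    available = [f for f in range(1, 7) if f not in (top, 7 - top)]
    # 1. Reach a running total with a digital root of 4 (4, 13, 22, 31).
    for f in available:
        if total + f <= 31 and digital_root(total + f) == 4:
            return f
    # 2. Otherwise, use a 3 or 4 to reach a digital root of 1, 5, or 9.
    for f in (3, 4):
        if f in available and total + f <= 31 and digital_root(total + f) in (1, 5, 9):
            return f
    # 3. Last resort: use a 2 or 5 to reach a digital root of 8.
    for f in (2, 5):
        if f in available and total + f <= 31 and digital_root(total + f) == 8:
            return f
    return None  # no safe move is available from this position

print(quarter_turn_choice(9, 5))   # 4, taking the total to 13 (digital root 4)
```
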
You'll note that I gave you the strategy, but not the explanation. In Sunday's post, I'll delve more into the analysis, including how to play the quarter-turn version if someone chooses a total other than 31!


Tau Day

Published on Sunday, June 26, 2011

Get ready for Tau Day! Tau Day is not today. Tau Day is in two days, on Tuesday. Wait, what?!?

If you've been to this site before, you should be well acquainted with Pi. Like Pi (π), Tau (τ) is a mathematical constant. Its value is simple enough: it's 2π (2 × Pi).

Since Pi Day is 3/14, that makes Tau Day 6/28. So, why should we make such a big deal over Tau? That's the topic of today's Tau Day post!

Bob Palais started the ball rolling in 2001, with his editorial Pi Is Wrong! He wasn't talking about the actual number being a wrong quantity. When you divide a circle's circumference by its diameter, you do get 3.1415... and so on. Palais' point was that it's the relationship to the diameter that's the basic problem.

Think about how you draw a circle with a compass. You place the spike at the center of your circle, and rotate the drawing implement around that point. You're taking advantage of the fact that every point on the circle is the same distance from the center of the circle.

In other words, you're taking advantage of the circle's radius, not its diameter. Of course, if you want the formula for a circle's circumference in terms of its radius (r), you get 2πr. When you start working with the unit circle and radians (the distance of one radius measured along the circumference), you run into 2π again, as the number of radians in a full circle.

In formula after formula, 2π pops up again and again. To use Dr. C. Douglas Haessig's analogy, if Pi is a Hollywood star, then 2 is a constant groupie. Since Tau is 2 × Pi, Tau is basically the idea that these two have been dating for long enough, and it's time for them to get married.
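
To make the comparison concrete, here are the same relationships written both ways; nothing changes except which symbol does the work:

```latex
\begin{aligned}
C &= 2\pi r &&= \tau r \\
\text{one full turn} &= 2\pi \text{ radians} &&= \tau \text{ radians} \\
\text{one quarter turn} &= \tfrac{\pi}{2} \text{ radians} &&= \tfrac{\tau}{4} \text{ radians}
\end{aligned}
```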

What happens when we replace 2π with τ? I'll let Vihart, whose videos I recommended in my previous post, explain further:



I tried to make radian conversion easy with a mnemonic (indeed, using this same video), but in the long run, you can see that Tau is a better way to go. Here are some handy links for further exploring the concepts in the above video:

Intuitive Understanding of Sine Waves
Intuitive Understanding Of Euler’s Formula
The Tau Manifesto

Robert Dixon voices his support here, and even suggests replacing Pi with τ/2:



For a better understanding of how the area is being used here, check out The Story of Pi video, and A Gentle Introduction To Learning Calculus.

Ultimately, Tau may offer a better future through a better understanding of mathematics, and that's definitely something worthy of celebration, and thus worthy of its own day.

This only leaves one problem. Tau is twice as much as Pi, but Tau's Greek letter has only half as many legs. Just for visual consistency, shouldn't π be 2τ, and τ be .5π? Just kidding, Tau Day fans.


5 Fun Teaching and Learning YouTube Channels

Published on Thursday, June 23, 2011

I've always tried to make learning fun, and I've found some kindred souls on YouTube who share the same passion. Check out their YouTube channels and, to quote Bill Cosby, if you're not careful, you just may learn something before it's done.

singingbanana: James Grime's YouTube channel seems like a good place to start, especially considering how many times I've mentioned him here on Grey Matters. He makes learning math fun by teaching the concepts through math puzzles. Try and figure out the one below, and when you think you know the answer (or have decided to give up), you can watch the answer by clicking here:



Vihart: Yes, another Grey Matters favorite is here on the list (What? It's my list and I can play favorites if I want). She's not often seen in her own videos, with this being a rare exception (Is it just me, or does she look AND sound like Anne Hathaway in that video?), but they are fascinating and fun ways to learn about math. In her most recent video, she shows how to prove the Pythagorean Theorem by doing little more than folding and tearing paper:



dsecms: From math, we move on to chemistry class. Mr. Edmonds teaches middle-school chemistry through the use of popular songs. Here's a great example of getting the whole class involved in the song. To help you understand these videos better, here's the first one where he introduces himself and sings about the formula to determine speed:



rodolfo1114: Michael Britt's YouTube channel focuses on various aspects of psychology, and is where the videos are hosted for his Psych Files podcast. You probably won't be surprised to learn that I first ran across this channel when he was discussing the Phonetic Alphabet memory system (shown below). In the following episode, he demonstrates an amazing memory feat using the system:



historyteachers: If chemistry can be taught through song, why not history? These are very well produced videos (as you'll note below) on historical topics ranging from the beginnings of civilization (Sung to "Harajuku Girls" by Gwen Stefani) up to Napoleon Bonaparte (Sung to "Gone Daddy Gone" by the Violent Femmes). Here's my favorite, the life of Martin Luther, to the tune of the Bangles' "Manic Monday":



Do you know of any fun teaching and learning YouTube channels I missed? Let me know about them in the comments!


Every 823 Years?!?

Published on Sunday, June 19, 2011

You've probably received the e-mail that says something like, “July 2011 will have 5 Fridays, 5 Saturdays, and 5 Sundays all in one month. This happens only once in every 823 years.”

A quick double-check of the calendar will verify that July 2011 does indeed have 5 Fridays, 5 Saturdays, and 5 Sundays. But is it really true that this happens only once in every 823 years? Since I teach calendar math on this site, I thought I'd take a closer look at this.

For those who don't already know, over at the Mental Gym section of this site, I teach a method for determining the day of the week for any date in your head. Using the math from this article, I'm going to examine the 823-year claim.

Don't worry, you don't have to have learned the method to follow this explanation. The whole method boils down to adding a key number for the year, month, and date given, and then reducing that to a number of 7 or less to get the day. If you can understand that, you should be able to understand what follows.

Since we're spanning so many centuries with this 823-year claim, let's take a look at the patterns over the centuries. In particular, notice this part:

  • 2300 to 2399 = add 1
  • 2200 to 2299 = add 3
  • 2100 to 2199 = add 5
  • 2000 to 2099 = add 0
  • 1900 to 1999 = add 1
  • 1800 to 1899 = add 3
  • 1700 to 1799 = add 5
  • 1600 to 1699 = add 0
The instruction to add a number is simply a way to compensate for the century when working out a given date. The main thing to notice here is the 1/3/5/0 pattern that repeats. In short, this means that every 400 years, the calendar repeats exactly.

Since July 2011 has 5 Fridays, 5 Saturdays, and 5 Sundays, that also means that July 2411, 2811, 3211, and so on, will also have that same quality. We've already destroyed the 823-year claim!
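
You can verify that 400-year repetition in a few lines of Python using the standard calendar module (my own quick check, not part of the original post):

```python
import calendar

# Weekday of July 1 for years 400 apart (0 = Monday ... 6 = Sunday).
# Because the Gregorian calendar repeats exactly every 400 years,
# each of these prints 4, meaning Friday.
for year in (2011, 2411, 2811, 3211):
    print(year, calendar.weekday(year, 7, 1))
```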

Still, it does seem to be an unusual quality, and obviously doesn't happen too frequently. Just how often does a month have 5 Fridays, 5 Saturdays, and 5 Sundays?

Let's start by taking a look at a month that couldn't possibly have 5 of any day of the week. If you imagine a non-leap year February that begins on Sunday (such as February 2009), you can see that there are exactly 4 of each day of the week, because it only has 28 days. It starts on a Sunday, and ends on a Saturday.

For any 28-day February, whatever day it starts on, the last day will fall on the day of the week before it. If it started on Monday, it would end on Sunday. If it started on Tuesday, it would end on Monday, and so on.

What happens if we add a 29th day, as in a leap year? Then that February will have 5 of whatever day of the week it begins on. If it starts on a Sunday, then it will have 5 Sundays: 1, 8, 15, 22, and 29. This is just as true for any other day of the week.

If we add in a 30th day, then such a month would have 5 of two days of the week. If that month started on a Sunday, then it would have 5 Sundays and 5 Mondays. By the same logic, a 31-day month beginning on a Sunday would have 5 Sundays, 5 Mondays and 5 Tuesdays. Check the calendar for May 2011 to see this for yourself.

The original claim is looking less and less unusual, isn't it? To have 5 of any three consecutive days simply requires a 31-day month, and 7 of the 12 months in any year fit that description.
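
That reasoning boils down to one simple rule: only the first (length − 28) days of the week, counting from whatever day the month starts on, appear five times. Here's that rule as a quick sketch (my own, for illustration):

```python
def five_timers(first_weekday, month_length):
    """Weekdays that appear five times in a month.

    `first_weekday` is the weekday of the 1st (0 = Monday ... 6 = Sunday).
    Only the first (month_length - 28) weekdays, counting from the day the
    month starts on, appear five times; every other weekday appears four times."""
    return [(first_weekday + i) % 7 for i in range(month_length - 28)]

names = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
# A 31-day month that starts on a Friday (weekday 4), like July 2011:
print([names[day] for day in five_timers(4, 31)])   # ['Fri', 'Sat', 'Sun']
```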

Let's take the specific case mentioned in the original claim of 5 Fridays, 5 Saturdays, and 5 Sundays. After all, getting a full extra weekend in a month is quite nice!

What we're looking for, then, is a 31-day month that begins on a Friday. Any month can, of course, begin on any one of the 7 days in a week. It would seem, then, with 7 months of the year having 31 days and a month able to begin on any one of 7 days, that this should happen once every year!

Let's take a closer look, and see if that's true.

See the code numbers for months on this page (scroll down towards the bottom)? Months that share the same key number start on the same day of the week.

Since we're just looking for 31-day months, let's examine just those months a little more carefully.

Examining the calendar patterns closely, we see that, in any given year, March, May, August, and December will always begin on different days of the week (Remember, we're only considering 31-day months). In a non-leap year, July will begin on a 5th day of the week (different from the previous 4), and January and October will share the same starting day of the week, different from the previous 5.

What about leap years? In leap years, October and July switch places. October begins on a day different from all other 31-day months, and January and July begin on the 6th remaining day of the week.

So, in any given year, the 31-day months will only cover 6 different days of the week! There will always be one day of the week on which no 31-day month begins. If that day happens to be a Friday, you won't get your 5 Fridays, 5 Saturdays, and 5 Sundays in that year.

If Friday doesn't show up as the starting day of any 31-day month in a given year (which last happened in 2007), it's more likely to happen in the following year. This is because the same date falls 1 day later in the week the following year (or 2 days later after a leap year).

In short, finding a month with 5 Fridays, 5 Saturdays, and 5 Sundays happens a little over once a year. If January happens to begin on a Friday, then it will happen twice in the same year, with the other occurrence being in either October (in a non-leap year) or July (in a leap year).

Taking a look at the calendar beyond July 2011, we see that it happens again in March 2013, August 2014, May 2015, and January and July of 2016, so that seems about right.
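
If you'd rather not page through calendars, here's a short Python scan (my own check, using the standard calendar module) that turns up exactly that list:

```python
import calendar

# A month gets 5 Fridays, 5 Saturdays, and 5 Sundays exactly when it has
# 31 days and begins on a Friday.
THIRTY_ONE_DAY_MONTHS = (1, 3, 5, 7, 8, 10, 12)

for year in range(2011, 2017):
    for month in THIRTY_ONE_DAY_MONTHS:
        if calendar.weekday(year, month, 1) == calendar.FRIDAY:
            print(calendar.month_name[month], year)
# July 2011, March 2013, August 2014, May 2015, January 2016, July 2016
```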

If you decide you do want to preserve the magic of an 823-year occurrence, you can always offer a 2012 calendar as proof, since 5 Fridays, 5 Saturdays, and 5 Sundays don't happen that year.


Scam School Mathemagic

Published on Thursday, June 16, 2011

Scam School is back, and this week, they're teaching one of my all-time favorite mathematical tricks!

This is one of those tricks that appears simple, and isn't hard to do, but the reason why it even works in the first place is almost as magical as the performance.

Let's get right to the trick itself, shall we?



This is a classic of mathemagic. I've mentioned it many times on this site, and wrote about it in detail last August (You'll note I used the same Fibonacci video in my recent Fractal post).

If you want some practice multiplying by 11, The Math World features a post that includes practice exercises at the end.

Why does multiplying the 7th term by 11 always give the same answer as adding up all 10 numbers? If you work it out algebraically, it's not difficult to see why. Assume we start with 2 numbers, which we'll call a and b. The numbers will then be as follows:

 1.   a   
 2.         b
 3.   a +   b
 4.   a +  2b
 5.  2a +  3b
 6.  3a +  5b
 7.  5a +  8b
 8.  8a + 13b
 9. 13a + 21b
10. 21a + 34b
--------------
    55a + 88b
Now, look at that 7th term, 5a + 8b. If you multiply that term by 11, it would work out as 11(5a + 8b), and multiplied out that comes to 55a + 88b! Like I said earlier, the fact that such a pattern exists in the first place seems magical.
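
If the algebra isn't convincing enough, here's a quick numerical check (my own sketch, not from the post) that the ten rows always total exactly 11 times the 7th row:

```python
import random

def fib_trick_rows(a, b):
    """The ten rows written down in the trick, starting from a and b."""
    rows = [a, b]
    while len(rows) < 10:
        rows.append(rows[-2] + rows[-1])
    return rows

a, b = random.randint(1, 9), random.randint(1, 9)
rows = fib_trick_rows(a, b)
print(sum(rows) == 11 * rows[6])   # True: the total is always 11 times row 7
```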

Let's take another look at the Fibonacci video I mentioned earlier:



Let's see what other interesting patterns we can glean from this. First, the coefficients of a and b follow the Fibonacci sequence: 1, 1, 2, 3, 5, 8, 13, 21, 34. This also means that the ratios of later consecutive terms approach the same ratio found between consecutive Fibonacci numbers, φ (Phi).

Thanks to that, there's a bonus effect you can do with this trick. Assume you've performed the Fibonacci trick, and have set the paper with the numbers on it to the side where you can't see it. Ask the person with the calculator to divide the 10th number by the 9th number, but not to let you see the result. Act as if you're trying to work it out mentally, and strain as you say, "I believe it's something like...1.61?" The fact that you got that far in your head will be very amazing to most people.
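
Here's a quick check (again, my own sketch) of why "something like 1.61" is such a safe guess, at least when the trick starts with single-digit numbers:

```python
# The 10th row divided by the 9th row is (21a + 34b) / (13a + 21b),
# which stays close to phi (about 1.618) for any positive a and b.
for a in range(1, 10):
    for b in range(1, 10):
        ratio = (21 * a + 34 * b) / (13 * a + 21 * b)
        assert 1.61 < ratio < 1.62
print("Every single-digit start gives a ratio between 1.61 and 1.62")
```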

If you like Scam School, and this Fibonacci pattern intrigues you, then you'll probably be thrilled to know that it can be used to win in special versions of Nim!

First, there's Fibonacci Nim. Like most versions of Nim, this is a take-away game in which the last person to remove an object is the winner. The new rules for Fibonacci Nim are that, on the first move, the first player must leave at least 1 object for the other player, and on each successive turn, the next player is limited to removing no more than twice the amount removed by the previous player.

If you'd like to play Fibonacci Nim for yourself, try this Javascript version. Explaining how to win requires more space than I have in this post, but I covered it in detail last September.
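
Just to pin down those rules in code, here's a minimal sketch (mine, for illustration); it only enforces what's legal, and says nothing about the winning strategy covered in the September post:

```python
def legal_takes(remaining, previous_take, first_move):
    """Legal numbers of objects to remove in Fibonacci Nim.

    On the first move you may take anything except the whole pile; after
    that, you may take from 1 up to twice the previous player's take,
    but never more than what remains."""
    if first_move:
        return list(range(1, remaining))                        # must leave at least 1
    return list(range(1, min(remaining, 2 * previous_take) + 1))

print(legal_takes(10, None, True))    # [1, 2, ..., 9] on the first move
print(legal_takes(7, 2, False))       # [1, 2, 3, 4] after the other player took 2
```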

That's how you can use the Fibonacci numbers themselves to win Nim, but did you know there's a version that is based on φ itself? It's rather unusual, as you might guess.

Here is a page with a game called Queen Nim. As stated in the rules, there is a queen at the left-hand side of the board, and the queen can move right or down. The goal of the game is to leave no moves for your opponent; in other words, the winner is the one who makes the last move by putting the queen in the bottom-right corner.

This doesn't seem like Nim, and certainly doesn't seem like it could have anything to do with φ! For one thing, the shrinking options for moves after each turn effectively replace the removal of objects, and make this version of Nim more subtle and harder to work out.

In this explanation, excerpted from Martin Gardner's book, Penrose Tiles to Trapdoor Ciphers, you not only learn how to find safe spaces for the game, but also exactly how φ comes into it! Thankfully, once you've memorized the safe spaces for a standard board, you don't need to do any mental math involving φ.
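
For the curious, here's roughly where φ shows up, sketched under the assumption that this is the standard Wythoff-style "Corner the Lady" game Gardner analyzes (in which the queen may also move diagonally toward the goal corner). Measuring each position by how far the queen still has to travel in each direction, the safe positions are the pairs (⌊nφ⌋, ⌊nφ²⌋) and their mirror images:

```python
from math import floor, sqrt

PHI = (1 + sqrt(5)) / 2   # the golden ratio, about 1.618

# Safe positions, measured as (rows still to travel, columns still to travel).
safe = [(floor(n * PHI), floor(n * PHI ** 2)) for n in range(8)]
print(safe)   # (0, 0), (1, 2), (3, 5), (4, 7), (6, 10), (8, 13), (9, 15), (11, 18)
```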


Still More Quick Snippets

Published on Sunday, June 12, 2011

This month's snippets are all about online widgets, apps, and even toys that help you understand new concepts more effectively.

• If you enjoyed my recent post on the Prisoner's Dilemma, there are a few places you can try out computer simulations online. Dave Reed's site features this simple Prisoner's Dilemma Simulation. A far more robust version can be found here, including historical results, and a wider variety of strategies to try. You'll definitely want to click the Help button to learn how to use this one.

• I've mentioned BetterExplained.com numerous times on this blog. They have a new project up called “Aha”. It's a place for short posts, like Twitter, but specifically posts that helped make a difficult concept easier to understand. Since the idea is a little tricky to describe, here's a video that explains the concept better:



• If you've ever played with Wolfram Alpha, you already know its amazing power (if not, here's a good starter video). Now, they've taken this search engine one step further by allowing you to create widgets from your Wolfram Alpha queries! This allows you to embed a specialized knowledge base on your website, as explained in this video:




Iteration, Feedback, and Change: Chaos Theory

Published on Thursday, June 09, 2011

Before I wind this series up and tie everything together, it's time to go back to the beginning. Not the beginning of this series, but to the beginning of our desire to understand the universe.

When it came to explanations for our universe, the first one we had was the deity theory, as described in this post by Douglas Adams.

From there, we gradually saw how math accurately reflected the real world, and came to think of nature as a clockwork universe. In the Infinitely Reasonable episode of James Burke's The Day The Universe Changed, you can learn how society as a whole came to this belief, right up through the days of Sir Isaac Newton.

Newton brought a certain clarity to physics. Perhaps that clarity was its own undoing, because with better tools, that classical Newtonian clockwork universe began to fall apart. In another episode of The Day The Universe Changed, entitled Making Waves, we learn about the problems with Newtonian physics, and just why Einstein's new relativistic physics was so important:


The Day The Universe Changed 09-10 Making Waves... by costello74

For a more detailed explanation of Einstein's theories, as well as physics during and after Einstein's life, I highly recommend watching the episode of The Universe called Beyond the Big Bang.

This new relativistic universe is still being explored. It's a more complicated and chaotic picture, so what exactly does that mean for how the universe works?

Not surprisingly, a new branch of science came along to study exactly this question. It is called chaos theory. Most people were first introduced to the idea of chaos theory in the movie Jurassic Park, where Jeff Goldblum played it off as if it were just preparing for the unexpected.

Chaos theory is actually quite a bit more substantive than that. It's about examining dynamic processes that are highly affected by their initial conditions, and the great effects even the most minor conditions can have on these processes.

We've talked about artificial life, evolution, game theory, fractals, and now physics; chaos theory is the field that ties them all together in a coherent package. In the documentary below, The Secret Life of Chaos, you'll learn about its amazing and tragic beginning, and how it encompasses what I've been discussing in this series.


The.Secret.Life.of.Chaos. by costello74

This is an excellent framework for the strange new world in which we find ourselves, and for examining how it unfolds.

As you read this, you're taking advantage of one of the results of iteration, feedback, and change - the internet!

As a user of the internet, you're already better equipped to understand the potential power and effects of chaos theory than anyone who lived before you. The internet was designed as a system that could remain functional by dynamically routing around physical damage. That basic design philosophy has remained, but it has grown into a system that works around any kind of damage.

In what I think of as one of the more underrated stories of 2011, Twitter provided a wonderful example of this. Canada has a ban on transmitting election results to a wide audience before all the country's polls are closed. However, since you can read tweets from anywhere in the world at any given time, this law proved to be of little concern. Canadians with knowledge of election results simply e-mailed what they knew to people in other countries, where that knowledge was tweeted without fear of consequences, for anyone to read anywhere in the world (especially in Canada).

It's amazing how much development you can find with just a little searching. Here's a Discover Magazine article on robots that used evolutionary programming to teach themselves to lie. Many were surprised 4 years ago, when the mass-edited Wikipedia was actually found to be more accurate than traditional encyclopedias edited by dedicated experts. Even more recently, we're learning about how the power of the individual can make the difference between “the wisdom of the crowds” and a dumb herd.

I hope you've enjoyed this series, and that it gives you food for thought. If you've learned anything from this series, or have any insights you'd like to share, I'd love to hear about it in the comments!


Iteration, Feedback, and Change: Fractals

Published on Saturday, June 04, 2011

Since we've talked about artificial life, biological life, and even behavior and decisions, you might start getting the idea that the impressive developments of iteration, feedback, and change require a living entity to work.

If you take a closer look, you'll find that's not the case.

All you need is an object and a process that will act on that object. To allow for iteration, you simply need time and the potential for the process to be repeated over that time. The process involved will naturally provide the feedback, and the altered state is, of course, the change itself.

This is why so much of the development of complexity out of simplicity that we've seen can be explored with computers. Conway's Game of Life and the prisoner's dilemma tournaments were both done entirely on computers. Neither simulation could be said to contain actual living elements, but the way those elements acted effectively makes us question our definition of life.

Take a look around you right now at all the man-made items you see. Consider their shapes, and you'll probably notice lots of straight lines, smooth curves, arcs, and easily-identifiable angles (45 degrees, 90 degrees, etc.). It's not too hard to realize that you could, with a little work, duplicate these shapes by working out their formulas.

Now go outside, and take a look at the shapes produced by nature, such as plants, animals, mountains, water flows, and more. The shapes in nature, while still retaining some design elements, are a lot less organized. You may be thinking that you either couldn't develop a formula for these shapes, or that only one formula comes to mind: that of the Fibonacci Numbers and the Golden Mean:



The description I gave earlier involving objects, processes, and time makes it sound like these qualities could be examined in a more rigorous, mathematical way, especially considering that they're not features unique to living beings.

The video above on Fibonacci numbers also suggests that there's a mathematical process at work in nature, but what is it? The tools to begin understanding shapes in nature, much like the tools of game theory, have only been developed recently.

It took the vision of the late Benoit Mandelbrot to develop this new field of math. To do so, he had to look beyond just the descriptions of the shapes themselves, and take the process that created them into consideration. By doing so, he developed a field called fractal geometry, explained in this NOVA documentary, Hunting the Hidden Dimension:


Fractals: Hunting The Hidden Dimension by silichip
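
As a tiny taste of what "taking the process into consideration" means: the famous Mandelbrot set is nothing more than a record of what happens when the single step z → z² + c is repeated over and over. A rough sketch (my own, not from the documentary):

```python
def stays_bounded(c, iterations=100):
    """Crude Mandelbrot membership test: iterate z -> z*z + c and watch for escape."""
    z = 0
    for _ in range(iterations):
        z = z * z + c
        if abs(z) > 2:          # escaped, so c is outside the set
            return False
    return True

# A rough text rendering of the set, built from nothing but that one repeated rule.
for row in range(-10, 11):
    print("".join("#" if stays_bounded(complex(col / 30, row / 15)) else " "
                  for col in range(-60, 21)))
```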

As you've seen, fractals have already proven themselves as an essential component of cellphones, and it doesn't seem like it will be long before, say, the fractal dimension of your blood flow and the true nature of your heartbeat will be as much a part of your medical record as your temperature. Who knows what other fields will be affected and how? We're just beginning to see the possibilities.

If you'd like to delve a little deeper into the nature of fractals, as opposed to just their effects, check out Arthur C. Clarke's documentary, Fractals: The Colors of Infinity.

The study of fractal geometry seems nice, and it's very impressive that it can be applied in such amazing ways to so many aspects of the real world. What is it, though, that makes fractals so applicable to the real world? In the next and final post in this series, we're going to take a closer look at this strange connection, and what it means for us and our future.


Iteration, Feedback, and Change: Prisoner's Dilemma

Published on Thursday, June 02, 2011

Now we've seen the amazing power of complexity arising from simplicity, and learned how it was primarily the study of evolution that made this power part of our mental toolbox. You've also seen the effect that Haeckel, Sumner, and Marx had when applying this to society at large.

There's still the problem of cooperation. Why did early man, and even many animals, ever come around to cooperating in the search for food and other resources?

If you'd asked an astronomer before Copernicus about their field, they would've explained that astronomy was the study of how the planets and the sun revolved around the Earth, in order to develop a more accurate calendar. The very definition prevented most researchers from even considering that the Earth might go around the sun, instead.

Similarly, if you'd asked a biologist about cooperation before 1960, you'd most likely get the reply that cooperation wasn't even something a biologist would study. It was outside their field. Biology was so intertwined with the concept of “survival of the fittest”, that cooperation wasn't something they'd consider too much.

Often misunderstood, especially in attempts to apply evolutionary principles to society, was the fact that “fittest” didn't refer to the most physically fit, but rather those organisms that were best adapted to change.

So what happened that suddenly changed our minds about the nature of cooperation? The development of a new field in mathematics called game theory. Game theory really gets its name from studying the strategies of multiple players who are working to win a game, and who may even use deceit to try and win (such as bluffing in poker). Since it studies behaviors and decisions, it can apply to other things like economics, politics and more. Similarly, geometry started out as a way to analyze real estate, but isn't limited to real estate problems.

John von Neumann, along with Oskar Morgenstern, had been developing effective tools to analyze strategies and decisions since the 1920s, and in 1944, during World War II, they published a major work on the topic, titled Theory of Games and Economic Behavior.

Think about the time at which this is going on. A year after von Neumann and Morgenstern's work was released, Nazi Germany surrenders, and Japan surrenders after the US drops atomic bombs on Hiroshima and Nagasaki. The US has become the only atomic superpower, and expects to remain so for quite some time. This all changes just 4 years later, on August 29, 1949, when the Soviet Union tests its first nuclear weapon.

Now that there were two nuclear superpowers, the US had to consider its options very carefully, as did the Soviet Union. With knowledge of the tools of game theory, John von Neumann and many other top minds initially supported the idea of “preemptive war”, the idea that America should attack first without provocation, so as to have the best chance at winning a nuclear war.

Just a few months later, however, in January 1950, Merrill Flood and Melvin Dresher turned game theory upside-down when they proposed a game that would be popularized as the prisoner's dilemma.

The situation is set up like this:

Imagine you and another person are arrested by the police. The police don't have sufficient evidence for a conviction, so the police split you and this other person up into different rooms. They offer you and the other person the same deal:

• If both you and the other person stay silent, both of you will wind up spending 6 months in jail (the police have enough evidence for a lesser charge).
• If both you and the other person agree to testify against each other, you will both spend the next 2 years in jail.
• If one agrees to testify for the prosecution, and the other person remains silent, the one who testifies will go free, while the one who remains silent will go to jail for 10 years.

What makes this situation so interesting is that the best outcome for both players is to cooperate, but since each player has to make a decision without knowledge of the other player's decision, the most rational choice is for each player to talk. The most rational choice, however, winds up in a less-than-ideal result for both players.
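
Here's that payoff structure as a small sketch (my own, with all the sentences converted to months so they're easy to compare); it shows why testifying is the individually rational choice even though it leaves both players worse off than mutual silence:

```python
# Sentences in months for each pair of choices: (your sentence, their sentence).
# "C" = stay silent (cooperate with your partner), "D" = testify (defect).
SENTENCES = {
    ("C", "C"): (6, 6),       # both silent: 6 months each
    ("C", "D"): (120, 0),     # you stay silent, they testify: 10 years for you
    ("D", "C"): (0, 120),     # you testify, they stay silent: you walk free
    ("D", "D"): (24, 24),     # both testify: 2 years each
}

# Whatever the other player does, testifying earns you a shorter sentence...
for their_choice in ("C", "D"):
    print(their_choice,
          SENTENCES[("D", their_choice)][0] < SENTENCES[("C", their_choice)][0])
# ...and yet (D, D) leaves both of you worse off than (C, C).
```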

Further confounding things was the fact that, when the same two people played this repeatedly, it wasn't uncommon to see frequent cooperation between the two. If the most rational choice in a single play is to rat out your partner, why does cooperation show up so much when it's played repeatedly?

Originally, the prisoner's dilemma was considered a paradox to be solved. The applications to the immediate nuclear situation were quite clear, so there was obviously great interest in analyzing this game. We'd quickly learn that this was more a reflection of much of the human condition, instead.

In the following documentary, Nice Guys Finish First, you'll discover the surprising lessons and applications of the prisoner's dilemma. Especially surprising is what happened in 1980, when a concentrated search for the most effective strategy for the iterated prisoner's dilemma began.


Bbc Horizon - Nice Guys Finish First (Richard... by singaporegeek

To learn more about the prisoner's dilemma, I also suggest watching For All Practical Purposes: Prisoners Dilemma and Mathematics Illuminated: Game Theory as clear and excellent resources.

In his book The Evolution of Cooperation, author Robert Axelrod analyzed all the top-scoring strategies, and found the following four qualities to be essential and effective.

NICE: By far, the most important quality was that the strategy start out by cooperating with the other player. Ironically, this winds up being a purely self-interested play, even though the “survival of the fittest” principle would lead you to expect otherwise.

RETALIATING: A successful strategy will always betray the other person after it has been betrayed, to show that there will be a price for betrayal.

FORGIVING: When an opponent returns to cooperation, an effective strategy will also return to cooperation. The balance of forgiveness and retaliation helps establish that a player can be trusted.

NON-ENVIOUS: Finally, a strategy shouldn't strive to outscore its opponent's strategy. Once this happens, the effectiveness of the other qualities becomes diluted.
That's why the “Tit For Tat” strategy was so effective. It combined these qualities more effectively than any of the other strategies. Interestingly, even people trying to cheat in these prisoner's dilemma competitions have found that “Tit For Tat” is still more effective (usually because cheating strategies aren't non-envious).

The first time the 200-game tournament was held, it was a complete surprise that “Tit For Tat” was the dominant strategy. The second time, programmers developed their strategies to beat “Tit For Tat”, usually with variations like “Two Tits For Tat” (Defect twice after any defection), “Tit For Two Tats” (Ignore isolated defections, and punish only after two consecutive defections), and “Almost Tit For Tat” (Tit for Tat that throws in a random defection occasionally). The original “Tit For Tat” strategy still came out on top!
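
To see some of these strategies square off, here's a toy round-robin sketch. It's my own illustration: the post doesn't give point values, so I'm assuming the conventional Axelrod scoring (3 points each for mutual cooperation, 1 each for mutual defection, 5 for a lone defector, 0 for a lone cooperator), and I've left out the random "Almost Tit For Tat" variant to keep the run deterministic. Even in this tiny lineup, the nice, retaliating strategies finish at the top:

```python
from itertools import combinations_with_replacement

# Assumed Axelrod-style scoring: (my points, their points) for each pair of moves.
SCORE = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
         ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(mine, theirs):
    return "C" if not theirs else theirs[-1]          # copy their last move

def two_tits_for_tat(mine, theirs):
    return "D" if "D" in theirs[-2:] else "C"         # punish twice per defection

def tit_for_two_tats(mine, theirs):
    return "D" if theirs[-2:] == ["D", "D"] else "C"  # ignore isolated defections

def always_defect(mine, theirs):
    return "D"

def always_cooperate(mine, theirs):
    return "C"

def match(strategy_a, strategy_b, rounds=200):
    """Play a 200-round iterated prisoner's dilemma; return both totals."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        points_a, points_b = SCORE[(move_a, move_b)]
        score_a, score_b = score_a + points_a, score_b + points_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

strategies = [tit_for_tat, two_tits_for_tat, tit_for_two_tats,
              always_defect, always_cooperate]
totals = {s.__name__: 0 for s in strategies}
for a, b in combinations_with_replacement(strategies, 2):
    score_a, score_b = match(a, b)
    totals[a.__name__] += score_a
    if a is not b:                     # count the self-play match only once
        totals[b.__name__] += score_b

for name, total in sorted(totals.items(), key=lambda item: -item[1]):
    print(name, total)
```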

For the third competition, there was a new and interesting twist. Several 200-game tournaments would be played. The first would be played as before, with 1 copy of each strategy in play. After 200 games, the points would be tallied, and strategies with 0 points would be eliminated from the next round. The surviving strategies would then be represented in the next round in proportion to their scores in the first. For example, if "Tit For Tat" scored 15% of all the points in the first round, then 15% of the strategies in the next round would be copies of "Tit For Tat"; if a poor strategy scored only 1% of the points, then only 1% of the strategies in that next round would be copies of that poor strategy.

This gave the game a new dimension, similar to the "artificial selection" used in Conway's Game of Life. What happened? In the earlier rounds, the weaker strategies died off quickly, while "Tit For Tat" and some more exploitative strategies, such as "Always Defect", survived ("Always Defect" always did great against the very weak "Always Cooperate").

After several more generations of weak strategies dying off, something surprising began to happen. The predatory strategies began to die off because the weaker strategies on which they fed became rarer and rarer. "Tit For Tat", going up against the surviving "Always Defect" strategies, would wind up always defecting too, minimizing the advantage of the latter strategy.

Eventually, “Tit For Tat” wound up dominating the game yet again. With this natural selection addition, you could dramatically see the pressure it puts on alternative strategies to cooperate.

For a more detailed course in game theory in general, I recommend William Spaniel's Game Theory 101 site, Ben Polak's Game Theory Course at Yale, as well as the documentary about John Nash, A Brilliant Madness, which can help clear up any misconceptions you may have picked up from watching A Beautiful Mind. You also might be surprised at the amount of game theory references in pop culture.

At this point, we not only have nature demonstrating how life behaves and interacts, but a mathematical understanding that gave us a more complete picture of the principles and results.

As I've repeatedly said, game theory, and especially the prisoner's dilemma, are more wide-ranging than just simple games. The question then becomes, how can we use the amazing powers of iteration, feedback, and change to improve our lives as we move into the future?

The answer required development of yet another new branch of mathematics, and is the topic of the next post in this series.