Thu 17 Oct 2013

In my last post, I explained the game theory mathematics of the standard game of Rock Paper Scissors (RPS): what the Nash equilibrium is and how it works. Now I will show a similar analysis, but based on a variant of RPS where winning with rock gets you 1 point, winning with scissors gets you 2 points, and winning with paper gets you 3 points.

We can solve for the Nash equilibrium by doing a probability tree of all the different possible combinations, as before with the standard RPS. In instances where you win, add that many points. When you lose, subtract that many points (your opponent gaining points is equivalent to you losing points). As before, we will choose rock, paper, and scissors according to some random distribution, where R is our probability of choosing rock, P is our probability of choosing paper, and S is our probability of choosing scissors. Similarly we define the variables R', P', and S' to be the probabilities chosen by our opponent. Doing so, we get the following expected point gain each time the game is played:

E = 0·RR' − 3·RP' + 1·RS' + 3·PR' + 0·PP' − 2·PS' − 1·SR' + 2·SP' + 0·SS'

Remove the zero terms and then group them by R', P', and S':

E = R'(3P − S) + P'(2S − 3R) + S'(R − 2P)

For the equilibrium condition, we want the total to be 0, and we want it to be independent of whatever our opponent chooses for R', P', or S'. We do that by saying that each of the terms in the parentheses must be equal to 0. This gives us 3 equations:

3P − S = 0
2S − 3R = 0
R − 2P = 0

Solve the 3 equations for the 3 unknowns (together with the constraint that the probabilities sum to 1), and you get R = 1/3, P = 1/6, and S = 1/2. R + P + S = 1, as it should. You can substitute these into the original equation and prove to yourself that no matter what your opponent chooses for R', P', and S', the expected return is always 0.
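To double-check that numerically, here is a short Python sketch (the payoff table and the names are my own, not from the post) verifying that the mix (R, P, S) = (1/3, 1/6, 1/2) breaks even against an arbitrary opponent:

```python
from fractions import Fraction

# Payoff to us for (our move, their move): winning with rock is worth 1,
# scissors 2, paper 3; losses count the same amounts against us; ties are 0.
PAYOFF = {
    ('R', 'R'): 0, ('R', 'P'): -3, ('R', 'S'): 1,
    ('P', 'R'): 3, ('P', 'P'): 0, ('P', 'S'): -2,
    ('S', 'R'): -1, ('S', 'P'): 2, ('S', 'S'): 0,
}

NASH = {'R': Fraction(1, 3), 'P': Fraction(1, 6), 'S': Fraction(1, 2)}

def expected_gain(ours, theirs):
    """Expected point gain per game for mixed strategy `ours` vs `theirs`."""
    return sum(ours[a] * theirs[b] * PAYOFF[a, b] for a in 'RPS' for b in 'RPS')

# Whatever mix the opponent picks, the Nash mix earns exactly 0 on average.
opponent = {'R': Fraction(7, 10), 'P': Fraction(1, 5), 'S': Fraction(1, 10)}
print(expected_gain(NASH, opponent))  # -> 0
```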

So… if you’re one of the 3 people on the entire internet who is still reading this thing, you may be asking yourself, “What about when my opponent *doesn’t* choose the Nash equilibrium? Is there an optimum that will maximize my expected score?”

It turns out there is, and the answer is quite simple, though getting there takes a bit of algebra. Take the formula for the expected point gain as before (terms equal to 0 eliminated):

E = RS' − 3·RP' + 3·PR' − 2·PS' + 2·SP' − SR'

Use S = 1 − R − P and S' = 1 − R' − P' to eliminate S and S' so it’s only in terms of R, P, R', and P', and define this as a function f(R, P):

f(R, P) = (1 − 6P')·R + (6R' − 2)·P + 2P' − R'
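To make sure no terms were dropped in that substitution, here is a small Python sketch (function names are mine) comparing the reduced form against the full six-term sum at an arbitrary point:

```python
from fractions import Fraction

def f(R, P, Rp, Pp):
    # Reduced two-variable form; Rp, Pp are the opponent's rock/paper odds.
    return (1 - 6*Pp)*R + (6*Rp - 2)*P + 2*Pp - Rp

def expected_gain(R, P, Rp, Pp):
    # Full six-term sum, with S = 1 - R - P and S' = 1 - R' - P'.
    S, Sp = 1 - R - P, 1 - Rp - Pp
    return R*Sp - 3*R*Pp + 3*P*Rp - 2*P*Sp + 2*S*Pp - S*Rp

# Spot-check that the reduced form agrees with the full sum.
R, P, Rp, Pp = Fraction(1, 4), Fraction(1, 4), Fraction(1, 2), Fraction(1, 5)
print(f(R, P, Rp, Pp) == expected_gain(R, P, Rp, Pp))  # -> True
```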

Now we have a surface function of two variables, R & P, with R' and P' being constants. We can imagine this being in a 2D domain with R as the x-axis and P as the y-axis. So the domain in question will be a triangle with vertices at the points (0,0), (1,0), and (0,1). The point (0,0) corresponds to R = 0 and P = 0 (and therefore S = 1). Similarly, (1,0) corresponds to R = 1 and S = 0, and (0,1) corresponds to P = 1 and S = 0.

We could then take the calculus approach and find the local maxima of f, but we really don’t have to do that. Looking at f, it is linear w.r.t. both R and P, and so f is a flat plane, tilted in some direction determined by R' and P'. Also we know from calculating the Nash equilibrium that f(1/3, 1/6) = 0 (i.e. when we choose the Nash equilibrium the expected point gain is 0), and that when R' = 1/3 and P' = 1/6 the plane will be coplanar with the X-Y plane [i.e. when our opponent chooses the Nash equilibrium then f will always be zero no matter what we choose, and that is only possible when the flat plane is zero everywhere].

Any deviation from those values by R' and P' will cause the plane to tilt in some direction. Since it is now tilting, there will be a direction that increases the value of f, and since f has a constant slope everywhere, the maximum within the triangle-shaped domain must be at one of the points (0,0), (1,0), or (0,1) [or perhaps two of the points will share the same maximum value].

So the optimal choice will either be (0,0) (always scissors, with S = 1), (1,0) (always rock, with R = 1), or (0,1) (always paper, with P = 1). Simply evaluating f at these three corners and seeing which one maximizes the expected point gain will give you your choice.
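That corner-evaluation recipe is short enough to write out (a sketch; the helper name is hypothetical):

```python
def f(R, P, Rp, Pp):
    # Reduced expected-gain plane; Rp, Pp are the opponent's rock/paper odds.
    return (1 - 6*Pp)*R + (6*Rp - 2)*P + 2*Pp - Rp

def best_pure_response(Rp, Pp):
    """Evaluate f at the three corners of the domain triangle, keep the best."""
    corners = {'scissors (0,0)': (0, 0), 'rock (1,0)': (1, 0), 'paper (0,1)': (0, 1)}
    return max(corners, key=lambda name: f(*corners[name], Rp, Pp))

# e.g. against an opponent playing rock half the time and paper a quarter:
print(best_pure_response(0.5, 0.25))  # -> paper (0,1)
```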

So for example, here is the surface for f we get when our opponent chooses rock, paper, and scissors with equal frequency, R' = P' = S' = 1/3. In our equation S' was eliminated, so for calculating f we just need to say that R' = P' = 1/3, which reduces f to f(R, P) = 1/3 − R:

In this plot the gray triangle is the original domain in the previous image, the red triangle is the actual surface of f, and the black arrow is a normal vector to the plane coming from the Nash equilibrium point at (1/3, 1/6, 0).

It appears that the red triangle is tilted downward in the R direction, so that both (0,0) and (0,1), or in other words playing scissors every time or paper every time (or some mix of the two), will give you the maximum expected point gain of 1/3 every time that you play.
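Plugging the three corners into f for this uniform opponent confirms it (a quick check with Python’s exact fractions; the variable names are mine):

```python
from fractions import Fraction

def f(R, P, Rp, Pp):
    # Reduced expected-gain plane; Rp, Pp are the opponent's rock/paper odds.
    return (1 - 6*Pp)*R + (6*Rp - 2)*P + 2*Pp - Rp

third = Fraction(1, 3)
# Against the uniform opponent f collapses to 1/3 - R, so the whole
# R = 0 edge of the triangle (scissors, paper, or any mix) pays 1/3.
print(f(0, 0, third, third))  # scissors -> 1/3
print(f(0, 1, third, third))  # paper    -> 1/3
print(f(1, 0, third, third))  # rock     -> -2/3
```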

Now what if our opponent chooses a strategy that is *close* to but not quite the Nash equilibrium? How much of a point gain can we expect by using the optimal strategy?

Here is the surface for f if our opponent chooses R' = 1/3 + 0.01 and P' = 1/6 − 0.01, which gives f(R, P) = 0.06R + 0.06P − 0.03:

The slope of the response surface is much less, reflecting the fact that our opponent’s choice of R', P', and S' is much closer to the Nash equilibrium. In this case, it turns out that choosing either rock or paper every time will give the optimal point gain, an expected gain of only 0.03 each time we play.
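Evaluating f at the corners for this slightly-off opponent bears that out (again a small Python sketch, with names of my choosing):

```python
from fractions import Fraction

def f(R, P, Rp, Pp):
    # Reduced expected-gain plane; Rp, Pp are the opponent's rock/paper odds.
    return (1 - 6*Pp)*R + (6*Rp - 2)*P + 2*Pp - Rp

# Opponent slightly off Nash: R' = 1/3 + 0.01, P' = 1/6 - 0.01.
Rp = Fraction(1, 3) + Fraction(1, 100)
Pp = Fraction(1, 6) - Fraction(1, 100)
print(float(f(1, 0, Rp, Pp)))  # pure rock     -> 0.03
print(float(f(0, 1, Rp, Pp)))  # pure paper    -> 0.03
print(float(f(0, 0, Rp, Pp)))  # pure scissors -> -0.03
```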

Of course choosing rock or paper every time would be something that our opponent, if not a mindless robot, would be able to easily capitalize on. What if instead we only slightly perturb *our* choice of (R, P) away from the Nash equilibrium like our opponent did, but perturb it in the direction that gives us the maximum expected point gain?

The distance we move from (1/3, 1/6) is the same as our opponent moved. He moved 0.01 in the R-direction and −0.01 in the P-direction, for a total distance of √(0.01² + 0.01²) = 0.01√2 ≈ 0.0141. We know that playing rock or paper every time will maximize our expected score; this corresponds to the line R + P = 1. The shortest path toward this line is to move perpendicular to it, which is the vector direction (1/√2, 1/√2). So this means we should choose the point (1/3 + 0.01, 1/6 + 0.01). If we do so, our expected point gain each time we play the game is f(1/3 + 0.01, 1/6 + 0.01) = 0.06(1/2 + 0.02) − 0.03 = 0.0012.
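That last bit of arithmetic checks out exactly (a verification sketch with hypothetical names, using exact fractions):

```python
from fractions import Fraction

def f(R, P, Rp, Pp):
    # Reduced expected-gain plane; Rp, Pp are the opponent's rock/paper odds.
    return (1 - 6*Pp)*R + (6*Rp - 2)*P + 2*Pp - Rp

eps = Fraction(1, 100)
Rp, Pp = Fraction(1, 3) + eps, Fraction(1, 6) - eps  # opponent's perturbed mix
R, P = Fraction(1, 3) + eps, Fraction(1, 6) + eps    # ours, nudged toward R + P = 1
print(float(f(R, P, Rp, Pp)))  # -> 0.0012
```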

My next question is then, how many games do we have to play in order to have, say, a 95% confidence that our score will be greater than our opponent’s? Remember that what we calculated above is the *expected* gain, which is kind of like taking a series of many, many games and taking the average. The outcome of any one game is random, but after many games we expect to pull ahead slightly. However, I’m really rusty on this kind of math and I don’t remember how to do it. I’ll have to review a bit.
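In the meantime, one way to get a feel for it is simulation. Here is a rough Monte Carlo sketch (hypothetical names, and an empirical stand-in rather than the proper confidence calculation) that estimates how often our cumulative score ends up ahead after a given number of games:

```python
import random

# Payoff to us per game (negative values are points the opponent wins).
PAYOFF = {
    ('R', 'R'): 0, ('R', 'P'): -3, ('R', 'S'): 1,
    ('P', 'R'): 3, ('P', 'P'): 0, ('P', 'S'): -2,
    ('S', 'R'): -1, ('S', 'P'): 2, ('S', 'S'): 0,
}

def win_prob(n_games, ours, theirs, trials=2000, seed=1):
    """Estimate P(our total > opponent's total) after n_games; since the
    game is zero-sum, that is just P(summed payoff > 0)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        diff = sum(PAYOFF[rng.choices('RPS', weights=ours)[0],
                          rng.choices('RPS', weights=theirs)[0]]
                   for _ in range(n_games))
        wins += diff > 0
    return wins / trials

# Our perturbed mix vs the opponent's perturbed mix from above:
ours = (1/3 + 0.01, 1/6 + 0.01, 1/2 - 0.02)
theirs = (1/3 + 0.01, 1/6 - 0.01, 1/2)
print(win_prob(1000, ours, theirs))
```

Sweeping n_games upward until the estimate crosses 0.95 would give a crude answer to the question, at the cost of a lot of simulated games.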