
Game theory... is it useful?

I'm not sure exactly what "rational" means, as we're using it here. I think it would be interesting to try to formulate a careful definition.

My understanding is that "rational" is actually shorthand for a number of concepts. In general, a "rational" player is knowledgeable (she understands the rules of the "game" and the various options available to her), self-aware (she understands the payoff matrix and will act to maximize her expected payoff), mathematically capable (her calculations of "expected payoff" are correct), and, finally, risk-neutral (meaning that she's only interested in the "expected" payoff and is not concerned with risk management).

But the theory of games itself is essentially a definition of what "rational" behavior is -- really, under the standard formulation, the "rational" player is the one who does what game theory says she should do. The point is that if you do something that does not maximize your expected payoff, you are doing something wrong and "irrational." The question, of course, is what should you do to achieve that maximum?

But, in the meantime, it seems clear to me that the assumption of rationality, together with the assumption that you can't control the other person's choice, implies that, when trying to decide what choice to make, you should compare outcomes that are identical except for your choice, because your choice is all that you control.

It's actually a little simpler than that. A fundamental concept of game theory (from the original Von Neumann and Morgenstern formulation) is that of "domination." Strategy X "dominates" strategy Y (X >> Y) if and only if, for every possible situation, the payoff of strategy X is greater than or equal to the payoff for strategy Y (and they differ in at least one situation). It's an easy theorem that in such a situation, a "rational" player should never play the dominated strategy.
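
The definition translates almost directly into code. Here's a minimal sketch in Python (the payoff numbers and strategy names are made up purely for illustration):

```python
def dominates(payoff_x, payoff_y):
    """True if strategy X weakly dominates strategy Y: X does at least as
    well as Y in every situation and strictly better in at least one."""
    at_least_as_good = all(x >= y for x, y in zip(payoff_x, payoff_y))
    strictly_better_somewhere = any(x > y for x, y in zip(payoff_x, payoff_y))
    return at_least_as_good and strictly_better_somewhere

# Hypothetical payoffs, indexed by the other player's possible actions.
x = [3, 1]
y = [2, 0]
print(dominates(x, y))  # True -- a rational player should never play Y
```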

Notice that in this formulation, there is no real notion of "what the other player thinks," because it's not necessary. Your opponent may be rational, your opponent may not be rational -- you neither know nor care.

And, obviously, the strategy of "confess" dominates "remain silent." Q.E.D.
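
To make that concrete, here's a sketch using the usual textbook prison terms; the exact numbers are not from this thread, but any payoffs with the standard ordering give the same result:

```python
# Standard textbook prison terms (written as negative utilities); the exact
# numbers are illustrative only.
YEARS = {                        # (my move, other's move) -> my prison term
    ("confess", "confess"): 5,
    ("confess", "silent"):  0,   # I walk free
    ("silent",  "confess"): 10,
    ("silent",  "silent"):  1,
}

def my_payoff(me, other):
    return -YEARS[(me, other)]

# Whatever the other prisoner does, confessing is strictly better for me here.
for other in ("confess", "silent"):
    assert my_payoff("confess", other) > my_payoff("silent", other)
print("'confess' dominates 'remain silent'")
```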
 
Notice that in this formulation, there is no real notion of "what the other player thinks," because it's not necessary. Your opponent may be rational, your opponent may not be rational -- you neither know nor care.
Is this the approach taken by game theory in general, or just in the specific case of the Prisoners' Dilemma?

In the case of the Prisoners' Dilemma, I agree that ultimately it doesn't matter whether the other player is assumed to be rational or not, although the case is a bit tricky and I do see Hofstadter's point. But in some cases, it does matter whether I assume the other player to be rational or whether I don't make that assumption. For example, consider AmateurScientist's scenario, in which the best outcome for both players results if they both remain silent. So, remaining silent would seem to be the rational choice, but only if each player can assume that the other player will make the same rational choice. If, on the other hand, the other player will confess, or, even, if there's a reasonable chance that he will confess, then it is no longer a good idea for me to remain silent.

What is the "official" opinion of game theory in this situation? When choosing a rational strategy for myself may I assume that the other player is likewise rational, or am I supposed to pick a strategy that will work as well as possible regardless of what the other player does?

AmateurScientist said that the players should confess. So I guess he was going for a strategy that doesn't make assumptions about the other player's rationality.

This situation is one where it does make sense to follow gnome's idea of remaining silent if you believe that the other player is rational enough also to remain silent, but confessing otherwise.
 
But suppose you knew that the other prisoner had read Hofstadter, was totally convinced by his reasoning, and was therefore sure to remain silent. Wouldn't it still be better for you to confess?

Of course, it would.

It seems to me that Hofstadter's reasoning is equivalent to the assumption that you can control the other person's choice, which everyone agrees you can't do.

I'm not sure exactly what "rational" means, as we're using it here. I think it would be interesting to try to formulate a careful definition. But, in the meantime, it seems clear to me that the assumption of rationality, together with the assumption that you can't control the other person's choice, implies that, when trying to decide what choice to make, you should compare outcomes that are identical except for your choice, because your choice is all that you control.

The two prisoners face the same situation, so it is certainly true that if they both follow the same reasoning process they will both make the same choice. In particular, if they are both rational (whatever that means), they will both make the same choice. However, that doesn't imply that, when you're trying to decide what choice to make, it is correct to say, "If I remain silent, so will the other prisoner." At that point in time, while you're still deciding, you don't yet know what the rational choice is -- that's what you're trying to figure out. If the rational choice is to confess, which is still a possibility as far as you know, then by remaining silent you are not being rational; the original assumption that both prisoners are rational no longer holds, and so there's no reason to suppose that they will both make the same choice.
Another example Hofstadter gives makes the point perhaps a little more starkly -- he never claims you have any control over the others' choices. Consider a different scenario: a mad billionaire has offered his estate to twenty people (including you) who are known to be leading thinkers. To claim it, all you have to do is send in a postcard with your name. However, if more than one postcard is received, there is no winner at all. (It is assumed for this scenario that there is no way to communicate with the other participants.) He concludes that the best chance for a player to win is to roll a die, with the odds set to maximize the chance that exactly one player's roll will come up favorably if everyone rolls with the same idea (he actually calculates what the die-odds should be).
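
For the curious, here's a rough reconstruction of that die-odds calculation (my numbers, not Hofstadter's actual worked example): if all 20 players independently send a postcard with probability p, the chance of exactly one entry is 20·p·(1−p)^19, which turns out to be maximized at p = 1/20.

```python
N = 20

def prob_exactly_one(p):
    """Chance that exactly one of the N players sends a card, if each one
    independently sends with probability p."""
    return N * p * (1 - p) ** (N - 1)

# Crude grid search for the best p; it comes out at 1/N = 0.05.
best_p = max((k / 1000 for k in range(1, 1000)), key=prob_exactly_one)
print(best_p)                              # 0.05
print(prob_exactly_one(best_p))            # ~0.377: chance that there IS a winner
print(best_p * (1 - best_p) ** (N - 1))    # ~0.019: chance that YOU are the winner
```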

It's true that if your die does not come up favorably, you could send in a postcard anyway -- but by accepting that as a possibility, all you're doing is eliminating the chance that ANYONE will win the prize. The only way for you to win is for there to BE a winner, and the only way for there to BE a winner with any reasonable probability, is to play that dice game faithfully, and hope that everyone else does too.
 
Is this the approach taken by game theory in general, or just in the specific case of the Prisoners' Dilemma?

This is the underlying basis of the domination theorems. Any dominated strategy should not be played, regardless of the opposition.

In the specific case of the Prisoner's Dilemma, the domination theorem still applies. Even if you could pick both strategies (say you could somehow mentally control the other prisoner -- straight out of the four-color comics) and "force" him to remain silent, it would still be in your best interest to rat him out, because then you get a walk.

In general, of course, knowing your opponent's strategy is an advantage -- if you know your opponent at rock paper scissors to be paper-phobic, then you can adjust your strategy to never play scissors.
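
As a small illustration (the opponent's distribution here is invented), you can just compute the expected payoff of each of your moves against the known distribution and drop the ones that can only hurt you:

```python
# Invented distribution for a "paper-phobic" opponent.
opponent = {"rock": 0.5, "paper": 0.0, "scissors": 0.5}

BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def expected_payoff(my_move):
    total = 0.0
    for their_move, prob in opponent.items():
        if BEATS[my_move] == their_move:
            total += prob            # win
        elif BEATS[their_move] == my_move:
            total -= prob            # loss
    return total                     # ties contribute nothing

for move in BEATS:
    print(move, expected_payoff(move))
# rock: +0.5, paper: 0.0, scissors: -0.5 -- scissors can't win against
# someone who never plays paper, so you'd simply never throw it.
```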

In the case of the Prisoners' Dilemma, I agree that ultimately it doesn't matter whether the other player is assumed to be rational or not, although the case is a bit tricky and I do see Hofstadter's point. But in some cases, it does matter whether I assume the other player to be rational or whether I don't make that assumption. For example, consider AmateurScientist's scenario, in which the best outcome for both players results if they both remain silent. So, remaining silent would seem to be the rational choice, but only if each player can assume that the other player will make the same rational choice. If, on the other hand, the other player will confess, or, even, if there's a reasonable chance that he will confess, then it is no longer a good idea for me to remain silent.

What is the "official" opinion of game theory in this situation? When choosing a rational strategy for myself may I assume that the other player is likewise rational, or am I supposed to pick a strategy that will work as well as possible regardless of what the other player does?

You are supposed to pick a strategy that will maximize your payoff. The usual formulation is the so-called "minimax" strategy -- the strategy that leaves you in the best possible position even if the other player does his best. (Technically, to maximize your minimum gain, or equivalently, to minimize your maximum loss.) If you have information that the opponent is using a probabilistic strategy and you know his probability distribution, then you can sometimes improve upon the minimax payoff -- but it's rare that you're playing against idiots.
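
For pure strategies the idea is easy to spell out. Here's a minimal sketch with an arbitrary payoff matrix (the full minimax theorem also allows mixed strategies, which this toy example ignores):

```python
# Arbitrary payoff matrix: rows are your pure strategies, columns are the
# opponent's. Positive numbers are gains to you.
payoffs = [
    [3, -1,  2],
    [1,  0,  1],
    [4, -3,  0],
]

# For each of your strategies, assume the opponent replies as badly for you
# as possible, then pick the strategy whose worst case is best.
worst_cases = [min(row) for row in payoffs]
maximin_value = max(worst_cases)
best_row = worst_cases.index(maximin_value)
print(best_row, maximin_value)   # row 1 guarantees you at least 0
```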

This situation is one where it does make sense to follow gnome's idea of remaining silent if you believe that the other player is rational enough also to remain silent, but confessing otherwise.

Why? If you know that your opponent is rational enough also to remain silent, then rat him out and walk free!
 
The only way for you to win is for there to BE a winner, and the only way for there to BE a winner with any reasonable probability, is to play that dice game faithfully, and hope that everyone else does too.
I would just hope that everyone else plays faithfully, and forget about the "me playing faithfully" part.

My goal is not to maximize the probability of there being some winner. My goal is to maximize the probability of me being the winner. A lot of that is not up to me; it's up to all the other players. Obviously, I can't do anything about that. But I should do what I can do. If I don't send in a postcard, I definitely won't win. If I do send in a postcard, I might win. So I don't see the argument for not sending in a postcard, if my goal is to win the money for myself rather than just to win it for somebody.

Can you remind me where Hofstadter discusses this stuff? I know I've seen it before, but I don't remember where.
 
You are supposed to pick a strategy that will maximize your payoff. The usual formulation is the so-called "minimax" strategy -- the strategy that leaves you in the best possible position even if the other player does his best. (Technically, to maximize your minimum gain, or equivalently, to minimize your maximum loss.)

[...]

Why? If you know that your opponent is rational enough also to remain silent, then rat him out and walk free!
You say, "even if the other player does his best", implictly assuming that what's better for the other player is worse for me. But AmateurScientist's scenario is not the standard Prisoners' Dilemma. In his scenario, the best possible outcome results if both prisoners remain silent. If the other prisoner remains silent, it is better for me to remain silent than to confess. But if the other prisoner confesses, it's better for me to confess too. And it's better for both of us if we both remain silent than if we both confess.

If I have reason to believe that the other player will confess, then I should confess. But why would the other player confess, unless he thinks that I might? And why would I, unless I think that he might? etc.
 
You say, "even if the other player does his best", implictly assuming that what's better for the other player is worse for me. But AmateurScientist's scenario is not the standard Prisoners' Dilemma. In his scenario, the best possible outcome results if both prisoners remain silent. If the other prisoner remains silent, it is better for me to remain silent than to confess. But if the other prisoner confesses, it's better for me to confess too. And it's better for both of us if we both remain silent than if we both confess.

If I have reason to believe that the other player will confess, then I should confess. But why would the other player confess, unless he thinks that I might? And why would I, unless I think that he might? etc.

That is reminiscent of yet another dilemma that becomes really bad when you add people to the mix. The scenario is called "Wolf's Dilemma," and the best possible outcome is for everyone to "Cooperate" (a general class of answers equivalent to remaining silent in the Prisoner's Dilemma scenario) rather than "Defect" (the class of answers equivalent to confessing). For example, say 20 people sit in cubicles, each with a buzzer and a button. When the buzzer sounds, if everyone presses the button, all receive $1000. If you don't press the button, you get only $100. But if even one person chooses not to press the button, everyone who did gets nothing.
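
A rough expected-value sketch (treating the other nineteen as independent, which is an assumption made purely for the arithmetic): pressing is only worth it if you're very confident in everyone else.

```python
def ev_press(p, others=19):
    # $1000 only if every one of the 19 others also presses; p is your
    # (assumed independent) estimate that any one of them presses.
    return 1000 * p ** others

def ev_dont_press():
    return 100   # the $100 is yours no matter what anyone else does

for p in (0.80, 0.886, 0.95, 1.00):
    print(p, round(ev_press(p), 1), ev_dont_press())
# The break-even point is p ~ 0.886 (where 1000 * p**19 = 100); below that,
# the sure $100 has the higher expected value.
```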

Does AmateurScientist's scenario count as a two-person Wolf's Dilemma?
 
I would just hope that everyone else plays faithfully, and forget about the "me playing faithfully" part.

My goal is not to maximize the probability of there being some winner. My goal is to maximize the probability of me being the winner. A lot of that is not up to me; it's up to all the other players. Obviously, I can't do anything about that. But I should do what I can do. If I don't send in a postcard, I definitely won't win. If I do send in a postcard, I might win. So I don't see the argument for not sending in a postcard, if my goal is to win the money for myself rather than just to win it for somebody.

Can you remind me where Hofstadter discusses this stuff? I know I've seen it before, but I don't remember where.
But if you apply a reasoning method that automatically sends in a postcard, presumably everyone else applying the same reasoning will also send in a postcard, and your chances of winning are ZERO. You can do better than zero if you pick a strategy that is more likely to succeed if duplicated. To be honest? There are competing businesses that play it my way all the time -- a phenomenon called "tacit collusion".
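
In other words, compare your own chance of winning when all 20 players use the same rule (a back-of-the-envelope calculation, not Hofstadter's):

```python
N = 20

def p_you_win(p):
    # You win only if you send a card and all N - 1 others stay out.
    return p * (1 - p) ** (N - 1)

print(p_you_win(1.0))      # 0.0    -- everyone sends a card, nobody wins
print(p_you_win(1 / N))    # ~0.019 -- small, but strictly better than zero
```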

It's all in "Metamagical Themas", a compilation of his columns in Scientific American -- full of fascinating material on other topics as well. One of my favorite books.
 
But if you apply a reasoning method that automatically sends in a postcard, presumably everyone else applying the same reasoning will also send in a postcard, and your chances of winning are ZERO. You can do better than zero if you pick a strategy that is more likely to succeed if duplicated. To be honest? There are competing businesses that play it my way all the time -- a phenomenon called "tacit collusion".
But your choice of what reasoning method to use is completely separate from theirs.
By choosing to send in a postcard you aren't making it any more likely that they will. Say you try rolling the die and it comes up that you don't send in a postcard. At this point, what should you do? (Remember, we're talking about your own self-interest.)
You can't change the past, and you can't change what the others will do. Do you send in a postcard?

Your reasoning is assuming that you can count on others to do what you do, but you can't. It's like assuming that because you've decided to go along they all will, but the moment you decide to cheat them, everyone else will automatically decide to cheat as well. I can't see any justification for that assumption.

Now, if your responses were dependent upon a computer program, and you knew that all the other players' responses would also be dependent upon that computer program, then when writing the program you should try to choose what is best for you if everyone else does it. But if you have the ability to change your program, you can't assume that doing so will change everyone else's, and even less that not doing so will stop everyone else from changing theirs.
 
Couldn't you calculate the probability of the other guy confessing -- based on how well you know him -- and then factor that into the scenarios to get expected values?

The expected values would then tell you whether confessing or remaining silent has the better long-run return.
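
Something like this, perhaps (using the same illustrative prison terms as earlier, and an estimated probability q that the other prisoner confesses):

```python
YEARS = {                        # same illustrative prison terms as before
    ("confess", "confess"): 5,
    ("confess", "silent"):  0,
    ("silent",  "confess"): 10,
    ("silent",  "silent"):  1,
}

def expected_years(my_move, q):
    """Expected prison term if the other prisoner confesses with probability q."""
    return q * YEARS[(my_move, "confess")] + (1 - q) * YEARS[(my_move, "silent")]

for q in (0.1, 0.5, 0.9):
    print(q, expected_years("confess", q), expected_years("silent", q))
# With these payoffs confessing has the lower expected term for EVERY q
# (that's the domination point again); in AmateurScientist's variant the
# comparison really would depend on q.
```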
 
Your reasoning is assuming that you can count on others to do what you do, but you can't. It's like assuming that because you've decided to go along they all will, but the moment you decide to cheat them, everyone else will automatically decide to cheat as well. I can't see any justification for that assumption.

Here's where that breaks down (in my opinion)... take 20 guys like me and put them in that situation, one of us might win something... take 20 guys who disagree with me, and none of them will get anything... there's gotta be something to that.
 
Here's where that breaks down (in my opinion)... take 20 guys like me and put them in that situation, one of us might win something... take 20 guys who disagree with me, and none of them will get anything... there's gotta be something to that.
Sure, but that only suggests that you should prefer to play with others who you know will cooperate, not that you should cooperate yourself.
Of course having a history of cooperating with others is likely something that will allow you to play with those who are also likely to cooperate. But that only applies to multiple games. One-off games are counter-intuitive because, well, they don't happen very often.
 
I'd just like to add one point to this discussion. The cold facts and outcomes of game theory don't necessarily apply to everyday life. At least not in a simple way.
The reason is that humans don't necessarily have a simple payoff matrix. I think I'd rather someone win than no one, even if it very slightly reduces my own chances.
I'd rather keep my word to a friend than get a better outcome by cheating them, even if there were no way they'd find out about it.
But that's because I attach a value to things like integrity or honesty or whatever. If we added those things to the payoff matrix, it would look different.
That's because human desires and emotions are complex: I might attach less value to the $100 on offer than I do to being honest. But if we could put the value I attach into the payoff matrix, we'd see that my honesty (or whatever) was the rational choice given what I value.
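
As a toy illustration of that last point (the "guilt cost" number below is completely made up): subtract a cost for confessing from the standard payoffs, and the matrix stops being a Prisoner's Dilemma at all.

```python
BASE = {                         # standard illustrative payoffs (utilities)
    ("confess", "confess"): -5,
    ("confess", "silent"):   0,
    ("silent",  "confess"): -10,
    ("silent",  "silent"):  -1,
}

def with_guilt_cost(cost):
    """Subtract a made-up 'guilt cost' from every outcome where I confess."""
    return {(me, other): u - (cost if me == "confess" else 0)
            for (me, other), u in BASE.items()}

for cost in (0, 6):
    m = with_guilt_cost(cost)
    replies = {other: max(("confess", "silent"), key=lambda me: m[(me, other)])
               for other in ("confess", "silent")}
    print(cost, replies)
# With no guilt cost, "confess" is the better reply either way; with a cost
# of 6, silence wins either way and the game is no longer a dilemma at all.
```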
 
