The RAND Corporation released their seminal 1954 book, The Compleat Strategyst: Being a Primer on the Theory of Games of Strategy, as a free PDF. Yes, it's math, but it seems pretty approachable as game theory texts go.

I own a physical copy of that book! I haven’t read it in years.

That is going on my kindle TODAY.

This is basically the only place where I can guarantee that a topic with this title is going to be something good. I have read an Oxford University-published quick guide to game theory, which basically gave me the background I needed to understand what I didn't already know from my degree.

It would be good to read an actual book from the field instead of a layman's introduction to it.

For anyone interested in a more up-to-date book, Ken Binmore’s “Playing for Real” is quite good; it has a decent amount of the serious math but avoids being overly dry by having a focus on *lots* of examples.

Holy shit Ironfist is real.

Would there be interest in breaking down modern board games into their game theory components? I know in some of the PAX talks there have been general statements about sub-games contained within them, or how, despite their other mechanisms, they boil down to a certain game theory game.

But, a detailed breakdown of the full mechanics of a game and how those pieces fit together.

I would expect a thread for each game in which we discuss the levels within a game.

Any takers?

I’ve been ruminating on a panel like this for a while. I just never put it together.

Deconstructing Tabletop. We’d break a bunch of big-name games down to their core component mechanics.

I would certainly be interested in hearing your thoughts on various games, as well as how you actually do the "boils down" simplifications, like how you suggested Risk is just a "vote who wins" game.

While I agree with the conclusion, how do you see past the minutiae of the mechanics and decide that's what is going on?

Feels related, but maybe it should be a separate topic: what do we do with game theory conclusions that fail in the real world? Like how sometimes irrational strategies overcome rational ones.

One large aspect that complicates careful analysis of games is that many "game theory" games use simultaneous selection. Colonel Blotto is very different if you have to place one unit at a time versus pre-deciding all your placements and then revealing them at once.
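To make the simultaneous-selection point concrete, here's a minimal sketch of a one-shot Colonel Blotto round (the parameters are made up for illustration: 3 battlefields, 5 units per player). With full allocations committed before anything is revealed, there is no safe pure strategy:

```python
# Minimal simultaneous Colonel Blotto sketch (assumed parameters:
# 3 battlefields, 5 units per player). Both players commit a full
# allocation up front; whoever wins more battlefields wins the round.

UNITS, FIELDS = 5, 3

def allocations(units, fields):
    """Every way to split `units` across `fields` battlefields."""
    if fields == 1:
        return [(units,)]
    return [(k, *rest)
            for k in range(units + 1)
            for rest in allocations(units - k, fields - 1)]

def fields_won(a, b):
    """Battlefields won by allocation `a` against `b`, ties counting half."""
    return sum(1.0 if x > y else 0.5 if x == y else 0.0
               for x, y in zip(a, b))

allocs = allocations(UNITS, FIELDS)

# Under simultaneous reveal, every pure allocation is strictly beaten
# by some counter-allocation (abandon its strongest field, outnumber
# it on the other two), so only mixed strategies can be stable.
for a in allocs:
    assert any(fields_won(b, a) > FIELDS / 2 for b in allocs), a

print(f"{len(allocs)} allocations, each beatable by some counter-allocation")
```

Placing units one at a time with observation instead turns this into a sequential game solvable by backward induction, which is why the two framings can have very different solutions.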

I thought it was the Monty Hall problem because they did not properly explain the rules.

A good brief, but it’s a strategy that will only work against mathematicians.

I recognize the guy who originally presented this problem: Maurice Ashley. He's kinda awesome. He's done some good work trying to encourage more black folks to play chess, and has some writing on whether or not he's the first black chess grandmaster and why that ultimately doesn't matter.

But, I really like when he did this:

I expected someone to challenge my assertion so now I’m just gonna post my TED talk and get it out of my head.

The conclusion reached by the video is essentially "players will only perform actions that give them the highest chance of winning; thus, never perform an action your opponent wants you to perform in a zero-sum game."

Completely correct, in raw mathematical game theory.

It would be more accurate to state "players will only perform actions *they perceive* to give them the highest chance of winning…" We will omit cheating and griefing from the list of potential actions for simplicity's sake. In a game space where a player's perception of the optimal move can be wrong, the possibility space increases significantly. Thankfully, the defined space in this game is 1-6.

As a side note, the game in the example is never fully defined, I assume to make the conclusion stronger. We are provided an example, "you have cards 1-6, you each draw one card, your opponent offers to trade, do you accept?", which is then doubled back on by declaring an undeclared "rule" that your opponent can decide whether or not to offer a trade.

From this example I am *assuming* the rules of the game are thus: There are cards 1-6. Each player is secretly assigned one card. After receiving cards, the players may trade cards if they both agree. Once players have traded or agreed not to trade, the cards are revealed and whoever has the higher card wins.

In this possibility space there are multiple scenarios in which both players would want to trade and still agree to trade after knowing their opponent wants to trade.

First, like in the video, we will assume a player will never trade a six. You have a 100% chance of winning if you hold a six.

We will not assume the same recursively for every other card. If you hold a 5, you have a 20% chance to receive a higher card by trading. Set the exact odds aside for now; we will get back to them. Because there is a chance that trading will improve your chance of winning, we will assume a theoretical player may trade a 5 *a nonzero number of times*.

Second, in this game, trading your card effectively inverts your chance to win or lose. If you have a 2, you have a 20% chance to win (against a 1) and an 80% chance to lose (against a 3, 4, 5, or 6). This is equivalent to your chance of receiving a better card when trading. Broken down we have:

Chance to win:

- (1) card: 0%
- (2) card: 20%
- (3) card: 40%
- (4) card: 60%
- (5) card: 80%

Chance to win when trading:

- (1) card: 100%
- (2) card: 80%
- (3) card: 60%
- (4) card: 40%
- (5) card: 20%
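For anyone who wants to check the two tables, here's a quick sketch. It bakes in the post's simplifying assumption that the opponent's card is uniform over the five you don't hold (ignoring the "nobody trades a six" objection raised later in the thread):

```python
# Cards 1-6; each player secretly holds one distinct card; higher card wins.
CARDS = range(1, 7)

def win_chance(card):
    """Chance the opponent's card (uniform over the other five) is lower."""
    others = [c for c in CARDS if c != card]
    return sum(c < card for c in others) / len(others)

for card in range(1, 6):
    keep = win_chance(card)
    trade = 1 - keep  # swapping cards inverts your winning chances
    print(f"({card}) keep: {keep:.0%}   trade: {trade:.0%}")
```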

It is thus logical to assume, *even if we assume both players play optimally*, that both players will agree to a trade when both hold some combination of 1, 2, or 3, because they *both* have a higher chance to win by trading, *assuming a nonzero number of players will trade a 4 or 5*.

Remember the initial conclusion: players will only perform actions they *perceive* grant them the highest chance of winning. We can assume there are a very small number of opponents who will trade when they have a 4 and/or 5, because there is still a chance that trading improves their chance of winning (they are either ignorant of their chance of losing or don't care). Our strategy becomes *more effective*, even if only infinitesimally.

If we were to create a simulation with a distribution of strategies ("only trade when you have a 1", "only trade when you have a 1 or 2", etc., and "never trade"), you would see the win percentage of the "never trade" strategy at its highest when the concentration of "never trade" strategies is highest, then dropping as it becomes less popular. In a simulation space where the only strategy is "never trade", every player will approach a 50% win ratio.
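Here's one way such a simulation could be set up. It's a purely mechanical sketch: strategy k means "agree to a trade only when holding a card ≤ k", with k = 0 as "never trade"; the population mix, trial count, and seed are made up, and it deliberately ignores the persuasion layer discussed below, so treat the numbers as a starting point:

```python
import random

def play(k_a, k_b, rng):
    """One round between two threshold strategies; returns True if A wins."""
    a, b = rng.sample(range(1, 7), 2)   # distinct secret cards
    if a <= k_a and b <= k_b:           # a trade happens only if both agree
        a, b = b, a
    return a > b

def win_rates(population, trials=20_000, seed=1):
    """Monte Carlo over random pairings drawn from the population."""
    rng = random.Random(seed)
    wins = {k: 0 for k in population}
    games = {k: 0 for k in population}
    for _ in range(trials):
        k_a, k_b = rng.sample(population, 2)
        wins[k_a if play(k_a, k_b, rng) else k_b] += 1
        games[k_a] += 1
        games[k_b] += 1
    return {k: wins[k] / games[k] for k in sorted(wins)}

# Mostly "never trade" (k = 0) with a few lower-threshold traders mixed in:
print(win_rates([0] * 8 + [1, 2, 3]))
```

One caveat: with purely mechanical thresholds, "never trade" sits at exactly 50% in expectation (no trade ever involves it), so any spread between strategies here comes from traders exploiting each other; reproducing the perception effects the post describes would need opponents who can be talked into bad trades.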

Like Diplomacy, the primary (really the only) strategy in this game is *how well you can convince your opponent to do something*. If your strategy is “never do anything a player suggests” your win % will never exceed that of players who do not follow this strategy.

TL;DR: Radmad's post. Game theory assumes all players are perfectly rational actors, which in real life they are not.

Even with this looser-than-game-theory model, you do not have a 20% chance to receive a higher card if you trade a five, because you asserted that all players will never trade a six.

Yup. The way I see it is:

Six: never trade, or you instantly lose. And never offer to trade.

Five: never trade, because if anyone trades with you they don’t have a six. Never offer to trade.

Four: never trade, because if anyone trades with you they don’t have either a six or a five. Never offer to trade.

Three: If you offer to trade, you're giving away the fact that you have a three or lower. If someone accepts your offer, you know they have a two or a one.

Two: Goes for the same as above, but they will only accept your offer if they have a one.

One: Always offer to trade.

I’ll confess I haven’t watched the video, but I’m pretty sure this would be the standard game theory way to analyze it.
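The card-by-card elimination above is the classic "unraveling" argument, and it can be sketched in a few lines: assume every card might be willing to trade, then repeatedly drop the highest card still in the pool, since it can only ever trade down against the rest.

```python
# Which cards would ever rationally agree to a trade?
willing = set(range(1, 7))
while len(willing) > 1:
    # The highest willing card can only meet lower willing cards,
    # so trading is a guaranteed downgrade for it: it drops out.
    willing.discard(max(willing))
print(willing)  # -> {1}: only the lowest card ever wants to trade
```

Which matches the breakdown above: a trade offer signals a low card, and accepting one signals an even lower card, until only the 1 is left wanting to trade.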

I’m actually rather curious now, how someone would analyze this game when it’s presented to them for the first time, if no rules were omitted like they were in the video.

If I were to present this game to a seasoned player of games, and when it comes time to explain trading I say, "at this phase both players have the option to offer a trade, and the other player can accept or decline", and then proceed to lay out the scenario with the 2, would they, or perhaps to say what I really mean, would I, come to the same conclusion as the video?

True, although it just changes the math to 25% per card rank.