I have an interesting conceptual problem, and I'm wondering if anyone can help me quantify it. Basically, I'm playing a set of games... and for each game I know the probability that I will win, the probability that I will tie, and the probability that I will lose (each game will have different probabilities).

At a high level, what I want to know is: which games should I focus my attention on? For example, I'm not going to put any effort into games that I have a 0% chance of winning (or games that I have a 100% chance of winning). But for a 50/50 game, I will care a lot and want to put in the most effort. If ties were not involved, it would be as simple as: "care-ability" = how close is my chance of winning to 50%? But with ties, it complicates things.

I'm not sure it's strictly necessary, but if you need to, you can assume that a loss is worth 0 points, a tie 1 point, and a win 2 points. In other words, it would be just as valuable to go from a loss to a tie as it would to go from a tie to a win.

You can also assume that all games are independent. Basically, I'm just looking for a quantitative metric for "care-ability" (a value from 0 to 1 for example).

Anybody have any ideas for how to approach something like this? If you're an economics person, you can imagine I have a finite number of dollars I can spend on improving my chances of winning games. How would you allocate those dollars across the games in order to maximize your expected outcomes?

Thanks in advance!

EDIT: Sorry, I've since realized that this was a fairly poorly phrased question. I don't specify the relationship between additional investment and produced outcome. I wanted to assume it was a linear relationship, but in that case, it doesn't matter which game you invest in, since it will always increase your expected value the same way. My actual problem is a little more complicated, and I need to rethink it a bit. Thanks to everyone who helped and gave great ideas!

A: 

But for a 50/50 game, I will care a lot and want to put in the most effort. If ties were not involved, it would be as simple as: "care-ability" = how close is my chance of winning to 50%? But with ties, it complicates things.

i don't think so. if you're looking for a game with a 50/50 win chance, isn't it just a matter of calculating "how close is my chance of winning, plus half of my chance to tie, to 50%"? or have i misunderstood your question?

EDIT:

the formula would look like this:

x = 1 - abs(0.5-abs(win% + tie%/2));
                 ^ the inner 'abs' here may be useless, but i'm not sure ;)
oezi
Can you explicitly write out the formula you're talking about? I think this might be on the right track...
Kenny
for example, without ties, my formula would be: "care-ability" = 1 - abs(win% - 0.5) / 2
Kenny
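The two formulas in this thread can be sketched in Python (a sketch with a hypothetical helper name; note that oezi's expression only ranges over [0.5, 1], so a rescaled variant mapping to [0, 1] is shown alongside it):

```python
def care_ability(p_win, p_tie):
    """oezi's suggestion: fold half the tie probability into the win
    probability, then measure how close that is to a 50/50 game."""
    effective = p_win + p_tie / 2.0       # the inner abs is redundant: this is >= 0
    return 1.0 - abs(0.5 - effective)     # ranges over [0.5, 1]

def care_ability_scaled(p_win, p_tie):
    """Same idea, rescaled so a sure loss or sure win scores 0
    and an effectively 50/50 game scores 1."""
    return 1.0 - 2.0 * abs(0.5 - (p_win + p_tie / 2.0))
```

With either version, a sure win (1.0, 0.0) and a sure loss (0.0, 0.0) both score the minimum, while a game like (0.3, 0.4) is effectively 50/50 and scores the maximum.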
A: 

If the games are as simple as you state (though you imply otherwise), then the value to you of each game is your expected winnings. For a game in which the prize is 2 for a win, 1 for a tie and 0 for a loss, and the probabilities are 0.4, 0.3 and 0.3, your expected winnings are (0.4x2)+(0.3x1) = 1.1. Multiply the probability of each outcome by the value of that outcome and add up the results. (Of course I omitted the (0.3x0) term that you get from losing.)

If the games are as simple as you state then you play the ones you expect to win most from.
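That rule is a one-liner in code (a sketch; the prize values follow the 2/1/0 scheme from the question):

```python
def expected_winnings(p_win, p_tie, p_lose, prizes=(2.0, 1.0, 0.0)):
    # multiply each outcome's probability by its prize and sum
    return p_win * prizes[0] + p_tie * prizes[1] + p_lose * prizes[2]

ev = expected_winnings(0.4, 0.3, 0.3)   # 0.4*2 + 0.3*1 + 0.3*0 = 1.1
```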

High Performance Mark
this will give his 100%-winning-games the highest value - which he doesn't want if i understand him right...
oezi
right... you wouldn't want to spend money/effort/whatever on a game you were guaranteed to win.
Kenny
@oezi -- you're absolutely right if all the games have the same prizes, but a 0.5 chance of winning 100 is worth more than a 1.0 chance of winning 1. But I don't think that the question is well framed -- to maximise your winnings you just play (repeatedly) the (unique) game which has the highest expected value.
High Performance Mark
@High Performance Mark: Sorry, to clarify: you will play each game once, and only once
Kenny
A: 

The most obvious thing I can think of that you need to consider is how much resource (effort, dollars, whatever) is needed to change the probabilities.

Using dollars as an easy example: if you have a game that you currently have a 0% chance to win, but $1 will give you a 50% chance, then that is a better option than if that same $1 turns a 50% chance into a 99% chance.

In broad terms I think you need to assign a value to the win/tie/loss of each game (as you have already mentioned). Then you can work out your current total expected value (e.g. a 50% win, 25% draw and 25% loss gives 0.5*2 + 0.25*1 + 0.25*0 = 1.25 expected points). The aim would then be to use all your resource to improve the total expected value as much as possible.

This last step is wholly dependent on your resource-to-success function. Analysing that function might make for an easy solution.

Some example effort formulae:

1) Linear - one unit of resource will increase your probability to win and draw by X.

This means it doesn't matter where you put the effort, as long as you haven't already eliminated the chance of losing. Put the effort into any game that you might lose.

2) Inverse - the lower your chance to win/draw the higher your benefit

If one unit of effort produces an "X / win chance" increase to your win chance, then you are clearly going to get the most benefit from boosting your worst games.

3) Midpoint tendencies - the closer you are to an even win/lose split, the more you benefit

This simulates the fact that games where you are very likely to win or very likely to lose are the least likely to be improvable (if somebody is that much better than you, the effort probably isn't important). In this scenario you would concentrate on the games with as near to an equal win/lose chance as possible, to get the maximum increase.

I hope this makes sense. :)
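One way to experiment with models like these is a greedy simulation (a sketch with a made-up effort response, not anything specified in the question): repeatedly spend one unit of resource on whichever game currently offers the largest marginal gain under the chosen model.

```python
def greedy_allocate(games, budget, marginal_gain, step=0.05):
    """games: list of (p_win, p_tie).  marginal_gain(p_win, p_tie) scores the
    benefit of one unit of effort under the chosen model.  Each unit spent
    moves `step` of probability from losing to winning (an assumed response)."""
    spend = [0] * len(games)
    for _ in range(budget):
        # pick the game with the best current marginal gain
        best = max(range(len(games)), key=lambda i: marginal_gain(*games[i]))
        spend[best] += 1
        w, t = games[best]
        games[best] = (min(1.0, w + step), t)
    return spend

# midpoint-tendency model: benefit peaks for games nearest 50/50
midpoint = lambda w, t: min(w + t / 2, 1 - (w + t / 2))
plan = greedy_allocate([(0.9, 0.0), (0.5, 0.0), (0.1, 0.0)], budget=3,
                       marginal_gain=midpoint)
```

Under the midpoint model all three units go to the 50/50 game; swapping in an inverse-style `marginal_gain` would instead steer the budget toward the worst games.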

Chris
A: 

You can formulate this as a constrained optimization problem.

I'm going to ignore draws for now...

So what you need to do is first let a_i be the amount you spend on game i.

The chance of winning game i is presumably a function of a_i; call it p_i(a_i).

Your expected payout for game i is 2 * p_i(a_i)

So your total expected payout is P = 2* Sum( p_i(a_i) )

You have some constraint on the amount you spend... sum(a_i) = A

Your aim is to maximise P subject to the constraint.

Using the Lagrange method this gives you N+1 equations to solve simultaneously, for the unknowns a_i and lambda.

N equations like this:

 2 p_i'(a_i) = lambda  

And the one constraint equation

 sum(a_i) = total

How you solve these is going to depend on the structure of your p_i functions. Depending on that structure you may need to introduce the additional constraint that each a_i > 0. I'd try to structure my p_i's to avoid that, as it makes solving the equations much harder.

If you wanted to introduce the chance of a draw, you'd split p_i(a_i) into w_i(a_i) and d_i(a_i) and change your payout per game to 2 * w_i(a_i) + 1 * d_i(a_i), though this doesn't change any of the core maths.
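As a concrete example with an assumed response function (not from the question): take p_i(a) = 1 - (1 - p_i(0)) * exp(-a), a concave curve with diminishing returns. Then p_i'(a) = k_i * exp(-a) with k_i = 1 - p_i(0), the stationarity condition 2 * p_i'(a_i) = lambda gives a_i = max(0, ln(2 * k_i / lambda)), and lambda can be found by bisection so the budget constraint holds:

```python
import math

def allocate(p0, total, iters=100):
    """Split a budget across games with initial win probabilities p0
    (each < 1), assuming p_i(a) = 1 - (1 - p0_i) * exp(-a).
    Stationarity 2*p_i'(a_i) = lambda gives a_i = max(0, ln(2*k_i/lambda));
    bisect on lambda until sum(a_i) = total."""
    k = [1.0 - p for p in p0]
    def spend(lam):
        return [max(0.0, math.log(2 * ki / lam)) for ki in k]
    lo, hi = 1e-12, 2 * max(k)       # spend is huge near lo, zero at hi
    for _ in range(iters):
        mid = (lo + hi) / 2
        if sum(spend(mid)) > total:  # spending too much -> raise lambda
            lo = mid
        else:
            hi = mid
    return spend((lo + hi) / 2)

a = allocate([0.2, 0.5, 0.95], total=3.0)
```

Note how the near-certain win (p0 = 0.95) ends up with no budget at all: its marginal payoff never reaches lambda, which matches the intuition in the comments above that sure wins aren't worth effort.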

Michael Anderson