This article is an excerpt from *Think Bayes*, a book I am working on. The entire current draft is available from http://thinkbayes.com. I welcome comments and suggestions.

In the previous article, I presented *The Price is Right* problem and a Bayesian approach to estimating the value of a showcase of prizes. This article picks up from there...

**Optimal bidding**

Now that we have a posterior distribution, we can use it to compute the optimal bid, which I define as the bid that maximizes expected gain.

To compute optimal bids, I wrote a class called `GainCalculator`:

```
class GainCalculator(object):

    def __init__(self, player, opponent):
        self.player = player
        self.opponent = opponent
```

`player` and `opponent` are `Player` objects.

`GainCalculator` provides `ExpectedGains`, which computes a sequence of bids and the expected gain for each bid:

```
def ExpectedGains(self, low=0, high=75000, n=101):
    bids = numpy.linspace(low, high, n)
    gains = [self.ExpectedGain(bid) for bid in bids]
    return bids, gains
```

`low` and `high` specify the range of possible bids; `n` is the number of bids to try. Here is the function that computes expected gain for a given bid:

```
def ExpectedGain(self, bid):
    suite = self.player.posterior
    total = 0
    for price, prob in suite.Items():
        gain = self.Gain(bid, price)
        total += prob * gain
    return total
```

`ExpectedGain` loops through the values in the posterior and computes the gain for each bid, given the actual prices of the showcase. It weights each gain with the corresponding probability and returns the total.

`Gain` takes a bid and an actual price and returns the expected gain:

```
def Gain(self, bid, price):
    if bid > price:
        return 0

    diff = price - bid
    prob = self.ProbWin(diff)

    if diff <= 250:
        return 2 * price * prob
    else:
        return price * prob
```

If you overbid, you get nothing. Otherwise we compute the difference between your bid and the price, which determines your probability of winning.

If `diff` is less than $250, you win both showcases. For simplicity, I assume that both showcases have the same price. Since this outcome is rare, it doesn't make much difference.
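To see the shape of this payoff rule, here is a minimal standalone sketch. The constant `prob_win` is a made-up placeholder; in the real code the win probability comes from the opponent's error distribution:

```
# Standalone sketch of the gain rule with a stubbed win probability.
# prob_win is a placeholder; the real code computes it from the
# opponent's distribution of errors.
def gain(bid, price, prob_win=0.5):
    if bid > price:
        return 0                         # overbid: you win nothing
    diff = price - bid
    if diff <= 250:
        return 2 * price * prob_win      # within $250: win both showcases
    return price * prob_win

# The payoff jumps at the price, and again $250 below it:
print(gain(30001, 30000))   # 0        (overbid by $1)
print(gain(29800, 30000))   # 30000.0  (within $250, both showcases)
print(gain(29700, 30000))   # 15000.0  (underbid by more than $250)
```

The two discontinuities are what make the gain function hard to handle analytically.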

Finally, we have to compute the probability of winning based on `diff`:

```
def ProbWin(self, diff):
    prob = (self.opponent.ProbOverbid() +
            self.opponent.ProbWorseThan(diff))
    return prob
```

If your opponent overbids, you win. Otherwise, you have to hope
that your opponent is off by more than `diff`.

`Player` provides methods to compute both probabilities:

```
# class Player:

def ProbOverbid(self):
    return self.cdf_diff.Prob(-1)

def ProbWorseThan(self, diff):
    return 1 - self.cdf_diff.Prob(diff)
```

This code might be confusing because the computation is now from
the point of view of the opponent, who is computing, “What is
the probability that I overbid?” and “What is the probability
that my bid is off by more than `diff`?”

Both answers are based on the CDF of `diff` [CDFs are described here]. If your opponent's `diff` is less than or equal to -1, you win. If your opponent's `diff` is worse than yours, you win. Otherwise you lose.
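As a rough sketch of how these probabilities come out of a CDF, here is a tiny empirical CDF standing in for the book's `Cdf` class; the sample `diffs` are made-up numbers, not data from the show:

```
import bisect

# Minimal empirical CDF, standing in for the book's Cdf class.
class EmpiricalCdf:
    def __init__(self, samples):
        self.xs = sorted(samples)

    def Prob(self, x):
        # fraction of samples less than or equal to x
        return bisect.bisect_right(self.xs, x) / len(self.xs)

# Hypothetical historical diffs (price - bid) for an opponent;
# negative values mean the opponent overbid.
diffs = [-2000, -500, 1000, 3000, 5000, 8000, 12000, 20000]
cdf_diff = EmpiricalCdf(diffs)

prob_overbid = cdf_diff.Prob(-1)       # P(opponent's diff <= -1)
prob_worse = 1 - cdf_diff.Prob(4000)   # P(opponent off by more than 4000)
print(prob_overbid)   # 0.25
print(prob_worse)     # 0.5
```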

Finally, here’s the code that computes optimal bids:

```
# class Player:

def OptimalBid(self, guess, opponent):
    self.MakeBeliefs(guess)
    calc = GainCalculator(self, opponent)
    bids, gains = calc.ExpectedGains()
    gain, bid = max(zip(gains, bids))
    return bid, gain
```

Given a guess and an opponent, `OptimalBid` computes the posterior distribution, instantiates a `GainCalculator`, computes expected gains for a range of bids, and returns the optimal bid and expected gain. Whew!

Figure 6.4 shows the results for both players, based on a scenario where Player 1’s best guess is $20,000 and Player 2’s best guess is $40,000.

For Player 1 the optimal bid is $21,000, yielding an expected return of almost $16,700. This is a case (which turns out to be unusual) where the optimal bid is actually higher than the contestant’s best guess.

For Player 2 the optimal bid is $31,500, yielding an expected return of almost $19,400. This is the more typical case where the optimal bid is less than the best guess.

**Discussion**

One of the most useful features of Bayesian estimation is that the result comes in the form of a posterior distribution. Classical estimation usually generates a single point estimate or a confidence interval, which is sufficient if estimation is the last step in the process, but if you want to use an estimate as an input to a subsequent analysis, point estimates and intervals are often not much help.

In this example, the Bayesian analysis yields a posterior distribution we can use to compute an optimal bid. The gain function is asymmetric and discontinuous (if you overbid, you lose), so it would be hard to solve this problem analytically. But it is relatively simple to do computationally.
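To make "simple to do computationally" concrete, here is a self-contained toy version of the whole pipeline: a discrete posterior, the gain rule, and a grid search over bids. All the numbers (the three-point posterior and the stubbed win probabilities) are invented for illustration; they are not the figures from the article's scenario:

```
# Toy illustration: optimal bid by grid search over a discrete posterior.
# The posterior and win probabilities are made-up numbers.
posterior = {18000: 0.2, 20000: 0.5, 24000: 0.3}  # price -> probability

def prob_win(diff):
    # Stub: chance of beating the opponent, given your underbid amount.
    return 0.6 if diff <= 3000 else 0.3

def expected_gain(bid):
    total = 0
    for price, prob in posterior.items():
        if bid > price:
            gain = 0                    # overbid: gain nothing
        else:
            diff = price - bid
            p = prob_win(diff)
            gain = 2 * price * p if diff <= 250 else price * p
        total += prob * gain
    return total

# Grid search over candidate bids; no calculus required.
bids = range(0, 30001, 500)
best_bid = max(bids, key=expected_gain)
```

Because the gain function is evaluated pointwise, the discontinuities cause no trouble at all; the search just tries every candidate bid and keeps the best one.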

Newcomers to Bayesian thinking are often tempted to summarize the posterior distribution by computing the mean or the maximum likelihood estimate. These summaries can be useful, but if that’s all you need, then you probably don’t need Bayesian methods in the first place.

Bayesian methods are most useful when you can carry the posterior distribution into the next step of the process to perform some kind of optimization, as we did in this chapter, or some kind of prediction, as we will see in the next chapter [which you can read here].
