Thursday, September 13, 2018

Tom Bayes and the case of the double dice


The double dice problem

Suppose I have a box that contains one each of 4-sided, 6-sided, 8-sided, and 12-sided dice. I choose a die at random, and roll it twice without letting you see the die or the outcome. I report that I got the same outcome on both rolls.

1) What is the posterior probability that I rolled each of the dice?
2) If I roll the same die again, what is the probability that I get the same outcome a third time?

You can see the complete solution in this Jupyter notebook, or read the HTML version here.

Solution

Here's a BayesTable that represents the four hypothetical dice.
In [3]:
from fractions import Fraction

hypo = [Fraction(sides) for sides in [4, 6, 8, 12]]
table = BayesTable(hypo)
table
Out[3]:
  hypo prior likelihood unnorm posterior
0    4     1        NaN    NaN       NaN
1    6     1        NaN    NaN       NaN
2    8     1        NaN    NaN       NaN
3   12     1        NaN    NaN       NaN

Since we didn't specify prior probabilities, the default is equal priors for all hypotheses. They don't have to be normalized, because we have to normalize the posteriors anyway.
Now we can specify the likelihoods: if a die has n sides, the chance of getting the same outcome twice is 1/n.
So the likelihoods are:
In [4]:
table.likelihood = 1/table.hypo
table
Out[4]:
  hypo prior likelihood unnorm posterior
0    4     1        1/4    NaN       NaN
1    6     1        1/6    NaN       NaN
2    8     1        1/8    NaN       NaN
3   12     1       1/12    NaN       NaN
Now we can use update to compute the posterior probabilities:
In [5]:
table.update()
table
Out[5]:
  hypo prior likelihood unnorm posterior
0    4     1        1/4    1/4       2/5
1    6     1        1/6    1/6      4/15
2    8     1        1/8    1/8       1/5
3   12     1       1/12   1/12      2/15
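If you are wondering what update does: BayesTable is a thin wrapper around a pandas DataFrame, and update multiplies the priors by the likelihoods, normalizes, and returns the normalizing constant. Here's a rough sketch of that calculation with plain pandas (just the mechanics, not the actual BayesTable implementation):

from fractions import Fraction
import pandas as pd

def update(df):
    # multiply priors by likelihoods, normalize,
    # and return the normalizing constant
    df['unnorm'] = df['prior'] * df['likelihood']
    total = df['unnorm'].sum()
    df['posterior'] = df['unnorm'] / total
    return total

# uniform priors; Fractions keep the arithmetic exact
df = pd.DataFrame(dict(hypo=[Fraction(n) for n in [4, 6, 8, 12]], prior=1))
df['likelihood'] = 1 / df['hypo']   # chance of doubles on an n-sided die
update(df)
print(df)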
In [6]:
table.posterior.astype(float)
Out[6]:
0    0.400000
1    0.266667
2    0.200000
3    0.133333
Name: posterior, dtype: float64
The 4-sided die is most likely because you are more likely to get doubles on a 4-sided die than on a 6-, 8-, or 12-sided die.
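If you want to check that numerically, here's a quick simulation (not part of the original solution): choose a die at random, roll it twice, keep only the trials where the two rolls match, and tally which die we ended up with.

import random
from collections import Counter

def simulate_posterior(iters=100_000):
    # count which die we're holding, given that the first two rolls matched
    counts = Counter()
    for _ in range(iters):
        die = random.choice([4, 6, 8, 12])
        if random.randint(1, die) == random.randint(1, die):
            counts[die] += 1
    total = sum(counts.values())
    return {die: counts[die] / total for die in sorted(counts)}

print(simulate_posterior())
# roughly {4: 0.40, 6: 0.27, 8: 0.20, 12: 0.13}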

Part two

The second part of the problem asks for the (posterior predictive) probability of getting the same outcome a third time, if we roll the same die again.
If the die has n sides, the probability of getting the same value again is 1/n, which should look familiar.
To get the total probability of getting the same outcome, we have to add up, over the possible values of n, the products:
P(n | data) * P(same outcome | n)
The first term is the posterior probability; the second term is 1/n.
In [7]:
total = 0
for _, row in table.iterrows():
    total += row.posterior / row.hypo
    
total
Out[7]:
Fraction(13, 72)
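Writing out the sum explicitly: 2/5 * 1/4 + 4/15 * 1/6 + 1/5 * 1/8 + 2/15 * 1/12 = 1/10 + 2/45 + 1/40 + 1/90 = 65/360 = 13/72, which is about 0.18.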
This calculation is similar to the first step of the update, so we can also compute it by:
1) Creating a new table with the posteriors from table.
2) Setting the likelihood of getting the same outcome a third time.
3) Computing the normalizing constant.
In [8]:
table2 = table.reset()
table2.likelihood = 1/table.hypo
table2
Out[8]:
  hypo prior likelihood unnorm posterior
0    4   2/5        1/4    NaN       NaN
1    6  4/15        1/6    NaN       NaN
2    8   1/5        1/8    NaN       NaN
3   12  2/15       1/12    NaN       NaN
In [9]:
table2.update()
Out[9]:
Fraction(13, 72)
In [10]:
table2
Out[10]:
  hypo prior likelihood unnorm posterior
0    4   2/5        1/4   1/10     36/65
1    6  4/15        1/6   2/45     16/65
2    8   1/5        1/8   1/40      9/65
3   12  2/15       1/12   1/90      4/65
This result is the same as the posterior after seeing the same outcome three times.
This example demonstrates a general truth: to compute the predictive probability of an event, you can pretend you saw the event, do a Bayesian update, and record the normalizing constant.
(With one caveat: this only works if your priors are normalized.)
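As one more sanity check (again, a simulation, not part of the solution above), we can estimate the predictive probability directly: among trials where the first two rolls match, the fraction where the third roll also matches should be close to 13/72.

import random

def simulate_third_match(iters=200_000):
    # estimate P(third roll matches | first two rolls matched)
    doubles = triples = 0
    for _ in range(iters):
        die = random.choice([4, 6, 8, 12])
        first, second, third = (random.randint(1, die) for _ in range(3))
        if first == second:
            doubles += 1
            if third == first:
                triples += 1
    return triples / doubles

print(simulate_third_match())   # should be close to 13/72, about 0.18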



