## Thursday, September 13, 2018

### The double dice problem

Suppose I have a box that contains one each of 4-sided, 6-sided, 8-sided, and 12-sided dice. I choose a die at random, and roll it twice without letting you see the die or the outcome. I report that I got the same outcome on both rolls.

1) What is the posterior probability that I rolled each of the dice?
2) If I roll the same die again, what is the probability that I get the same outcome a third time?

You can see the complete solution in this Jupyter notebook, or read the HTML version here.

### Solution

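The solution uses a `BayesTable`, which is defined in the notebook. Since the definition doesn't appear here, the following is a minimal sketch of what it might look like, assuming it's a thin wrapper around a pandas `DataFrame` with one row per hypothesis; the real class may differ in detail.
```
from fractions import Fraction
import pandas as pd

class BayesTable(pd.DataFrame):
    # Minimal sketch of a Bayes table: one row per hypothesis.

    def __init__(self, hypo, prior=1):
        columns = ['hypo', 'prior', 'likelihood', 'unnorm', 'posterior']
        super().__init__(index=range(len(hypo)), columns=columns)
        self.hypo = hypo    # setting an attribute named for a column fills the column
        self.prior = prior

    def update(self):
        # Multiply priors by likelihoods, normalize,
        # and return the normalizing constant.
        self.unnorm = self.prior * self.likelihood
        nc = self.unnorm.sum()
        self.posterior = self.unnorm / nc
        return nc

    def reset(self):
        # Start a new table whose priors are the current posteriors.
        return BayesTable(self.hypo, self.posterior)
```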
Here's a `BayesTable` that represents the four hypothetical dice.
In:
```
hypo = [Fraction(sides) for sides in [4, 6, 8, 12]]
table = BayesTable(hypo)
table
```
Out:
```
  hypo prior likelihood unnorm posterior
0    4     1        NaN    NaN       NaN
1    6     1        NaN    NaN       NaN
2    8     1        NaN    NaN       NaN
3   12     1        NaN    NaN       NaN
```

Since we didn't specify prior probabilities, the default is equal priors for all hypotheses. They don't have to be normalized, because we normalize the posteriors anyway.
Now we can specify the likelihoods: if a die has `n` sides, the chance of getting the same outcome twice is `1/n`, because whatever the first roll is, the second roll matches it with probability `1/n`.
So the likelihoods are:
In:
```
table.likelihood = 1/table.hypo
table
```
Out:
```
  hypo prior likelihood unnorm posterior
0    4     1        1/4    NaN       NaN
1    6     1        1/6    NaN       NaN
2    8     1        1/8    NaN       NaN
3   12     1       1/12    NaN       NaN
```
Now we can use `update` to compute the posterior probabilities:
In:
```
table.update()
table
```
Out:
```
  hypo prior likelihood unnorm posterior
0    4     1        1/4    1/4       2/5
1    6     1        1/6    1/6      4/15
2    8     1        1/8    1/8       1/5
3   12     1       1/12   1/12      2/15
```
In:
```
table.posterior.astype(float)
```
Out:
```
0    0.400000
1    0.266667
2    0.200000
3    0.133333
Name: posterior, dtype: float64
```
The 4-sided die is most likely because you are more likely to get doubles on a 4-sided die than on a 6-, 8-, or 12-sided die.
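As a sanity check, here's the same update done with plain `Fraction` arithmetic, independent of `BayesTable` (the variable names below are new, not from the notebook):
```
from fractions import Fraction

# Likelihood of getting doubles on an n-sided die is 1/n.
likelihood = {n: Fraction(1, n) for n in [4, 6, 8, 12]}
total = sum(likelihood.values())                 # normalizing constant, 5/8
posterior = {n: like / total for n, like in likelihood.items()}
print(posterior)
# {4: Fraction(2, 5), 6: Fraction(4, 15), 8: Fraction(1, 5), 12: Fraction(2, 15)}
```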

### Part two

The second part of the problem asks for the (posterior predictive) probability of getting the same outcome a third time, if we roll the same die again.
If the die has `n` sides, the probability of getting the same value again is `1/n`, which should look familiar.
To get the total probability of getting the same outcome, we have to add up, over the hypotheses `n`, the products:
`P(n | data) * P(same outcome | n)`
The first term is the posterior probability; the second term is `1/n`.
In:
```
# Add up posterior * likelihood over the four hypotheses.
total = 0
for _, row in table.iterrows():
    total += row.posterior / row.hypo

total
```
Out:
`Fraction(13, 72)`
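That is, `2/5 * 1/4 + 4/15 * 1/6 + 1/5 * 1/8 + 2/15 * 1/12`, which is `36/360 + 16/360 + 9/360 + 4/360 = 65/360 = 13/72`.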
This calculation is similar to the first step of the update, so we can also compute it by:
1) Creating a new table with the posteriors from `table`.
2) Adding the likelihood of getting the same outcome a third time.
3) Computing the normalizing constant.
In:
```
table2 = table.reset()
table2.likelihood = 1/table2.hypo
table2
```
Out:
```
  hypo prior likelihood unnorm posterior
0    4   2/5        1/4    NaN       NaN
1    6  4/15        1/6    NaN       NaN
2    8   1/5        1/8    NaN       NaN
3   12  2/15       1/12    NaN       NaN
```
In:
```
table2.update()
```
Out:
`Fraction(13, 72)`
In:
```
table2
```
Out:
```
  hypo prior likelihood unnorm posterior
0    4   2/5        1/4   1/10     36/65
1    6  4/15        1/6   2/45     16/65
2    8   1/5        1/8   1/40      9/65
3   12  2/15       1/12   1/90      4/65
```
This result is the same as the posterior after seeing the same outcome three times.
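We can check that claim with a single update: starting from uniform priors, the probability that the second and third rolls both match the first is `(1/n)**2`. A quick sketch (`table3` is a new name, not from the notebook):
```
# One update with likelihood (1/n)**2: the chance that two more
# rolls both match the first.
table3 = BayesTable(hypo)
table3.likelihood = (1 / table3.hypo) ** 2
table3.update()
table3.posterior   # 36/65, 16/65, 9/65, 4/65 -- same as table2
```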
This example demonstrates a general truth: to compute the predictive probability of an event, you can pretend you saw the event, do a Bayesian update, and record the normalizing constant.
(With one caveat: this only works if your priors are normalized.)
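To see why the caveat matters: the priors in the first table summed to 4, so the constant returned by the first `update` was not the predictive probability of the data. With normalized priors, it is; in this case it's the probability of getting doubles in the first place. A sketch (`table4` is a new name):
```
# With normalized priors, update() returns the predictive
# probability of the data.
table4 = BayesTable(hypo)
table4.prior = Fraction(1, 4)        # four dice, equally likely
table4.likelihood = 1/table4.hypo    # probability of doubles on each die
table4.update()                      # returns Fraction(5, 32)
```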