The double dice problem
Suppose I have a box that contains one each of 4-sided, 6-sided, 8-sided, and 12-sided dice. I choose a die at random, and roll it twice without letting you see the die or the outcome. I report that I got the same outcome on both rolls.

1) What is the posterior probability that I rolled each of the dice?
2) If I roll the same die again, what is the probability that I get the same outcome a third time?
You can see the complete solution in this Jupyter notebook, or read the HTML version here.
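The solution uses a small BayesTable class, defined in the notebook. In case you are reading along without it, here is a minimal sketch of the interface this post relies on: a DataFrame with one row per hypothesis and columns hypo, prior, likelihood, unnorm, and posterior, an update method that returns the normalizing constant, and a reset method that starts a new table from the posteriors. This is my reconstruction of the interface, not necessarily the class as written in the notebook.

from fractions import Fraction
import pandas as pd

class BayesTable(pd.DataFrame):
    # Sketch of the interface used below: one row per hypothesis.
    def __init__(self, hypo, prior=1, **options):
        columns = ['hypo', 'prior', 'likelihood', 'unnorm', 'posterior']
        super().__init__(columns=columns, **options)
        self.hypo = hypo
        self.prior = prior

    def update(self):
        # multiply each prior by its likelihood, normalize,
        # and return the normalizing constant
        self.unnorm = self.prior * self.likelihood
        nc = self.unnorm.sum()
        self.posterior = self.unnorm / nc
        return nc

    def reset(self):
        # start a new table with the posteriors as the new priors
        return BayesTable(self.hypo, self.posterior)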
Here's a BayesTable that represents the four hypothetical dice.
In [3]:
hypo = [Fraction(sides) for sides in [4, 6, 8, 12]]
table = BayesTable(hypo)
Out[3]:
  hypo prior likelihood unnorm posterior
0    4     1        NaN    NaN       NaN
1    6     1        NaN    NaN       NaN
2    8     1        NaN    NaN       NaN
3   12     1        NaN    NaN       NaN
Since we didn't specify prior probabilities, the default value is equal priors for all hypotheses. They don't have to be normalized, because we have to normalize the posteriors anyway.
Now we can specify the likelihoods: if a die has n sides, the chance of getting the same outcome twice is 1/n. So the likelihoods are:
In [4]:
table.likelihood = 1/table.hypo
table
Out[4]:
  hypo prior likelihood unnorm posterior
0    4     1        1/4    NaN       NaN
1    6     1        1/6    NaN       NaN
2    8     1        1/8    NaN       NaN
3   12     1       1/12    NaN       NaN
Now we can use update to compute the posterior probabilities:
In [5]:
table.update()
table
Out[5]:
  hypo prior likelihood unnorm posterior
0    4     1        1/4    1/4       2/5
1    6     1        1/6    1/6      4/15
2    8     1        1/8    1/8       1/5
3   12     1       1/12   1/12      2/15
In [6]:
table.posterior.astype(float)
Out[6]:
0    0.400000
1    0.266667
2    0.200000
3    0.133333
Name: posterior, dtype: float64
The 4-sided die is most likely because you are more likely to get doubles on a 4-sided die than on a 6-, 8-, or 12-sided die.
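As a sanity check, here's the same computation done by hand with Fractions, outside the table:

from fractions import Fraction

likelihoods = [Fraction(1, n) for n in [4, 6, 8, 12]]
total = sum(likelihoods)    # 5/8, the normalizing constant
posteriors = [like / total for like in likelihoods]
posteriors  # [Fraction(2, 5), Fraction(4, 15), Fraction(1, 5), Fraction(2, 15)]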
Part two
The second part of the problem asks for the (posterior predictive) probability of getting the same outcome a third time, if we roll the same die again. If the die has n sides, the probability of getting the same value again is 1/n, which should look familiar. To get the total probability of getting the same outcome, we have to add up the conditional probabilities, summing over the dice:

P(same outcome | data) = Σ_n P(n | data) * P(same outcome | n)

The first term in the sum is the posterior probability of a die with n sides; the second term is 1/n.
In [7]:
total = 0
for _, row in table.iterrows():
    # add P(n | data) * P(same outcome | n) for each die
    total += row.posterior / row.hypo
total
Out[7]:
Fraction(13, 72)
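Since the columns behave like pandas Series (as the astype(float) call above suggests), the same sum can be written without an explicit loop:

(table.posterior / table.hypo).sum()   # Fraction(13, 72), about 0.18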
This calculation is similar to the first step of the update, so we can also compute it by:

1) Creating a new table with the posteriors from table.
2) Adding the likelihood of getting the same outcome a third time.
3) Computing the normalizing constant.
In [8]:
table2 = table.reset()
table2.likelihood = 1/table.hypo
table2
Out[8]:
  hypo prior likelihood unnorm posterior
0    4   2/5        1/4    NaN       NaN
1    6  4/15        1/6    NaN       NaN
2    8   1/5        1/8    NaN       NaN
3   12  2/15       1/12    NaN       NaN
In [9]:
table2.update()
Out[9]:
Fraction(13, 72)
In [10]:
table2
Out[10]:
  hypo prior likelihood unnorm posterior
0    4   2/5        1/4   1/10     36/65
1    6  4/15        1/6   2/45     16/65
2    8   1/5        1/8   1/40      9/65
3   12  2/15       1/12   1/90      4/65
This result is the same as the posterior after seeing the same outcome three times.
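To check that, we can start a fresh table with equal priors and update it once with the probability of getting the same outcome on all three rolls, which is (1/n)^2 for a die with n sides. A sketch, using the same interface (the name table3 is mine, not from the post):

table3 = BayesTable(hypo)
table3.likelihood = (1/table3.hypo)**2   # same outcome three times in a row
table3.update()
table3.posterior   # 36/65, 16/65, 9/65, 4/65 -- the same as table2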
This example demonstrates a general truth: to compute the predictive probability of an event, you can pretend you saw the event, do a Bayesian update, and record the normalizing constant.
(With one caveat: this only works if your priors are normalized.)
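To see why the caveat matters, compare normalized and unnormalized priors. With priors of 1/4 each, the normalizing constant of the first update is the predictive probability of getting doubles; with the default priors of 1, it comes out larger by the total prior mass. Again a sketch; table4 and table5 are my names:

table4 = BayesTable(hypo, prior=Fraction(1, 4))  # normalized priors
table4.likelihood = 1/table4.hypo
table4.update()   # Fraction(5, 32): the predictive probability of doubles

table5 = BayesTable(hypo)                        # priors of 1; total mass 4
table5.likelihood = 1/table5.hypo
table5.update()   # Fraction(5, 8) = 4 * 5/32, not a probability

Either way the posteriors come out the same, because the scale of the priors divides out when we normalize.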