Wednesday, March 21, 2012

The sun will probably come out tomorrow

I am always looking for good examples of Bayesian analysis, so I was interested in this paragraph from The Economist (September 2000):
"The canonical example is to imagine that a precocious newborn observes his first sunset, and wonders whether the sun will rise again or not. He assigns equal prior probabilities to both possible outcomes, and represents this by placing one white and one black marble into a bag. The following day, when the sun rises, the child places another white marble in the bag. The probability that a marble plucked randomly from the bag will be white (ie, the child’s degree of belief in future sunrises) has thus gone from a half to two-thirds. After sunrise the next day, the child adds another white marble, and the probability (and thus the degree of belief) goes from two-thirds to three-quarters. And so on. Gradually, the initial belief that the sun is just as likely as not to rise each morning is modified to become a near-certainty that the sun will always rise."
This example made me wonder about two things:

1) Although they call it a "canonical example", I had not heard it before, so I wondered where it came from, and

2) Although the example demonstrates the general idea of updating beliefs in light of new evidence, it is not obvious that the hypothetical newborn is actually doing a Bayesian update.

At the risk of spoiling the fun, here is what I found:

1) The example is from Richard Price's commentary on the original essay, discovered after Bayes's death, that presented what came to be called Bayes's Theorem.  So I am embarrassed that I didn't know it.

2) The analysis is correct for a particular prior distribution and for a particular interpretation of the posterior, but presented in a way that obscures these details.

A false start

In fact, it is not easy to formulate this example in a Bayesian framework.  One option (which fails) is to compute posterior probabilities for two hypotheses: either A (the sun will rise tomorrow) or B (the sun will not rise tomorrow).

We are given the priors: P(A) = P(B) = 1/2.

But when the sun rises in the morning, how do we compute the posteriors?  We need the likelihoods; that is, the probability of the evidence (sunrise) under the two hypotheses.  It is hard to make sense of these likelihoods, so we conclude that this formulation of the problem isn't working.

The beta distribution

A more fruitful (and more complicated) alternative is to imagine that the sun will rise with some probability, p, and try to estimate p.

If we know nothing about p, we could choose a prior distribution that is uniform between 0 and 1.  This is a reasonable choice, but it is not the only choice.  For example, we could instead make the odds (rather than the probabilities) uniform over a range of values, which has the intuitive appeal of giving more prior weight to values near 0 and 1.

But starting with uniform probabilities is equivalent to starting with a Beta distribution with parameters α = β = 1, which has a mean value of α / (α + β), or 1/2.  And the Beta distribution has the nice property of being a conjugate prior, which means that after an update the posterior is also a Beta distribution.

We can show (but I won't) that observing a success has the effect of increasing α by 1, and observing a failure increases β by 1.  And that's where the white and black marbles come in.  The author of the example is using the marbles as a sneaky way to talk about a Beta distribution.
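Translating the marble scheme into Python makes the correspondence explicit.  Here is a minimal sketch of the conjugate update (the class name and methods are my own, not from any library): each white marble bumps α, each black marble bumps β, and the fraction of white marbles is the posterior mean.

```python
class Beta:
    """Minimal sketch of a Beta distribution for conjugate updating.

    Starting from alpha = beta = 1 (the uniform prior), each observed
    success adds 1 to alpha and each failure adds 1 to beta.
    """
    def __init__(self, alpha=1, beta=1):
        self.alpha = alpha
        self.beta = beta

    def update(self, successes=0, failures=0):
        self.alpha += successes
        self.beta += failures

    def mean(self):
        return self.alpha / (self.alpha + self.beta)

posterior = Beta()               # uniform prior: mean 1/2
posterior.update(successes=1)    # one sunrise: mean becomes 2/3
print(posterior.mean())
```

The marbles in the bag are exactly the counts α (white) and β (black), and drawing a white marble at random has probability α / (α + β), the posterior mean.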

After one sunrise, the newborn's posterior belief about p is a Beta distribution with α=2, β=1, which looks like this:
This represents the newborn's posterior belief about p, the probability of sunrise.  At the far left, the hypothesis that p=0 has been refuted.  At the far right, the most likely value (after one success) is p=1.  A 90% credible interval is [0.225, 0.975], which indicates that the newborn is still very unsure about p.  But if we insist on a single-value estimate, he or she might reasonably compute the mean of the posterior, which is 2/3.
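The credible interval is easy to check by hand, because the posterior Beta(2, 1) has pdf 2p on [0, 1], so its CDF is F(p) = p², and percentiles are just square roots.  A quick sketch:

```python
import math

# For the posterior Beta(2, 1), the pdf is 2p on [0, 1],
# so the CDF is F(p) = p**2 and the inverse CDF is sqrt(q).
def ppf(q):
    return math.sqrt(q)

# 90% credible interval: 5th and 95th percentiles
low, high = ppf(0.05), ppf(0.95)
print(round(low, 3), round(high, 3))   # 0.224 0.975

# posterior mean: alpha / (alpha + beta) = 2/3
```

The interval is wide, which quantifies how unsure the newborn still is after a single observation.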

Similarly, after two successful sunrises, the posterior looks like this:
And the mean value of p is 3/4.  So the marble method is correct after all!  Well, sort of.  It is correct if we assume a uniform prior for p, and if we use the mean of the posterior to generate a single-point estimate.  For both decisions there are reasonable alternatives; for example, after any number of successes (without a failure) the maximum likelihood estimate of p is 1.
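To see how the two estimators diverge, here is a sketch comparing the posterior mean with the maximum likelihood estimate after an unbroken run of successes (the function names are mine):

```python
# After k successes in n trials with a uniform prior, the posterior
# is Beta(k+1, n-k+1), so the posterior mean of p is (k+1)/(n+2).
# The maximum likelihood estimate is k/n, which is 1 after any
# unbroken run of successes.
def posterior_mean(k, n):
    return (k + 1) / (n + 2)

def mle(k, n):
    return k / n

for n in range(1, 4):
    print(n, posterior_mean(n, n), mle(n, n))
# the posterior mean climbs 2/3, 3/4, 4/5, ...
# while the MLE is 1 every time
```

So the marble method corresponds to the posterior mean (this is Laplace's rule of succession), while the MLE jumps straight to certainty.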

In summary, this example demonstrates the general idea of a Bayesian update, but I think the way the calculation is presented is misleading.

Price's version

As I said, the example is from Price's comments on Bayes's article.  Here is Price's version, from the page numbered 409 [in the original typeface, the letter "s" looks like "f", but I will fpare you]:
"One example here it will not be amiss to give. 
"Let us imagine to ourselves the case of a person just brought forth into this world and left to collect from his observation of the order and course of events what powers and causes take place in it.  The Sun would, probably, be the first object that would engage his attention; but after losing it the first night, he would be entirely ignorant whether he should ever see it again.  He would therefore be in the condition of a person making a first experiment about an event entirely unknown to him.  But let him see a second appearance or one return of the Sun, and an expectation would be raised in him of a second return, and he might know that there was an odds of 3 to 1 for some probability of this.  This odds would increase, as before represented, with the number of returns to which he was witness.  But no finite number of returns would be sufficient to produce absolute or physical certainty.  For let it be supposed that he has seen it return at regular and stated intervals a million of times.  The conclusions this would warrant would be such as follow --- There would be the odds of the millioneth power of 2, to one, that it was likely that it would return again at the end of the usual interval."
To interpret "an odds of 3 to 1 for some probability of this" we have to look back to page 405, which computes the odds "for somewhat more than an even chance that it would happen on a second trial;" that is, the odds that p > 0.5.  In the example, the odds are 3:1, which corresponds to a probability of 0.75.

If we use the Beta distribution to compute the same probability, the result is also 0.75.  So Price's calculations are consistent with mine; he just uses a different way of summarizing them.
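Price's odds follow directly from the same posterior.  After n sunrises the posterior is Beta(n+1, 1), whose CDF is p^(n+1), so the probability of "somewhat more than an even chance" is easy to compute.  A sketch (function name is mine):

```python
# Under the posterior Beta(n+1, 1) after n sunrises, the CDF is
# F(p) = p**(n+1), so P(p > 1/2) = 1 - 0.5**(n+1).
def prob_better_than_even(n):
    return 1 - 0.5 ** (n + 1)

p = prob_better_than_even(1)   # after one return of the Sun
print(p, p / (1 - p))          # probability 0.75, odds of 3 to 1
```

The same formula gives odds of 2^(n+1) − 1 to 1 in general, which for a million returns matches Price's "odds of the millioneth power of 2, to one" (up to his rounding).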

Wednesday, March 14, 2012

Bayesian statistics made simple

At PyCon last week I taught a tutorial on Bayesian statistics.  It is based on Chapters 5 and 8 of Think Stats.  Here is the web page I created for the tutorial.

And here, courtesy of PyCon, is the video.  It's three hours long, so get comfortable!

Here's a screencap of me magically deriving Bayes's Theorem:

And here's the description and outline, from the PyCon page:


This tutorial is an introduction to Bayesian statistics using Python. My goal is to help participants understand the concepts and solve real problems. We will use material from my book, Think Stats: Probability and Statistics for Programmers (O’Reilly Media).


Bayesian statistical methods are becoming more common and more important, but there are not many resources to help beginners get started. People who know Python can use their programming skills to get a head start.
I will present simple programs that demonstrate the concepts of Bayesian statistics, and apply them to a range of example problems. Participants will work hands-on with example code and practice on example problems.
Students should have at least basic-level Python and basic statistics. If you learned about Bayes’s Theorem and probability distributions at some time, that’s enough, even if you don’t remember it! Students should be comfortable with logarithms and plotting data on a log scale.
Students should bring a laptop with Python 2.x and matplotlib. You can work in any environment; you just need to be able to download a Python program and run it.
  1. Bayes’s theorem.
  2. Representing probability distributions.
  3. Bayesian estimation.
  4. Biased coins and student test scores.
  5. Censored data.
  6. The locomotive / German tank problem.
  7. Hierarchical models and the unseen species problem.
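To give a flavor of the approach, here is a minimal Bayesian update using a plain dict as a pmf, in the spirit of the tutorial's examples (this sketch is illustrative; it is not taken from the tutorial code, and the coin scenario is hypothetical):

```python
def update(pmf, likelihoods):
    """Multiply each hypothesis by its likelihood, then renormalize."""
    for hypo in pmf:
        pmf[hypo] *= likelihoods[hypo]
    total = sum(pmf.values())
    for hypo in pmf:
        pmf[hypo] /= total
    return pmf

# Two coins: fair (heads with p=0.5) and biased (p=0.8),
# with equal priors.  We observe one head.
pmf = {'fair': 0.5, 'biased': 0.5}
update(pmf, {'fair': 0.5, 'biased': 0.8})
print(pmf)   # fair ~0.385, biased ~0.615
```

That loop-and-renormalize pattern is the whole of Bayes's Theorem in code, and everything in the outline above is a variation on it.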