His first principle of modeling hints at some Bayesian ideas:
Principle 1: A good model should be probabilistic, not deterministic.

"The FiveThirtyEight model produces probabilistic forecasts as opposed to hard-and-fast predictions. [...] the FiveThirtyEight model might estimate the Democrat has a 20 percent chance of winning the Senate race in Kentucky. My view is that this is often the most important part of modeling — and often the hardest part."

Silver suggests that it is more useful to report the probability of winning than the margin of victory. Bayesian models are generally good at this kind of thing; classical statistical methods are not. But Silver never makes the connection between this principle and Bayesian methods; in fact, the article doesn't mention Bayes at all.
And that's fine; it was not central to the point of his article. But since I am teaching my Bayesian statistics class this semester, I will take this opportunity to fill in some details. I don't know anything about Silver's model other than what's in his article, but I think it is a good guess that there is something in there similar to what follows.
But before I get into it, here's an outline:
- I present an example problem and formulate a solution using a Bayesian framework.
- I develop Python code to compute a solution; if you don't speak Python, you can skip this part.
- I show results for an update with a single poll result.
- I show how to combine results from a second poll.
My example supports Silver's argument that it is more useful to predict the probability of winning than the margin of victory: after the second update, the predicted margin of victory decreases, but the probability of winning increases. In this case, predicting only the margin of victory would misrepresent the effect of the second poll.
Formulating the problem
Here's the exercise I presented to my class:
Exercise 1: The polling company Strategic Vision reports that among likely voters, 53% intend to vote for your favorite candidate and 47% intend to vote for the opponent (let's ignore undecided voters for now). Suppose that, based on past performance, you estimate that the distribution of error for this company has mean 1.1 percentage points (in favor of your candidate) and standard deviation 3.7 percentage points. What is the probability that the actual fraction of likely voters who favor your candidate is less than 50%?

Strategic Vision is an actual polling company, but other than that, everything about this example is made up. Also, the standard deviation of the error is probably lower than what you would see in practice.
To solve this problem, we can treat the polling company like a measurement instrument with known error characteristics. If we knew the actual fraction of the electorate planning to vote for your candidate, which I'll call A for "actual", and we knew the distribution of the error, ε, we could compute the distribution of the measured value, M:
M = A + ε
But in this case we want to solve the inverse problem: given a measurement M and the distribution of ε, compute the posterior distribution of A. As always with this kind of problem, we need three things:
1) A prior distribution for A,
2) Data that allow us to improve the estimate of A, and
3) A likelihood function that computes the probability of the data for hypothetical values of A.
Once you have these three things, the Bayesian framework does the rest.
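In symbols, the update for each hypothetical value of A is just Bayes's theorem: the posterior probability is proportional to the prior times the likelihood of the data,

P(A | M) ∝ P(A) P(M | A)

and the framework normalizes the results so they add up to 1.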
Implementing the solution
To demonstrate, I use the Suite class from thinkbayes2, which is a Python module that goes with my book, Think Bayes. The Suite class is documented here, but I will explain what you need to know below. You can download the code from this file in this GitHub repository.

I'll start by defining a new class called Electorate that inherits methods from thinkbayes2.Suite:
class Electorate(thinkbayes2.Suite):
"""Represents hypotheses about the state of the electorate."""
As a starting place, I'll create a uniform prior distribution. In practice this would not be a good choice, for reasons I'll explain soon, but it will allow me to make a point.
hypos = range(0, 101)
suite = Electorate(hypos)
hypos is a sequence of integers from 0 to 100, representing the percentage of the electorate planning to vote for your candidate.
I'll represent the data with a tuple of three values: the estimated bias of the polling company in percentage points, the standard deviation of their previous errors (also in percentage points), and the result of the poll:
data = 1.1, 3.7, 53
suite.Update(data)
When we call Update, it loops through the hypotheses and computes the likelihood of the data under each hypothesis; we have to provide a Likelihood function that does this calculation:
class Electorate(thinkbayes2.Suite):
    """Represents hypotheses about the electorate."""

    def Likelihood(self, data, hypo):
        """Likelihood of the data under the hypothesis.

        hypo: fraction of the population
        data: poll results
        """
        bias, std, result = data
        error = result - hypo
        like = thinkbayes2.EvalNormalPdf(error, bias, std)
        return like
Likelihood unpacks the given data into bias, std, and result. Given a hypothetical value for A, it computes the hypothetical error. For example, if hypo is 50 and result is 53, that means the poll is off by 3 percentage points. The resulting likelihood is the probability that we would be off by that much, given the bias and standard deviation of the poll.
We estimate this probability by evaluating the normal/Gaussian distribution with the given parameters. I am assuming that the distribution of errors is approximately normal, which is probably not a bad assumption when the probabilities are near 50%.
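For the example above, the computation looks like this; the value is the density of a 3-point error under a normal distribution with mean 1.1 and standard deviation 3.7:

error = 53 - 50                                     # 3 percentage points
like = thinkbayes2.EvalNormalPdf(error, 1.1, 3.7)   # about 0.094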
One technical detail: The result of EvalNormalPdf is actually a probability density, not a probability. But the result from Likelihood doesn't actually have to be a probability; it only has to be proportional to a probability, so a probability density will do the job nicely.
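If you don't have thinkbayes2 installed, here is a minimal sketch of the same update using only NumPy and SciPy; it does the same loop-multiply-normalize computation that Suite.Update performs:

import numpy as np
from scipy.stats import norm

hypos = np.arange(101)                # hypothetical values of A, in percent
prior = np.ones(101) / 101            # uniform prior

bias, std, result = 1.1, 3.7, 53
like = norm.pdf(result - hypos, bias, std)   # likelihood of each hypothesis

posterior = prior * like
posterior /= posterior.sum()          # normalize

mean = np.sum(hypos * posterior)      # about 51.9
p_lose = posterior[hypos < 50].sum()  # about 0.26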
The results
And that's it -- we've solved the problem! Here are the results:
The prior distribution is uniform from 0 to 100. The mean of the posterior is 51.9, which makes sense because the result is 53 and the known bias is 1.1, so the posterior mean is (53 - 1.1). The standard deviation of the posterior is 3.7, the same as the standard deviation of the error.
To compute the probability of losing the election (if it were held today), we can loop through the hypotheses and add up the probability of all values less than 50%. The Suite class provides ProbLess, which does that calculation. The result is 0.26, which means your candidate is a 3:1 favorite.
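In code, and converting the result to odds in favor of your candidate:

p_lose = suite.ProbLess(50)       # 0.26
odds = (1 - p_lose) / p_lose      # about 2.8, close to 3:1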
In retrospect we could have computed this posterior analytically with a lot less work, which is the point I wanted to make by using a uniform prior. But in general it's not quite so simple, as we can see by incorporating a second poll:
Exercise 2: What if another poll comes out from Research 2000, showing that 49% of likely voters intend to vote for your candidate? Past polls show that this company's results tend to favor the opponent by 2.3 points, and their past errors (after correcting for this bias) have standard deviation 4.1 points. Now what should you believe?
The second update looks just like the first:
data = -2.3, 4.1, 49
suite.Update(data)
The bias is negative now because this polling company (in my fabricated world) tends to favor the opponent. Here are the results after the second update:
The mean of the new posterior is 51.6, slightly lower than the mean after the first update, 51.9. The two polls are actually consistent with each other after we correct for the biases of the two companies.
The predicted margin of victory is slightly smaller, but the uncertainty of the prediction is also smaller. Based on the second update, the probability of losing is 0.22, which means your candidate is now nearly a 4:1 favorite.
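As a sanity check, we can reproduce these numbers analytically: with a uniform prior and normal errors, the posterior is approximately normal, and the two bias-corrected polls combine by precision weighting. This is a back-of-the-envelope check, not part of the thinkbayes2 solution:

from scipy.stats import norm

est1, std1 = 53 - 1.1, 3.7           # first poll, corrected for bias
est2, std2 = 49 + 2.3, 4.1           # second poll, corrected for bias

w1, w2 = 1 / std1**2, 1 / std2**2    # precisions
mean = (w1 * est1 + w2 * est2) / (w1 + w2)   # about 51.6
std = (w1 + w2) ** -0.5                      # about 2.7

# On the integer grid, "less than 50%" means 49 or less; with a
# continuity correction, the probability of losing is about 0.22.
p_lose = norm.cdf(49.5, mean, std)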
Again, this example demonstrates Silver's point: predicting the probability of winning is more meaningful than predicting the margin of victory. And that's exactly the kind of thing Bayesian models are good for.
One more technical note: This analysis is based on the assumption that errors in one poll are independent of errors in another. It seems likely that in practice there is correlation between polls; in that case we could extend this solution to model the errors with a joint distribution that includes the correlation.
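For example, one way to sketch that extension is with a bivariate normal over the two errors; here rho is a made-up correlation, and rho = 0 recovers the independent analysis above:

import numpy as np
from scipy.stats import multivariate_normal

hypos = np.arange(101)
results = np.array([53, 49])
biases = np.array([1.1, -2.3])
stds = np.array([3.7, 4.1])
rho = 0.5    # hypothetical correlation between the two polls' errors

# covariance matrix of the joint error distribution
cov = np.outer(stds, stds) * np.array([[1, rho], [rho, 1]])

prior = np.ones(101) / 101
like = np.array([multivariate_normal.pdf(results - h, biases, cov)
                 for h in hypos])
posterior = prior * like
posterior /= posterior.sum()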