In preparation for an alien invasion, the Earth Defense League has been working on new missiles to shoot down space invaders. Of course, some missile designs are better than others; let's assume that each design has some probability of hitting an alien ship, x.
Based on previous tests, the distribution of x in the population of designs is roughly uniform between 10% and 40%. To approximate this distribution, we'll assume that x is either 10%, 20%, 30%, or 40% with equal probability.
Now suppose the new ultra-secret Alien Blaster 10K is being tested. In a press conference, an EDF general reports that the new design has been tested twice, taking two shots during each test. The results of the tests are confidential, so the general won't say how many targets were hit, but they report: "The same number of targets were hit in the two tests, so we have reason to think this new design is consistent."
Is this data good or bad; that is, does it increase or decrease your estimate of x for the Alien Blaster 10K?
Now here's a solution:
I'll start by creating a Pmf that represents the four hypothetical values of x:

from thinkbayes2 import Pmf  # assumed import; the original doesn't show where Pmf comes from

pmf = Pmf([0.1, 0.2, 0.3, 0.4])
pmf.Print()
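Since the constructor assigns equal probability to each value in the list, Print should show each hypothesis with probability 0.25:

0.1 0.25
0.2 0.25
0.3 0.25
0.4 0.25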
Before seeing the data, the mean of the distribution, which is the expected effectiveness of the blaster, is 0.25.
pmf.Mean()
Here's how we compute the likelihood of the data. The blaster takes two shots in each test, so there are three ways the two tests can end in a tie: both hit 0, both hit 1, or both hit 2 targets. If the probability of a hit on any shot is x, the probabilities of these outcomes are:
both 0: (1-x)**4
both 1: (2 * x * (1-x))**2
both 2: x**4
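Each of these terms is the square of a binomial probability: the number of hits in a two-shot test is Binomial(2, x), and a tie means the two tests produce the same count. Here's a quick cross-check using scipy.stats.binom (a sketch; prob_tie is not part of the solution above):

from scipy.stats import binom

def prob_tie(x, shots=2):
    """Probability that two independent tests, each taking `shots` shots
    with per-shot hit probability x, hit the same number of targets."""
    return sum(binom.pmf(k, shots, x)**2 for k in range(shots + 1))

print(prob_tie(0.1))  # should match x**4 + (2*x*(1-x))**2 + (1-x)**4 at x = 0.1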
Here's the likelihood function that computes the total probability of the three outcomes:
def likelihood(hypo, data):
    """Likelihood of the data under hypo.

    hypo: probability of a hit, x
    data: 'tie' or 'no tie'
    """
    x = hypo
    like = x**4 + (2 * x * (1-x))**2 + (1-x)**4
    if data == 'tie':
        return like
    else:
        return 1 - like
To see what the likelihood function looks like, I'll print the likelihood of a tie for the four hypothetical values of x:
data = 'tie'
for hypo in sorted(pmf):
    like = likelihood(hypo, data)
    print(hypo, like)
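Working the formula by hand, the likelihoods come out to about:

0.1 0.6886
0.2 0.5136
0.3 0.4246
0.4 0.3856

A tie is most likely under the smallest value of x, because with x = 0.1 both tests usually hit nothing.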
If we multiply each likelihood by the corresponding prior, we get the unnormalized posteriors:
for hypo in sorted(pmf):
    unnorm_post = pmf[hypo] * likelihood(hypo, data)
    print(hypo, pmf[hypo], unnorm_post)
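Again by hand, each likelihood gets scaled by the prior, 0.25:

0.1 0.25 0.17215
0.2 0.25 0.1284
0.3 0.25 0.10615
0.4 0.25 0.0964

These unnormalized posteriors sum to 0.5031.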
Now we can do the update by multiplying the priors in pmf by the likelihoods:
for hypo in pmf:
    pmf[hypo] *= likelihood(hypo, data)
And then normalizing pmf; the result returned by Normalize is the total probability of the data (0.5031, the same as the sum of the unnormalized posteriors above).
pmf.Normalize()
And here are the posteriors.
pmf.Print()
The lower values of x are more likely, so this evidence makes us downgrade our expectation about the effectiveness of the blaster. The posterior mean is 0.225, a bit lower than the prior mean, 0.25.
pmf.Mean()
A tie is evidence in favor of extreme values of x. Finally, here's how we can solve the problem using the Bayesian update worksheet (filled in with the values computed above):

hypo    prior    likelihood    prior * like    posterior
0.1     0.25     0.6886        0.17215         0.3422
0.2     0.25     0.5136        0.1284          0.2552
0.3     0.25     0.4246        0.10615         0.2110
0.4     0.25     0.3856        0.0964          0.1916

The prior * like column sums to 0.5031, the total probability of the data; dividing by it yields the posteriors.
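If you don't have the Pmf class handy, the whole update fits in a few lines of plain Python; here's a minimal self-contained sketch of the same computation:

# Uniform prior over the four hypothetical values of x
posterior = {x: 0.25 for x in [0.1, 0.2, 0.3, 0.4]}

# Multiply each prior by the likelihood of a tie
for x in posterior:
    posterior[x] *= x**4 + (2*x*(1-x))**2 + (1-x)**4

# Normalize; the total is the probability of the data
total = sum(posterior.values())
for x in posterior:
    posterior[x] /= total

print(total)                                     # about 0.5031
print(sum(x * p for x, p in posterior.items()))  # posterior mean, about 0.225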
In the next article, I'll present my solution to The Skeet Shooting problem.