Wednesday, April 24, 2013

The Price is Right Problem: Part Two


This article is an excerpt from Think Bayes, a book I am working on.  The entire current draft is available from http://thinkbayes.com.  I welcome comments and suggestions.

In the previous article, I presented The Price is Right problem and a Bayesian approach to estimating the value of a showcase of prizes.  This article picks up from there...


Optimal bidding

Now that we have a posterior distribution, we can use it to compute the optimal bid, which I define as the bid that maximizes expected gain.
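
In other words, if the posterior assigns probability prob(price) to each hypothetical price, the expected gain for a bid is the probability-weighted sum

    gain(bid) = Σ prob(price) · Gain(bid, price)

where the sum runs over the prices in the posterior and Gain is the payoff function defined below.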

To compute optimal bids, I wrote a class called GainCalculator:

class GainCalculator(object):

    def __init__(self, player, opponent):
        self.player = player
        self.opponent = opponent

player and opponent are Player objects.

GainCalculator provides ExpectedGains, which computes a sequence of bids and the expected gain for each bid:

    def ExpectedGains(self, low=0, high=75000, n=101):
        bids = numpy.linspace(low, high, n)

        gains = [self.ExpectedGain(bid) for bid in bids]

        return bids, gains

low and high specify the range of possible bids;
n is the number of bids to try. Here is the function
that computes expected gain for a given bid:

    def ExpectedGain(self, bid):
        suite = self.player.posterior
        total = 0
        for price, prob in suite.Items():
            gain = self.Gain(bid, price)
            total += prob * gain
        return total

ExpectedGain loops through the values in the posterior
and computes the gain for the given bid under each hypothetical
showcase price. It weights each gain with the corresponding
probability and returns the total.

Gain takes a bid and an actual price and returns the expected gain:

    def Gain(self, bid, price):
        if bid > price:
            return 0

        diff = price - bid
        prob = self.ProbWin(diff)

        if diff <= 250:
            return 2 * price * prob
        else:
            return price * prob

If you overbid, you get nothing. Otherwise we compute 
the difference between your bid and the price, which determines
your probability of winning.

If diff is $250 or less, you win both showcases. For simplicity, I assume that both showcases have the same price. Since this outcome is rare, it doesn’t make much difference.

Finally, we have to compute the probability of winning based on diff:

    def ProbWin(self, diff):
        prob = (self.opponent.ProbOverbid() + 
                self.opponent.ProbWorseThan(diff))
        return prob

If your opponent overbids, you win. Otherwise, you have to hope
that your opponent is off by more than diff. Player
provides methods to compute both probabilities:

# class Player:

    def ProbOverbid(self):
        return self.cdf_diff.Prob(-1)

    def ProbWorseThan(self, diff):
        return 1 - self.cdf_diff.Prob(diff)

This code might be confusing because the computation is now from
the point of view of the opponent, who is computing, “What is
the probability that I overbid?” and “What is the probability
that my bid is off by more than diff?”

Both answers are based on the CDF of diff [CDFs are described here].  If your opponent’s diff is less than or equal to -1, you win. If your opponent’s diff is worse than yours, you win. Otherwise you lose.
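
As a toy example (with made-up diffs, not the show data), suppose the opponent’s past diffs were -2000, -500, 100, 1500, and 4000:

    import thinkbayes

    diffs = [-2000, -500, 100, 1500, 4000]
    cdf = thinkbayes.MakeCdfFromList(diffs)

    cdf.Prob(-1)        # 0.4: the opponent overbid in 2 of 5 games
    1 - cdf.Prob(1500)  # 0.2: probability the opponent is off by more than 1500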

Finally, here’s the code that computes optimal bids:

# class Player:

    def OptimalBid(self, guess, opponent):
        self.MakeBeliefs(guess)
        calc = GainCalculator(self, opponent)
        bids, gains = calc.ExpectedGains()
        gain, bid = max(zip(gains, bids))
        return bid, gain

Given a guess and an opponent, OptimalBid computes
the posterior distribution, instantiates a GainCalculator,
computes expected gains for a range of bids and returns
the optimal bid and expected gain. Whew!
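
To see how the pieces fit together, here is a sketch of the top-level code, assuming prices1, bids1, and diffs1 (and the corresponding sequences for Player 2) have been loaded from the historical data:

    player1 = Player(prices1, bids1, diffs1)
    player2 = Player(prices2, bids2, diffs2)

    # Player 1's optimal bid, given a best guess of $20,000
    bid, gain = player1.OptimalBid(20000, player2)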




Figure 6.4: Expected gain versus bid for both players.




Figure 6.4 shows the results for both players, based on a scenario where Player 1’s best guess is $20,000 and Player 2’s best guess is $40,000.

For Player 1 the optimal bid is $21,000, yielding an expected return of almost $16,700. This is a case (which turns out to be unusual) where the optimal bid is actually higher than the contestant’s best guess.

For Player 2 the optimal bid is $31,500, yielding an expected return of almost $19,400. This is the more typical case where the optimal bid is less than the best guess.

Discussion

One of the most useful features of Bayesian estimation is that the result comes in the form of a posterior distribution. Classical estimation usually generates a single point estimate or a confidence interval, which is sufficient if estimation is the last step in the process, but if you want to use an estimate as an input to a subsequent analysis, point estimates and intervals are often not much help.

In this example, the Bayesian analysis yields a posterior distribution we can use to compute an optimal bid. The gain function is asymmetric and discontinuous (if you overbid, you lose), so it would be hard to solve this problem analytically. But it is relatively simple to do computationally.

Newcomers to Bayesian thinking are often tempted to summarize the posterior distribution by computing the mean or the maximum likelihood estimate. These summaries can be useful, but if that’s all you need, then you probably don’t need Bayesian methods in the first place.

Bayesian methods are most useful when you can carry the posterior distribution into the next step of the process to perform some kind of optimization, as we did in this chapter, or some kind of prediction, as we will see in the next chapter [which you can read here].

Monday, April 22, 2013

The Price is Right Problem

This article is an excerpt from Think Bayes, a book I am working on.  The entire current draft is available from http://thinkbayes.com.  I welcome comments and suggestions.

The Price is Right Problem

On November 1, 2007, contestants named Letia and Nathaniel appeared on The Price is Right, an American game show. They competed in a game called the Showcase, where the objective is to guess the price of a showcase of prizes. The contestant who comes closest to the actual price of the showcase, without going over, wins the prizes.

Nathaniel went first. His showcase included a dishwasher, a wine cabinet, a laptop computer, and a car. He bid $26,000.

Letia’s showcase included a pinball machine, a video arcade game, a pool table, and a cruise of the Bahamas. She bid $21,500.

The actual price of Nathaniel’s showcase was $25,347. His bid was too high, so he lost.

The actual price of Letia’s showcase was $21,578. She was only off by $78, so she won her showcase and, because her bid was off by less than $250, she also won Nathaniel’s showcase.

For a Bayesian thinker, this scenario suggests several questions:
  1. Before seeing the prizes, what prior beliefs should the contestant have about the price of the showcase?
  2. After seeing the prizes, how should the contestant update those prior beliefs?
  3. Based on the posterior distribution, what should the contestant bid?
The third question demonstrates a common use of Bayesian analysis: optimization. Given a posterior distribution, we can choose the bid that maximizes the contestant’s expected return.

This problem is inspired by an example in Cameron Davidson-Pilon’s book, Bayesian Methods for Hackers.


The prior 

To choose a prior distribution of prices, we can take advantage of data from previous episodes. Fortunately, fans of the show keep detailed records. When I corresponded with Mr. Davidson-Pilon about his book, he sent me data collected by Steve Gee at http://tpirsummaries.8m.com. It includes the price of each showcase from the 2011 and 2012 seasons, and the bids offered by the contestants.



Figure 6.1: Distribution of prices for showcases on The Price is Right, 2011-12.



Figure 6.1 shows the distribution of prices for these showcases. The most common value for both showcases is around $28,000, but the first showcase has a second mode near $50,000, and the second showcase is occasionally worth more than $70,000.

These distributions are based on actual data, but they have been smoothed by Gaussian kernel density estimation (KDE). So before we go on, I want to take a detour to talk about probability density functions and KDE.


Probability density functions

So far [in Think Bayes, that is] we have been working with probability mass functions, or PMFs. A PMF is a mapping from each possible value to its probability. In my implementation, a Pmf object provides a method named Prob that takes a value and returns a probability, also known as a “probability mass.”

A probability density function, or PDF, is the continuous version of a PMF, where the possible values make up a continuous range rather than a discrete set.

In mathematical notation, PDFs are usually written as functions; for example, here is the PDF of a Gaussian distribution with mean 0 and standard deviation 1:
f(x) = exp(−x²/2)

To represent this PDF in Python, I could define a class like this:

import math

class StandardGaussianPdf(object):

    def Density(self, x):
        # unnormalized density of a Gaussian with mean 0 and std 1
        return math.exp(-x**2 / 2)

Density takes a value, x, and returns the probability density evaluated at x.

A probability density is similar to a probability mass in one way: higher density indicates that a value is more likely.

But a density is not a probability. If you integrate a density over a continuous range, the result is a probability. But for the applications in this book we seldom have to do that.
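
If you want to check, here is an example that integrates the standard Gaussian density using scipy:

    import scipy.integrate
    import scipy.stats

    # probability of falling within one standard deviation of the mean
    prob, _ = scipy.integrate.quad(scipy.stats.norm.pdf, -1, 1)
    # prob is about 0.68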

In this book we primarily use probability densities as part of a Likelihood function. We will see an example soon.


Representing Pdfs

Before we get back to The Price is Right, I want to present a more general way to represent PDFs.

thinkbayes.py provides a class named Pdf that defines two methods, Density and MakePmf:

class Pdf(object):

    def Density(self, x):
        raise UnimplementedMethodException()

    def MakePmf(self, xs):
        pmf = Pmf()
        for x in xs:
            pmf.Set(x, self.Density(x))
        pmf.Normalize()
        return pmf



Pdf is an abstract type, which means that it defines
the interface a Pdf is supposed to have, but does not provide
a complete implementation. Specifically, Pdf provides
MakePmf but not Density.  [PMFs are described in Chapter 2 of Think Bayes.]

A concrete type is a class that extends an abstract parent class and provides an implementation of the missing methods.

For example, GaussianPdf extends Pdf and provides Density:

class GaussianPdf(Pdf):

    def __init__(self, mu, sigma):
        self.mu = mu
        self.sigma = sigma
        
    def Density(self, x):
        density = scipy.stats.norm.pdf(x, 
                                       loc=self.mu, 
                                       scale=self.sigma)
        return density



__init__ takes mu and sigma, which are
the mean and standard deviation of the distribution.
Density uses a function from scipy to evaluate the Gaussian PDF.
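
For example, the density of the standard Gaussian, evaluated at the mean:

    pdf = GaussianPdf(mu=0, sigma=1)
    pdf.Density(0)    # about 0.399, which is 1 / sqrt(2 * pi)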

The Gaussian PDF is defined by a simple mathematical function, so it is easy to evaluate. And it is useful because many quantities in the real world have distributions that are approximately Gaussian.

But with real data, there is no guarantee that the PDF is Gaussian, or any other simple mathematical function. In that case we can use a data sample to estimate the PDF of the whole population.

For example, in The Price Is Right data, we have 313 prices for the first showcase. We can think of these values as a sample from the population of all possible showcase prices.

Near the middle of the distribution, we see the following values:

28800, 28868, 28941, 28957, 28958 

In the sample, no values appear between 28801 and 28867, but there is no reason to think that these values are impossible. Based on our background information, we would expect all values in this range to be equally likely. In other words, we expect the PDF to be reasonably smooth.

Kernel density estimation (KDE) is an algorithm that takes a sample of values and finds an appropriately-smooth PDF that fits the data. You can read about the details at http://en.wikipedia.org/wiki/Kernel_density_estimation.

scipy provides an implementation of KDE. thinkbayes provides a class called EstimatedPdf that extends Pdf and uses KDE:

class EstimatedPdf(Pdf):

    def __init__(self, sample):
        xs = numpy.array(sample, dtype=numpy.double)
        self.kde = scipy.stats.gaussian_kde(xs)

    def Density(self, x):
        return self.kde.evaluate(x)


__init__ takes a sample, converts it to a NumPy array,
and computes a kernel density estimate. The result is a
gaussian_kde object that provides an evaluate
method.

Density takes a value, calls gaussian_kde.evaluate, and returns the resulting density.

Finally, here’s an outline of the code I used to generate Figure 6.1:

    prices = ReadData()
    kde = thinkbayes.EstimatedPdf(prices)

    low, high = 0, 75000
    n = 101
    xs = numpy.linspace(low, high, n) 
    pmf = kde.MakePmf(xs)

    myplot.Pmf(pmf)


And now back to The Price is Right.


Modeling the contestants

The PDFs in Figure 6.1 estimate the distribution of possible prices for each showcase. If you were a contestant on the show, you could use this distribution to quantify your prior belief about the price of the showcases (before you see the prizes).

To update these priors, you have to answer these questions:
  1. What data should we consider and how should we quantify it?
  2. Can we compute a Likelihood function; that is, for each hypothetical value of price, can we compute the conditional likelihood of the data?
To answer these questions, I am going to model the contestant as a price-guessing instrument with known error characteristics. In other words, when the contestant sees the prizes, he or she guesses the price of each prize—ideally without taking into consideration the fact that the prize is part of a showcase—and adds up the prices. Let’s call this total guess.

Under this model, the question we have to answer is, “If the actual price is price, what is the likelihood that the contestant’s total estimate would be guess?”

Or if we define

    error = price - guess

then we could ask, “What is the likelihood
that the contestant’s estimate is off by error?”
To answer this question, we can use the historical data again. Figure 6.2 shows the cumulative distribution of diff, the difference between the contestant’s bid and the actual price of the showcase.

The definition of diff is

    diff = price - bid

When diff is negative, the bid is too high. As an
aside, we can use this CDF to compute the probability that the
contestants overbid: the first contestant overbids 25% of the
time; the second contestant overbids 29% of the time.
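
Those percentages come straight from the data. Here is a sketch, assuming diffs1 and diffs2 are the sequences of diffs for the two contestants:

    import thinkbayes

    cdf1 = thinkbayes.MakeCdfFromList(diffs1)
    cdf2 = thinkbayes.MakeCdfFromList(diffs2)

    cdf1.Prob(-1)    # about 0.25: fraction of games where Player 1 overbid
    cdf2.Prob(-1)    # about 0.29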



Figure 6.2: Cumulative distribution (CDF) of the difference between the contestant’s bid and the actual price.




We can also use this distribution to estimate the reliability of the contestants’ guesses. This step is a little tricky because we don’t actually know the contestants’ guesses; we only know what they bid.

In Figure 6.2 we can see that the bids are biased; that is, they are more likely to be too low than too high. And that makes sense, given the rules of the game.

So we’ll have to make some assumptions. Specifically, I assume that the distribution of error is Gaussian with mean 0 and the same variance as diff.

The Player class implements this model:

class Player(object):

    # equally spaced values where pdf_price is evaluated (see PmfPrice below)
    n = 101
    price_xs = numpy.linspace(0, 75000, n)

    def __init__(self, price, bid, diff):
        self.price = price
        self.bid = bid
        self.diff = diff

        self.pdf_price = thinkbayes.EstimatedPdf(price)
        self.cdf_diff = thinkbayes.MakeCdfFromList(diff)

        mu = 0
        sigma = numpy.std(self.diff)
        self.pdf_error = thinkbayes.GaussianPdf(mu, sigma)

price is a sequence of showcase prices, bid is a
sequence of bids, and diff is a sequence of diffs, where
again diff = price - bid.

pdf_price is the smoothed PDF of prices, estimated by KDE. cdf_diff is the cumulative distribution of diff, which we saw in Figure 6.2. And pdf_error is the PDF that characterizes the distribution of errors; where error = price - guess.

Again, we use the variance of diff to estimate the variance of error. But contestants’ bids are sometimes strategic; for example, if Player 2 thinks that Player 1 has overbid, Player 2 might make a very low bid. In that case diff does not reflect error. If this strategy is common, the observed variance in diff might overestimate the variance in error. Nevertheless, I think it is a reasonable modeling decision.

As an alternative, someone preparing to appear on the show could estimate their own distribution of error by watching previous shows and recording their guesses and the actual prices.


Likelihood

Now we are ready to write the likelihood function. As usual, I define a new class that extends thinkbayes.Suite:

class Price(thinkbayes.Suite):

    def __init__(self, pmf, player, name=''):
        thinkbayes.Suite.__init__(self, name=name)

        for price, prob in pmf.Items():
            self.Set(price, prob)

        self.player = player

pmf represents the prior distribution. The for
loop copies the values and probabilities from pmf into
the new Suite.

player is a Player object as described in the previous section.  And here’s Likelihood:

    def Likelihood(self, hypo, data):
        price = hypo
        guess = data

        error = price - guess
        like = self.player.ErrorDensity(error)

        return like

hypo is the hypothetical price of the showcase. data
is the contestant’s best guess at the price. error is
the difference, and like is the likelihood of the data,
given the hypothesis.

ErrorDensity is defined in Player:
# class Player:

    def ErrorDensity(self, error):
        return self.pdf_error.Density(error)

ErrorDensity works by evaluating pdf_error at
the given value of error.

The result is a probability density, which means we can’t treat it as a probability. But remember that Likelihood does not really need to compute a probability; it only has to compute something proportional to a probability. As long as the constant of proportionality is the same for all likelihoods, it gets cancelled out when we normalize the posterior distribution.

And therefore, a probability density is a perfectly good likelihood.
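
To see why the constant cancels, write out the update for a single hypothesis:

    P(hypo | data) = P(hypo) · like(data | hypo) / Σ P(hypo′) · like(data | hypo′)

If every like is multiplied by the same constant, it appears in both the numerator and the denominator, so it cancels.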


Update

Player provides a method that takes the contestant’s guess and computes the posterior distribution:

    def MakeBeliefs(self, guess):
        pmf = self.PmfPrice()
        self.prior = Price(pmf, self, name='prior')
        self.posterior = self.prior.Copy(name='posterior')
        self.posterior.Update(guess)

PmfPrice evaluates pdf_price at an equally-spaced
series of values:

    def PmfPrice(self):
        return self.pdf_price.MakePmf(self.price_xs)

The result is a new Pmf object, which we use to construct
the prior. To construct the posterior, we make a copy of the
prior and then invoke Update, which invokes Likelihood
for each hypothesis, multiplies the priors by the likelihoods,
and then renormalizes.
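
Conceptually, Update looks something like this (a sketch, not necessarily the exact thinkbayes implementation):

    # class Suite (sketch):

    def Update(self, data):
        for hypo in self.Values():
            like = self.Likelihood(hypo, data)
            self.Mult(hypo, like)
        return self.Normalize()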

So let’s get back to the original scenario. Suppose you are Player 1 and when you see your showcase, your best guess is that the total price of the prizes is $20,000.

The following code constructs and plots your prior and posterior beliefs about the actual price:

    player1.MakeBeliefs(20000)
    myplot.Pmf(player1.prior)
    myplot.Pmf(player1.posterior)


Figure 6.3: Prior and posterior distributions for Player 1, based on a best guess of $20,000.


Figure 6.3 shows the results. The value of your guess is on the low end of the prior range, so the posterior is shifted to the left. The mean of the posterior is $25,096; the most likely value is $24,000.

On one level, this result makes sense. The most likely value in the prior is $27,750. Your best guess is $20,000. And the most likely value in the posterior is about halfway in between.

On another level, you might find this result bizarre, because it suggests that if you think the price is $20,000, then you should believe the price is $24,000.

To resolve this apparent paradox, remember that you are combining two sources of information, historical data about past showcases and guesses about the prizes you see.

We are treating the historical data as the prior and updating it based on your guesses, but we could equivalently use your guess as a prior and update it based on historical data.

If you think of it that way, maybe it is less surprising that the most likely value in the posterior is not your original guess.

In the next installment, we'll use the posterior distribution to compute the optimal bid for each player.

Thursday, April 11, 2013

The price is right

On the reddit statistics forum, I recently posted a link to my tutorial on Bayesian statistics.  One of my fellow redditors drew my attention to this article, which uses pymc to do a Bayesian analysis of The Price is Right.  He or she asked how I would solve this problem using the framework in Think Bayes.  So, here goes.

First, we have to define a Suite of hypotheses.  In this example, each hypothesis represents a belief about the total price of the showcase.  We are told, based on data from previous shows, that a reasonable prior distribution is normal with mean 35000 dollars and standard deviation 7500.

So here's the code that creates the Suite:

class Price(thinkbayes.Suite):

    def __init__(self, error_sigma):
        """Constructs the suite.

        error_sigma: standard deviation of the distribution of error
        """
        thinkbayes.Suite.__init__(self)
        pmf = thinkbayes.MakeGaussianPmf(35000, 7500, num_sigmas=4)

        # copy items from pmf to self
        for val, prob in pmf.Items():
            self.Set(val, prob)

        # store error_sigma for use in Likelihood
        self.error_sigma = error_sigma

MakeGaussianPmf makes a Pmf (probability mass function) that approximates a normal distribution.  It truncates the range of the distribution 4 sigmas in each direction from the mean.
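
So the hypotheses span roughly 35000 ± 4 · 7500 dollars; we can check the extremes using the Pmf’s Values method:

    pmf = thinkbayes.MakeGaussianPmf(35000, 7500, num_sigmas=4)

    min(pmf.Values())    # about 5000
    max(pmf.Values())    # about 65000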

I'll explain error_sigma in a minute.

Now that we have a suite of hypotheses, we think about how to represent the data.  In this case, the "data" is my best guess about the total value of the showcase, which we can think of as a measurement produced by a somewhat unreliable instrument, my brain.

According to the problem statement, my best guess is 3000 for the price of the snowmobile, 12000 for the price of the trip, so 15000 for the total price.

Now we need a Likelihood function that takes a hypothetical price for the showcase, and my best guess, and returns the Likelihood of the data given the hypothesis.  That is, if the actual price of the showcase is X, what is the probability that I would guess 15000?

To answer that question, we need some information about how good I am at guessing prices, and that's where error_sigma comes in.

We are told that my uncertainty about the price of the snowmobile can be captured by a normal distribution with sigma=500.  And my uncertainty about the price of the trip is normal with sigma=3000.  So let's compute the standard deviation of my total error:

    import math

    error_snowmobile = 500
    error_trip = 3000
    # the errors are independent, so their variances add
    error_total = math.sqrt(error_snowmobile**2 + error_trip**2)  # about 3041

Now we can create the Suite

    suite = Price(error_total)

and update it with my guess

    my_guess = 15000
    suite.Update(my_guess)

When we invoke Update, it invokes Likelihood once for each hypothetical showcase price.  Here's the Likelihood function:

    def Likelihood(self, hypo, data):
        """Computes the likelihood of the data under the hypothesis.

        hypo: actual price
        data: my guess
        """
        actual_price = hypo
        my_guess = data

        error = my_guess - actual_price
        like = thinkbayes.EvalGaussianPdf(
            mu=0, 
            sigma=self.error_sigma,
            x=error)

        return like

x is the error; that is, how much my guess is off by.  like is the likelihood that I would be off by that much, computed by evaluating the density function of the Gaussian with sigma=error_sigma.

And we're done.  Let's see what the results look like.


The mean of the posterior distribution is 17800, substantially less than the prior mean (35000).  It is also substantially less than the result reported in the original article, near 28000.  So we are left with a suite of three hypotheses:

1) There is a mistake in my implementation.
2) There is a mistake in the other author's implementation.
3) We are actually making subtly different modeling assumptions.

Option #3 is possible because I am not sure I understand the model in the original article.  The author describes my beliefs about the snowmobile and the trip as "priors", which suggests that they are going to get updated.  In contrast, I am treating my guess about the prices as data (that is, a summary of what I learned by seeing the contents of the showcase), but I am also modeling myself as a measurement instrument with a characteristic distribution of errors.

Under my interpretation, the posterior shown above makes sense.  For example, if my guess is 15000, and the standard deviation of my guesses is about 3041, then it is very unlikely that I am off by 4 standard deviations, so the upper bound of the posterior should be around 15000 + 4 * 3041 ≈ 27200.  That makes the mean reported in the original article (28000) seem too high.

But maybe I am not interpreting the statement of the problem (or the model) as intended.  I will check in with my correspondent and update this article when we have an explanation!

UPDATE April 18, 2013:  I exchanged a few emails with the author of the original article, Cameron Davidson-Pilon.  He found a bug in his code that explains at least part of the difference between his results and mine.  So I think he is planning to update his article and the book he is working on.

He also sent me some data on the value of recent showcases on The Price is Right and the bids offered by the contestants.  The data were collected by Steve Gee and posted at The Price is Right Stats.

I have written code that uses this data to form the prior distribution of prices, and also to estimate the distribution of errors for the contestants.  And I am writing it up as a chapter in Think Bayes.  I'll post the new chapter here when it is done!

UPDATE April 22, 2013: I have added a chapter to Think Bayes, and I am publishing it as a two-part series, starting here.

Tuesday, April 9, 2013

Freshman hordes regress to the mean

More nones, no nuns

For several years I have been following one of the most under-reported stories of the decade: the fraction of college freshmen who report no religious preference has tripled since 1985, from 8% to 24%, and the trend is accelerating.

Two years ago I wrote Freshman hordes more godless than ever; last year I updated it with Freshman hordes even more godless.  Each year, the number of students with no religious preference increased, and the number attending religious services decreased.

In last year's installment, I made the bold prediction that the trend would continue, and that the students starting college in 2012 would again be the most godless ever.  It turns out I was wrong: attendance went up slightly, and the fraction of "Nones" dropped slightly, in both cases reverting toward long-term trends.

My analysis is based on survey results from the Cooperative Institutional Research Program (CIRP) of the Higher Education Research Institute (HERI).  In 2012, more than 190,000 students at 283 colleges and universities completed the CIRP Freshman Survey, which includes questions about students’ backgrounds, activities, and attitudes.

In one question, students select their “current religious preference,” from a choice of seventeen common religions, “Other religion,” or “None.”

Another question asks students how often they “attended a religious service” in the last year. The choices are “Frequently,” “Occasionally,” and “Not at all.” Students are instructed to select “Occasionally” if they attended one or more times.

The following figure shows the fraction of Nones over more than 40 years of the survey:


The blue line shows actual data through 2011; the red line shows a quadratic fit to the data.  The dark gray region shows a 90% confidence interval, taking into account sampling error, so it reflects uncertainty about the parameters of the fit.

The light gray region shows a 90% confidence interval taking into account both sampling error and residual error.  So it reflects total uncertainty about the predicted value, including uncertainty due to random variation from year to year.
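
For the curious, here is a minimal sketch of the fit itself (computing the confidence intervals takes more work), assuming years and nones are arrays of survey years and percentages:

    import numpy

    coeffs = numpy.polyfit(years, nones, 2)     # quadratic fit
    predicted = numpy.polyval(coeffs, 2012)     # predicted percentage for 2012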

We expect the new data point from 2012, shown as a blue square, to fall within the light gray interval, and it does.  In fact, at 23.8% it falls almost exactly on the fitted curve.

Here is the corresponding plot for attendance at religious services:

Again, the new data point for 2012, 26.8%, falls comfortably in the predicted range.  Don't listen to Nate Silver; prediction is easy :)

Predictions for 2013

Using the new 2012 data, we can generate predictions for 2013.  Here is the revised plot for "Nones":

The prediction for next year is that the fraction of Nones will hit a new all-time high at 25% (up from 23.8%).

And here is the prediction for "No attendance":

The prediction for 2013 is a small decrease to 26.6% (from 26.8%).  I'll be back next year to check on these predictions.


Other updates

1) This year the survey repeated two questions from 2010, asking students if they consider themselves "Born again Christian" or "Evangelical".  The fraction reporting "Born again" dropped from 22.8% to 20.2%.  The fraction who consider themselves Evangelical dropped from 8.9% to 8.5%.  But it's too early to declare a trend.

2) As always, more males than females report no religious preference.  The gender gap increased this year, but still falls in the predicted range, as shown in the following plot:

Evidence that the gender gap is increasing is strong.  The p-value of the slope of the fitted curve is less than 10⁻⁵.


Data Source


The American Freshman: National Norms Fall 2012.
Pryor, J.H., Eagan, K., Palucki Blake, L., Hurtado, S., Berdan, J., Case, M.H.
ISBN: 978-1-878477-22-4. 90 pages. January 2013.


This and all previous reports are available from the HERI publications page.