Of course, there's a Wikipedia page about it, which I'll borrow to provide the background:
"Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. Once or twice during the experiment, Beauty will be awakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening.
A fair coin will be tossed to determine which experimental procedure to undertake: if the coin comes up heads, Beauty will be awakened and interviewed on Monday only. If the coin comes up tails, she will be awakened and interviewed on Monday and Tuesday. In either case, she will be awakened on Wednesday without interview and the experiment ends.
Any time Sleeping Beauty is awakened and interviewed, she is asked, 'What is your belief now for the proposition that the coin landed heads?'"

The problem is discussed at length on this CrossValidated thread. As the person who posted the question explains, there are two common reactions to this problem:
The Halfer position. Simple! The coin is fair--and SB knows it--so she should believe there's a one-half chance of heads.
The Thirder position. If this experiment were repeated many times, the coin would come up heads on only one third of the occasions when SB is awakened. So her probability for heads should be one third.

The thirder position is correct, and I think the argument based on long-run averages is the most persuasive. From Wikipedia:
Suppose this experiment were repeated 1,000 times. It is expected that there would be 500 heads and 500 tails. So Beauty would be awoken 500 times after heads on Monday, 500 times after tails on Monday, and 500 times after tails on Tuesday. In other words, only in one-third of the cases would heads precede her awakening. This long-run expectation should give the same expectations for the one trial, so P(Heads) = 1/3.

But here's the difficulty (from CrossValidated):
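The long-run argument is easy to check by simulation. Here's a minimal sketch in Python (the variable names are mine; the setup is from the quoted passage):

```python
import random

random.seed(17)

# Simulate many repetitions of the experiment.  Each coin flip
# produces one awakening (Monday) after heads, or two awakenings
# (Monday and Tuesday) after tails.  Then ask: in what fraction of
# all awakenings did the flip come up heads?
trials = 100_000
heads_awakenings = 0
total_awakenings = 0

for _ in range(trials):
    heads = random.random() < 0.5
    wakes = 1 if heads else 2
    total_awakenings += wakes
    if heads:
        heads_awakenings += wakes

frac = heads_awakenings / total_awakenings
print(frac)  # close to 1/3
```

With 100,000 flips the fraction lands within a small tolerance of 1/3, matching the 500/1500 ratio in the quoted argument.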
As a stark raving Bayesian, I find this mildly disturbing. Is this an example where frequentism gets it right and Bayesianism gets it wrong?

One of the responses on reddit pursues the same thought:
I wonder where exactly in Bayes' rule does the formula "fail". It seems like P(wake|H) = P(wake|T) = 1, and P(H) = P(T) = 1/2, leading to the P(H|wake) = 1/2 conclusion.

I have come to a resolution of this problem that works, I think, but it made me realize the following subtle point: even if two things are inevitable, that doesn't make them equally likely.
Is it possible to get 1/3 using Bayes' rule?
In the previous calculation, the priors are correct: P(H) = P(T) = 1/2.
It's the likelihoods that are wrong. The datum is "SB wakes up". This event happens once if the coin is heads and twice if it is tails, so the likelihood ratio is P(wake|H) / P(wake|T) = 1/2.
If you plug that into Bayes's theorem, you get the correct answer, 1/3.
This is an example where the odds form of Bayes's theorem is less error prone: the prior odds are 1:1. The likelihood ratio is 1:2, so the posterior odds are 1:2. By thinking in terms of likelihood ratio, rather than conditional probability, we avoid the pitfall.
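Here is what that odds-form update looks like in code, as a minimal sketch in Python (the helper name is mine, not part of the original):

```python
def update_odds(prior_odds, likelihood_ratio):
    # Bayes's theorem, odds form:
    # posterior odds = prior odds * likelihood ratio
    return prior_odds * likelihood_ratio

# Prior odds of heads vs. tails are 1:1.  The likelihood ratio
# P(wake|H) / P(wake|T) is 1:2, because tails produces two awakenings.
odds = update_odds(1.0, 1/2)     # posterior odds of heads: 1:2
prob_heads = odds / (1 + odds)   # convert odds to probability
print(prob_heads)                # 0.3333...
```

Working in odds keeps the arithmetic to a single multiplication, which is exactly why this form is less error prone here.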
If this example is still making your head hurt, here's an analogy that might help: suppose you live near a train station, and every morning you hear one express train and two local trains go past. The probability of hearing an express train is 1, and the probability of hearing a local train is 1. Nevertheless, the likelihood ratio is 1:2, and if you hear a train, the probability is only 1/3 that it is the express.
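The train analogy can be simulated the same way; a quick sketch (the station setup is from the analogy, the code is mine):

```python
import random

random.seed(1)

# Each morning one express and two local trains go past.  Condition
# on the event "a train is going past" and ask how often it is the
# express.  Hearing each kind of train is certain, but the passings
# are not equally likely to be the express.
samples = 100_000
passings = ["express", "local", "local"]
express = sum(random.choice(passings) == "express" for _ in range(samples))
frac = express / samples
print(frac)  # close to 1/3
```

Both events have probability 1, yet conditioning on "a train is passing" still yields 1/3 for the express, which is the whole point of the analogy.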
[UPDATE 11 November 2015] Peter Norvig writes about the Sleeping Beauty problem in this IPython notebook. He agrees that the correct answer is 1/3:
The "halfers" argue that before Sleeping Beauty goes to sleep, her unconditional probability for heads should be 1/2. When she is interviewed, she doesn't know anything more than before she went to sleep, so nothing has changed, so the probability of heads should still be 1/2. I find two flaws with this argument. First, if you want to convince me, show me a sample space; don't just make philosophical arguments. (Although a philosophical argument can be employed to help you define the right sample space.) Second, while I agree that before she goes to sleep, Beauty's unconditional probability for heads should be 1/2, I would say that both before she goes to sleep and when she is awakened, her conditional probability of heads given that she is being interviewed should be 1/3, as shown by the sample space.
[UPDATE June 15, 2015] In the comments below, you’ll see an exchange between me and a reader named James. It took me a few tries to understand his question, so I’ll take the liberty of editing the conversation to make it clearer (and to make me seem a little quicker on the uptake):
James: I'd be interested in your reaction to the following extension. Before going to sleep on Sunday, Sleeping Beauty makes a bet at odds of 3:2 that the coin will come down heads. (This is favourable for her when the probability of heads is 1/2, and unfavourable when the probability of heads is 1/3). She is told that whenever she is woken up, she will be offered the opportunity to cancel any outstanding bets. Later she finds herself woken up, and asked whether she wants to cancel any outstanding bets. Should she say yes or no? (Let's say she doesn't have access to any external randomness to help her choose). Is her best answer compatible with a "belief of 1/3 that the coin is showing heads"?
Allen: If the bet is only resolved once (on Wednesday), then SB should accept the bet (and not cancel it) because she is effectively betting on a coin toss with favorable odds, and the whole sleeping-waking scenario is irrelevant.
James: Right, the bet is only resolved once. So, we agree that she should not cancel. But isn't there something odd? Put yourself in SB's position when you are woken up. You say that you have a "belief of 1/3 in the proposition that the coin is heads". The bet is unfavourable to you if the probability of heads is 1/3. And yet you don't cancel it. That suggests one sense in which you do NOT have a belief of 1/3 after all.
Allen: Ah, now I see why this is such an interesting problem. You are right that I seem to have SB keeping a bet that is inconsistent with her beliefs. But SB is not obligated to bet based on her current beliefs. If she knows that more information is coming in the future, she can compute a posterior based on that future information and bet accordingly.
Each time she wakes up, she should believe that she is more likely to be in the Tails scenario -- that is, that P(H) = 1/3 -- but she also knows that more information is coming her way.
Specifically, she knows that when she wakes up on Wednesday, and is told that it is Wednesday and the experiment is over, she will update her beliefs and conclude that the probability of Heads is 50% and the bet is favorable.
So when she wakes up on Monday or Tuesday and has the option to cancel the bet, she could think: "Based on my current beliefs, this bet is unfavorable, but I know that before the bet is resolved I will get more information that makes the bet favorable. So I will take that future information into account now and keep the bet (decline to cancel)."
I think the weirdness here is not in her beliefs but in the unusual scenario where she knows that she will get more information in the future. The Bayesian formulation of the problem tells you what she should believe after performing each update, but [the rest of the sentence deleted because I don’t think it’s quite right any more].
Upon further reflection, I think there is a general rule here:
When you evaluate a bet, you should evaluate it relative to what you will believe when the bet is resolved, which is not necessarily what you believe now. I’m going to call this the Fundamental Theorem of Betting, because it reminds me of Sklansky’s Fundamental Theorem of Poker, which says that the correct decision in a poker game is the decision you would make if all players’ cards were visible.
Under normal circumstances, we don’t know what we will believe in the future, so we almost always use our current beliefs as a heuristic for, or maybe estimate of, our future beliefs. Sleeping Beauty’s situation is unusual because she knows that more information is coming in the future, and she knows what the information will be!
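To make the rule concrete, here is the expected value of James's 3:2 bet computed both ways, using the stakes from the dialogue below ($100 staked, $150 won on heads); the arithmetic is mine:

```python
# Bet $100 on heads at 3:2: win $150 if heads, lose $100 if tails.

# Evaluated at what SB will believe when the bet is settled on
# Wednesday -- P(heads) = 1/2 -- the bet is favorable:
ev_at_resolution = (1/2) * 150 + (1/2) * (-100)
print(ev_at_resolution)   # 25.0

# Evaluated at her current, per-awakening belief -- P(heads) = 1/3 --
# it looks unfavorable, which is the apparent inconsistency:
ev_per_awakening = (1/3) * 150 + (2/3) * (-100)
print(ev_per_awakening)   # about -16.67
```

The rule above says the first number is the one that matters, because 1/2 is what SB will believe when the bet is resolved.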
To see how this theorem holds up, let me run the SB scenario and see if we can make sense of Sleeping Beauty’s beliefs and betting strategy:
Experimenter: Ok, SB, it’s Sunday night. After you go to sleep, we’re going to flip this fair coin. What do you believe is the probability that it will come up heads, P(H)?
Sleeping Beauty: I think P(H) is ½.
Ex: Ok. In that case, I wonder if you would be interested in a wager. If you bet on heads and win, I’ll pay 3:2, so if you bet $100, you will either win $150 or lose $100. Since you think P(H) is ½, this bet is in your favor. Do you want to accept it?
SB: Sure, why not?
Ex: Ok, on Wednesday I’ll tell you the outcome of the flip and we’ll settle the bet. Good night.
Ex: Good morning!
SB: Hello. Is it Wednesday yet?
Ex: No, it’s not Wednesday, but that’s all I can tell you. At this point, what do you think is the probability that I flipped heads?
SB: Well, my prior was P(H) = ½. I’ve just observed an event (D = waking up before Wednesday) that is twice as likely under the tails scenario, so I’ll update my beliefs and conclude that P(H|D) = ⅓.
Ex: Interesting. Well, if the probability of heads is only ⅓, the bet we made Sunday night is no longer in your favor. Would you like to call it off?
SB: No, thanks.
Ex: But wait, doesn’t that mean that you are being inconsistent? You believe that the probability of heads is ⅓, but you are betting as if it were ½.
SB: On the contrary, my betting is consistent with my beliefs. The bet won’t be settled until Wednesday, so my current beliefs are not important. What matters is what I will believe when the bet is settled.
Ex: I suppose that makes sense. But do you mean to say that you know what you will believe on Wednesday?
SB: Normally I wouldn’t, but this scenario seems to be an unusual case. Not only do I know that I will get more information tomorrow; I even know what it will be.
Ex: How’s that?
SB: When you give me the amnesia drug, I will forget about the update I just made and revert to my prior. Then when I wake up on Wednesday, I will observe an event (E = waking up on Wednesday) that is equally likely under the heads and tails scenarios, so my posterior will equal my prior, I will believe that P(H|E) is ½, and I will conclude that the bet is in my favor.
Ex: So just before I tell you the outcome of the bet, you will believe that the probability of heads is ½?
SB: Yes, that's right.
Ex: Well, if you know what information is coming in the future, why don’t you do the update now, and start believing that the probability of heads is ½?
SB: Well, I can compute P(H|E) now if you want. It’s ½ -- always has been and always will be. But that’s not what I should believe now, because I have only seen D, and not E yet.
Ex: So right now, do you think you are going to win the bet?
SB: Probably not. If I’m losing, you’ll ask me that question twice. But if I’m winning, you’ll only ask once. So ⅔ of the time you ask that question, I’m losing.
Ex: So you think you are probably losing, but you still want to keep the bet? That seems crazy.
SB: Maybe, but even so, my beliefs are based on the correct analysis of my situation, and my decision is consistent with my beliefs.
Ex: I’ll need to think about that. Well, good night.