Comments feed: "Probably Overthinking It" by Allen Downey (updated 2014-08-21)

Henri (2014-08-21):
Ok I see, total probability..., thanks!

Allen Downey (2014-08-19):
I plugged the previous values into Bayes's theorem:

    P(A|E) = P(A) P(E|A) / P(E)

where the denominator P(E) is

    P(A) P(E|A) + P(B) P(E|B)

All clear?

Henri (2014-08-16):
Hi Allen.
In 3), you end up with:

    P(A|E) = 8/54 ~ 0.15

How do you determine that P(E) = 0.54?

Allen Downey (2014-08-14):
That's cool. Do you have your R code on GitHub or some other public repo? I think others would like to see it. Let me know and I will add a link to it.

Reuben Gann (2014-08-07, 13:10):
I was thinking along those lines. So, essentially, we say (in R, sorry):

    x <- seq(from=14000, to=64400, by=700)
    Like <- dnorm(x - 20000, mean=0, sd=sd(SC1Diff))
    Prior <- approx(SC1PDF$x, SC1PDF$y, x)
    Post <- Prior$y * Like
    Post <- Post / sum(Post)

where SC1PDF is the kernel density approximation to the data sets. I do indeed match your chart in the book. Thanks! Love the book, but I'm translating it into R as I go, rather than using the Python framework, so it's just a bit tougher. Appreciate it.

Allen Downey (2014-08-07, 12:54):
Hi Reuben,
P(20000 | H_x) is the probability that you guess 20000, given that the actual value is x, so that's the same as the probability that diff is (x - 20000). We can't really compute that probability, but we can compute a density proportional to it by evaluating the PDF of diff at (x - 20000).

Does that make sense?

Reuben Gann (2014-08-07, 12:47):
My understanding is getting lost in the Python. You are trying to compute the posterior for "E = my guess is 20000", where

    P(H_x | 20000) = P(20000 | H_x) P(H_x) / P(20000),

where x = 0..75000 and H_x means the price is x, correct? You assume that the distribution of errors is proportional to e^{-x^2 / 2 sigma^2}, where sigma is the standard deviation of diff (which is $6899.91 for Showcase 1). But what is P(20000 | H_x)?

Allen Downey (2014-08-07, 12:44):
Very interesting. Thanks for this comment!

Max Power (2014-08-05, 15:28):
Greetings Professor. Thank you for your kind responses on a different thread. I couldn't help but comment on this one, too.

I am mostly interested in statistics, coming at it from the point of view of investing in the stock market.
There are many factors that an investor could choose from when selecting a share to buy, and of course one must try to winnow these factors out.

Although "Efficient Market" theory has been around for some time, there has been emerging interest in the psychology of people's investing decisions. The dominant faction are what we might call the "behaviouralists". I call these the "glass half empty" guys, as their central thesis is that human beings are inherently irrational. So they are subject to such things as "hindsight bias", "confirmation bias", "base rate fallacies", and many more. Particularly relevant here, though, are "conservatism" (they underweigh new sample evidence when compared to Bayesian belief revision) and conflating correlation with causation. So, if you believe this school of thought, human beings are big bags of irrationality.

On the other hand, the psychologist Gigerenzer has studied the use of bounded rationality and heuristics in decision making. His work seems almost diametrically opposed to the behaviouralists'. He is a "glass half full" guy, and he makes a good demonstration of how humans are actually capable of making good decisions under uncertainty. He showed that under some circumstances, simple heuristics can beat statistical methods, often because the latter tend to over-fit to training data.

Your post neatly highlights the contrast between the two camps: it demonstrates how an apparent irrationality can actually be rational.

In a way, it's quite remarkable when you think about it: Mother Nature is trying to endow humans with optimal survival decision-making skills in the absence of carefully tabulated statistical tables.
I wonder just how much of irrational human behaviour will later be found to be a best-fit adaptation to our environment.

I hope this has been interesting.

Allen Downey (2014-08-05, 13:38):
Don't worry about overposting, but at some point I might have to stop overreplying :)

Reading between the lines, I think you are coming face to face with one of the central issues of Bayesian inference, which is how to interpret probabilities, and especially the prior probability.

In this case, P(H) is the prior probability that I am in the right class. If I chose the classroom at random, P(H) would be low. But I am basing my solution on the assumption that I did not choose the classroom at random, but rather tried to go to the right place. And based on my prior experience with navigating unfamiliar campuses, I estimate that my chance of being in the right place is about 90%.

In frequentist terms, you could say that the relevant sample space is "all the times I've tried to find the right room", rather than "all the classrooms on campus."

In (subjective) Bayesian terms, you would say that 90% is my subjective degree of belief that I am in the right place, based on relevant background information.

But I would not say (as I think you did) that I am making a claim about the university, or that my Downeyian university is very different from a real university.
My analysis is based on a model and the simplifications that come with it, but I don't think the model is as weird as you suggest.

Thanks for this line of questions; I think it is productive.

Max Power (2014-08-05, 13:25):
Many thanks for your replies, professor. I hope I'm not overposting.

Would it be fair to say that my misconception of the problem is that I'm taking Olin University as the population, whereas I should be considering the population not as Olin University, but as a "Downeyian University"?

The Downeyian University is a special university ... one in which you have a 90% chance of turning up to the right class ... and not Olin University, in which you would have only a very small chance of turning up at the right class if you just chose one at random.

And this Downeyian University is a very strange university indeed ... because although you can specify which classes you are likely to turn up correctly for (the ones that you teach), you don't know which classes constitute the incorrect choices. They will be some proper subset of the entire Olin University, but we don't know which. The only thing we can say about it is that it has the same proportion of males as females. That would presumably be an assumption.

But there's more! Although you're assuming that proportion for the incorrectly chosen classes, you might be wrong. In fact, it's even plausible. How? Well, suppose you mostly give lectures in the science faculty. Suppose that the science students are 90% male - not 50% male - exactly the same proportion as your own class.
What happens then, of course, is that the presence of females would actually give you no information.

And maybe the situation is even worse than that! Maybe the actual "Downeyian" population contains more than 90% males, but the males have a disproportionately larger distaste for mathematics and programming. Maybe they prefer engineering, or something. In that case, your intuition would have to be entirely flipped around ... the presence of females would be a positive indication that you're actually in the right class.

Or perhaps I've got the wrong end of the stick. But I think that what I'm saying makes sense.

Who would have thought statistics could be so much fun? ;)

Allen Downey (2014-08-05, 12:17):
Ah, now I see the problem! My previous reply was wrong, but the numbers in the article are correct (though explained badly).

As you said, the denominator P(F) should be P(F|H) P(H) + P(F|-H) P(-H), which is 0.14, not 0.5, and that yields P(H|F) = 0.64.

In your first message, you objected to this denominator because you said it assumes that my class makes up 90% of the population of students. I think that's not right -- rather, it takes into account that I am initially 90% sure that I am in the right class. But the term P(F|-H) = 0.5 assumes (as you suggest) that my class is an insignificant part of the student population.

Sorry for my confusion, and thanks for pointing this out.
When I have a chance, I will edit the article to clarify.

Max Power (2014-08-05, 11:43):
But if you take P(H) = 0.9, P(F|H) = 0.1, P(F) = 0.5 and plug it into the formula, you get

    P(H|F) = P(H) * P(F|H) / P(F)
           = 0.9 * 0.1 / 0.5 = 0.18

which is not the answer of 0.64 that you gave in your post.

Allen Downey (2014-08-05, 11:31):
I'm not positive I understand where you see a problem, but I think I agree with you. P(F) is the probability of a female student regardless of H, so it should be the overall fraction of female students at the university, probably close to 0.5. That's what I used in my calculations.

Max Power (2014-08-05, 11:26):
There seems something intuitively wrong with your calculation of P(F). You are saying:

    P(F) = P(F|H) P(H) + P(F|-H) P(-H)
         = 0.1 * 0.9 + 0.5 * 0.1
         = 0.14

and using P(H) = 0.9, P(F|H) = 0.1 to obtain

    P(H|F) = P(H) * P(F|H) / P(F)
           = 0.9 * 0.1 / 0.14 = 0.64

But here's the problem: the university is large, and your class is relatively small. So, given that P(F) is the likelihood of the data independent of H, you would expect that the overall university ratio of females would swamp out any skewing that your classes might introduce.
In other words, I would expect P(F) = 0.5 (approx), not 0.14.

The problem appears to be that you are assuming that the students in your class constitute 90% of the population of students of the university (i.e. P(H)), whereas in fact they are likely to constitute only a minuscule proportion. That's why there's a skewing.

Comments?

João Neto (2014-08-04):
Just an extra note: I'm learning a bit of Stan and I coded the exact same model as in BUGS. The result after 100k iterations was 72.11. So, another confirmation :-) For some mysterious reason BUGS is having problems dealing with this likelihood function.

Larry Featherston (2014-07-25, 13:46):
Great, thanks Allen. I see your odds ratio, but what was your overall pseudo R-squared?

Allen Downey (2014-07-25, 10:56):
Hi Larry,
All of my code and the data are in this repository:

https://github.com/AllenDowney/internet-religion

You should be able to check it out and replicate my results easily, especially if you have a Python environment set up.

The paper, with details about the methodology and the variables I used, is here:

http://arxiv.org/abs/1403.5534

Let me know if you find anything interesting!

Allen

Larry Featherston (2014-07-25, 10:49):
Hi Allen,
I am intrigued by your research on internet usage and religion and have a few questions. In the 2012 General Social Survey dataset, I am familiar with several variables related to internet usage. Some of these variables are binomial and others are interval-scaled, such as the WWWHR variable.

To help me understand your research, can you provide the model specification you used for your analysis? I would like to replicate the results, and would like to see the beta weights and pseudo R-squared values on which you are basing your interpretation. You also indicated that you utilized logistic regression to perform your analysis. Could you provide a little more detail about your statistical methodology? Did you perform a multinomial logistic regression or an ordinal regression?

Thanks so much,

Allen Downey (2014-07-23, 08:10):
Nice!

João Neto (2014-07-23, 07:15):
Well, not 'where' but 'how' :-) I used Mathematica to do the integration and R to find the mode. The details are at the end of the webpage I made.

Cheers,

Allen Downey (2014-07-23, 07:08):
Huh. Well, let me know if you figure it out.
Where did you find the analytic solution?

João Neto (2014-07-19):
I found the analytic solution for p(n|data), and the mode is at 72.18, which confirms your result. My result for the mode with the uniform prior is at 75.9. I really don't know what is causing this difference, but I suppose it is from my BUGS model. Oh well...

Allen Downey (2014-07-18, 11:52):
Yes, good point. Thanks!

Bryan (2014-07-18, 11:27):
I think it should be noted that the assumption that all error probabilities are the same makes a big difference. Suppose I have a program with 100 bugs, 5 of which are easy to find, and two testers who are bad at finding bugs. Tester 1 finds 10 bugs (including all 5 easy ones), tester 2 finds 10 bugs (including all 5 easy ones), and the intersection is the 5 easy ones. The Lincoln index would estimate 20 bugs, and your method would find something similar (I'm assuming; I didn't implement the code). In general, the more the probabilities differ from error to error, the more these methods will underestimate the total number of errors. They do work quite well, though, when error probabilities are uniform.
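Bryan's arithmetic is easy to check. A minimal sketch of the Lincoln index in his scenario (the helper name `lincoln_index` is mine, not from the thread):

```python
def lincoln_index(n1, n2, m):
    """Estimate the total number of errors from two independent reviewers:
    n1 and n2 are the counts each reviewer found, m is the overlap."""
    return n1 * n2 / m

# Bryan's scenario: 100 actual bugs, 5 of them easy. Each tester finds
# 10 bugs including all 5 easy ones, and the overlap is exactly those 5.
estimate = lincoln_index(10, 10, 5)
print(estimate)  # 20.0 -- a severe underestimate of the true 100
```

The underestimate arises because the overlap is inflated by the easy bugs: the index implicitly assumes every bug is equally findable, so a large overlap is read as evidence that the testers have covered most of the program.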