A Bayesian perspective on interpreting statistical significance

This site illustrates why a P-value cannot be interpreted in a vacuum. Suppose a hypothesis test results in P < 0.05. Even if the null were true, we would still see statistically significant results by chance alone about 5% of the time. But since that is relatively infrequent, when we see a small P-value we are tempted to conclude that the null is false and that our P-value is evidence against it.

Unfortunately, that reasoning can go badly wrong. The computations on the linked site show that the prior probability of our hypotheses drastically changes the “evidence” that our P-value provides. Keep in mind that what we want to know is \mathrm{Pr}(H_{0} \mid P < 0.05), but our significance test only tells us \mathrm{Pr}(P < 0.05 \mid H_{0}). (I’ve used \mathrm{Pr} to indicate probability so it doesn’t get confused with the letter P, which we’re reserving for our P-value.)
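To make the distinction concrete, here is a short sketch of how Bayes’ theorem connects the two quantities. The prior probabilities and the test’s power used below are illustrative assumptions of mine, not numbers from the linked site:

```python
# Sketch: why Pr(H0 | P < 0.05) is not the same as Pr(P < 0.05 | H0).
# By definition of the test, Pr(P < alpha | H0) = alpha; for Pr(P < alpha | H1)
# we need the test's power, assumed here to be 0.8 for illustration.

def prob_null_given_significant(prior_h0, alpha=0.05, power=0.8):
    """Pr(H0 | P < alpha) computed via Bayes' theorem."""
    # Total probability of seeing a significant result:
    p_sig = alpha * prior_h0 + power * (1 - prior_h0)
    return alpha * prior_h0 / p_sig

# The same P < 0.05 result carries very different weight under different priors:
for prior in (0.5, 0.9, 0.99):
    posterior = prob_null_given_significant(prior)
    print(f"prior Pr(H0) = {prior:.2f} -> Pr(H0 | P < 0.05) = {posterior:.3f}")
```

With a 50/50 prior, a significant result leaves the null quite unlikely, but when the null starts out at 99% probable (as with, say, a test of a far-fetched hypothesis), the null remains the more probable explanation even after observing P < 0.05.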
