# A Bayesian perspective on interpreting statistical significance

This site illustrates why a P-value cannot be interpreted in a vacuum. Suppose a hypothesis test results in $P < 0.05$. Even if the null were true, we would still see statistically significant results by chance alone about 5% of the time. Since that is relatively infrequent, when we do see a small P-value we are tempted to conclude that the null is false and that our P-value is evidence against it.

Unfortunately, that reasoning can go badly wrong. The computations on the linked site show that the prior probability of our hypotheses drastically changes the “evidence” that our P-value provides. Keep in mind that what we want to know is $\mathrm{Pr}(H_{0} \mid P < 0.05)$. Unfortunately, a significance test only tells us $\mathrm{Pr}(P < 0.05 \mid H_{0})$. (I’ve used $\mathrm{Pr}$ to indicate probability so it doesn’t get confused with the letter P, which we’re reserving for our P-value.)
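To see how the prior drives the answer, we can apply Bayes' theorem directly: $\mathrm{Pr}(H_{0} \mid P < 0.05) = \frac{\mathrm{Pr}(P < 0.05 \mid H_{0})\,\mathrm{Pr}(H_{0})}{\mathrm{Pr}(P < 0.05 \mid H_{0})\,\mathrm{Pr}(H_{0}) + \mathrm{Pr}(P < 0.05 \mid H_{1})\,\mathrm{Pr}(H_{1})}$. A minimal sketch of this calculation, where the significance level of 0.05 and a power of 0.8 are assumed values for illustration (not taken from the linked site):

```python
def pr_null_given_sig(prior_null, alpha=0.05, power=0.8):
    """Posterior Pr(H0 | P < alpha) via Bayes' theorem.

    prior_null -- Pr(H0), our prior belief that the null is true
    alpha      -- Pr(P < alpha | H0), the significance level
    power      -- Pr(P < alpha | H1), assumed test power (illustrative)
    """
    numerator = alpha * prior_null
    denominator = alpha * prior_null + power * (1 - prior_null)
    return numerator / denominator

# With even odds on the null, a significant result is strong evidence:
print(pr_null_given_sig(0.5))   # roughly 0.06

# But if the null was very plausible to begin with, Pr(H0) = 0.9,
# the same significant result leaves substantial doubt:
print(pr_null_given_sig(0.9))   # 0.36
```

The point of the sketch is that the same $P < 0.05$ outcome yields very different posterior probabilities for $H_{0}$ depending only on the prior, which is exactly why the P-value cannot be interpreted in a vacuum.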