# I'm confused about the interpretation of P value in hypothesis testing

I'm confused about the interpretation of the p-value in hypothesis testing. I know that we set the significance level at 0.05, the threshold for the test, so that its Type I error rate is capped at 5%.
Since we compare the p-value to the significance level, does that mean the p-value is the probability of making a Type I error based on the sample?
Maeve Holloway
Not quite. The p-value is calculated from the data we actually observed: it measures how "unusual" the data are, assuming the null hypothesis is true (which usually corresponds to "nothing unusual is happening", "the effect we're looking for isn't present", or something similar).
The threshold we set, often written as α, at which we decide the p-value is sufficiently "weird" to count as evidence against the null hypothesis, is then our probability of a Type I error: it is the probability that, in a universe where the null hypothesis is true, we get data so extreme that we conclude it is false.
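To make "how unusual the data is, assuming the null" concrete, here is a minimal sketch in plain Python (the coin-flip scenario and the numbers 60 and 100 are made up for illustration): an exact two-sided p-value for observing 60 heads in 100 flips of a supposedly fair coin, computed by summing the probability of every outcome at least as unlikely as the one observed.

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k heads in n flips of a coin with P(heads)=p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def p_value(observed, n=100):
    # Two-sided exact p-value under H0 (fair coin): total probability,
    # under the null, of every outcome no more likely than the observed one.
    obs_prob = binom_pmf(observed, n)
    return sum(binom_pmf(k, n) for k in range(n + 1)
               if binom_pmf(k, n) <= obs_prob)

# 60 heads in 100 flips: unusual enough to reject at alpha = 0.05?
print(p_value(60))
```

Note that the p-value here is a property of the observed data (60 heads) computed under the null; it is not itself the Type I error rate, which is fixed in advance by the choice of α.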
mars6svhym
A $P$-value is the probability that we observe a given result (or something more extreme) assuming that the null hypothesis is true. If we choose to reject the null hypothesis whenever this probability is less than a certain pre-determined significance level (say, $\alpha = 0.05$), then we will end up rejecting the null hypothesis, when it is in fact true, for $5\%$ of the samples that we take. Hence, the probability of committing a Type I error is the chosen significance level, $\alpha$.
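The claim that rejecting whenever $p < \alpha$ yields a Type I error rate of exactly $\alpha$ can be checked by simulation. Below is a small sketch in plain Python (the sample size, trial count, and use of a z-test with known standard deviation are all illustrative assumptions, not anything from the thread): we repeatedly draw samples from a world where the null hypothesis is true, compute a two-sided p-value each time, and count how often we wrongly reject.

```python
import random
from math import erf, sqrt

random.seed(0)  # for reproducibility

def z_test_p(sample):
    """Two-sided p-value for H0: mean = 0, with known standard deviation 1."""
    n = len(sample)
    z = sum(sample) / sqrt(n)          # standardized sample mean
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # standard normal CDF at |z|
    return 2 * (1 - phi)

alpha = 0.05
trials = 20_000
# The null hypothesis really is true in every trial: data ~ Normal(0, 1).
rejections = sum(
    z_test_p([random.gauss(0, 1) for _ in range(30)]) < alpha
    for _ in range(trials)
)
print(rejections / trials)  # empirical Type I error rate, close to alpha
```

The printed fraction hovers around 0.05: the p-value varies from sample to sample, but the long-run rate of false rejections is pinned to the chosen $\alpha$.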