
The P value, or calculated probability, is the probability of finding the observed, or more extreme, results when the null hypothesis (H0) of a study question is true; the definition of “extreme” depends on how the hypothesis is being tested. P is also described in terms of rejecting H0 when it is actually true, but it is not a direct probability of this state.
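As a concrete illustration, here is a minimal sketch in Python using SciPy; the blood-pressure readings and group names are invented for the example. It computes a two-sided P value for the difference between two group means with an independent-samples t-test.

```python
# Minimal sketch: two-sample t-test on hypothetical blood-pressure data.
from scipy import stats

group_a = [118, 122, 125, 130, 121, 127, 119, 124]  # invented readings
group_b = [128, 131, 126, 135, 129, 133, 127, 132]

# Under H0 (no difference in means), p_value is the probability of a
# t statistic at least this extreme.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.3f}, two-sided P = {p_value:.4f}")
```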

The null hypothesis is usually a “no difference” hypothesis, e.g. no difference between blood pressures in group A and group B. Clearly define a null hypothesis for each study question before the study begins.

The only situation where you should use a one-sided P value is when a large change in an unexpected direction would have absolutely no relevance to your study. Such a situation is unusual; if in doubt, use a two-sided P value.
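For illustration, SciPy’s ttest_ind exposes this choice through its alternative parameter (SciPy 1.6 or later); the data below are invented for the example:

```python
from scipy import stats

group_a = [118, 122, 125, 130, 121, 127, 119, 124]  # hypothetical data
group_b = [128, 131, 126, 135, 129, 133, 127, 132]

# Two-sided: extreme differences in either direction count against H0.
_, p_two = stats.ttest_ind(group_a, group_b, alternative="two-sided")

# One-sided: only differences in one pre-specified direction count.
_, p_one = stats.ttest_ind(group_a, group_b, alternative="less")

print(f"two-sided P = {p_two:.4f}, one-sided P = {p_one:.4f}")
```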

Use the term significance level (alpha) to indicate a probability chosen in advance, and the term P value to indicate a probability calculated after a given study.

The alternative hypothesis (H1) is the opposite of the null hypothesis; in other words, it is usually the hypothesis you want to investigate. For example, the study question asks “is there a significant (not due to chance) difference in blood pressure between groups A and B if we give group A the test drug and group B a sugar pill?”, and the alternative hypothesis is “there is a difference in blood pressure between groups A and B if we give group A the test drug and group B a sugar pill”.

If your P value is lower than the chosen significance level, then you reject the null hypothesis, i.e. you accept that your sample gives reasonable evidence in support of the alternative hypothesis. This does NOT in itself imply a “meaningful” or “important” difference; it is your responsibility to consider the real-world relevance of your result.
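In code, the decision rule is a simple comparison; the numbers below are hypothetical:

```python
alpha = 0.05     # significance level, chosen before the study
p_value = 0.031  # hypothetical P value from the study's test

if p_value < alpha:
    print("Reject H0: the sample supports the alternative hypothesis.")
else:
    print("Fail to reject H0: insufficient evidence against it.")
```

Note that the comparison only settles the statistical decision; judging real-world relevance remains a separate step.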

The choice of the significance level at which you reject H0 is arbitrary. Conventionally, the 5% (less than 1 chance in 20 of being wrong), 1% and 0.1% (P < 0.05, 0.01 and 0.001) levels have been used. Such numbers can give a false sense of security.

If we lived in an ideal world, we would be able to define a “perfectly” random sample, the most appropriate test and a definitive conclusion. We just can’t. What we can do is try to optimize all stages of our research to minimize sources of uncertainty. When presenting P values, some groups find it useful to use the asterisk rating system as well as quoting the P value:

P < 0.05 *

P < 0.01 **

P < 0.001 ***

Most authors refer to P < 0.05 as statistically significant and P < 0.001 as statistically highly significant (less than one chance in a thousand of being wrong).

The asterisk system avoids the word “significant”. Note, however, that many statisticians do not like the asterisk system when it is used without showing P values. As a rule of thumb, if you can quote an exact P value, then do so. You might also quote the exact P value in the text narrative and use asterisks in tables elsewhere in a report.
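If you do use the asterisk system, the conventional mapping is easy to make explicit; p_stars below is a hypothetical helper name invented for this sketch:

```python
def p_stars(p: float) -> str:
    """Map a P value to the conventional asterisk rating."""
    if p < 0.001:
        return "***"
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return ""

for p in (0.0004, 0.008, 0.03, 0.2):  # hypothetical P values
    print(f"P = {p} {p_stars(p)}")
```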

A word about error at this point. A type I error is the false rejection of the null hypothesis and a type II error is the false acceptance of the null hypothesis. As a memory aid: think of our cynical society, which rejects before it accepts.

The significance level (alpha) is the probability of a type I error. The power of a test is one minus the probability of a type II error (beta). Power should be maximized when choosing statistical methods. If you want to estimate sample sizes, you need to understand all of the terms mentioned here.
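As a sketch of how these terms interact, the snippet below uses statsmodels (assuming it is installed) to solve for the per-group sample size once effect size, alpha and desired power are fixed; the effect size of 0.5 is an assumption made purely for the example:

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the per-group sample size of an independent-samples t-test.
n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,  # assumed standardized difference (Cohen's d)
    alpha=0.05,       # probability of a type I error
    power=0.8,        # 1 - beta, probability of avoiding a type II error
)
print(f"Required sample size per group: {n_per_group:.1f}")
```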

This table shows the relationship between power and error in the hypothesis test:

                  DECISION
TRUTH             Accept H0                  Reject H0
H0 is true        correct decision           type I error
                  P = 1 - alpha              P = alpha (significance)
H0 is false       type II error              correct decision
                  P = beta                   P = 1 - beta (power)

H0 = null hypothesis
P = probability
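The first row of the table can be checked by simulation: when H0 is true, tests at the 5% level should falsely reject about 5% of the time. A small sketch with NumPy and SciPy, using an invented setup:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, trials, rejections = 0.05, 10_000, 0

for _ in range(trials):
    a = rng.normal(0, 1, 30)  # both samples drawn from the same
    b = rng.normal(0, 1, 30)  # population, so H0 is true
    if stats.ttest_ind(a, b).pvalue < alpha:
        rejections += 1

# The observed type I error rate should be close to alpha.
print(f"Observed type I error rate: {rejections / trials:.3f}")
```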

Please refer to one of the general texts listed in the reference section if you are interested in more details on probability and sampling theory at this point.

You need to understand confidence intervals if you intend to quote P values in reports and papers. Statistical referees of scientific journals expect authors to quote confidence intervals with greater prominence than P values.
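For reference, here is a minimal sketch of a 95% confidence interval for a mean, using SciPy on invented data:

```python
import numpy as np
from scipy import stats

readings = np.array([118, 122, 125, 130, 121, 127, 119, 124])  # hypothetical
mean = readings.mean()
sem = stats.sem(readings)  # standard error of the mean

# 95% CI for the mean, based on the t distribution.
ci_low, ci_high = stats.t.interval(0.95, len(readings) - 1,
                                   loc=mean, scale=sem)
print(f"mean = {mean:.1f}, 95% CI = ({ci_low:.1f}, {ci_high:.1f})")
```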
