In many of the statistical procedures we carry out, the statistical significance of findings forms the basis of statements, conclusions, and important decisions. While the importance of statistical significance (as compared with practical significance) should not be overstated, it is important to understand how statistical significance relates to hypothesis testing.
A hypothesis statement is designed either to be disproven or to fail to be disproven. (Note that a hypothesis can be disproven, or can fail to be disproven, but it can never be proven true.)
Hypotheses concern either differences (e.g. t-tests for differences in mean values) or relationships (e.g. correlations, which test whether the slope of a fitted line differs from zero) – although these categories are not mutually exclusive, as they are closely related. We select a specific test procedure depending on the number of variables, the characteristics of our data, and whether we are comparing two or more means, standard deviations, or variances.
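As a minimal sketch of the two kinds of test mentioned above, the snippet below computes a pooled-variance two-sample t statistic (a difference in means) and a Pearson correlation coefficient (a relationship, equivalent to asking whether the slope of a fitted line differs from zero). The data values are hypothetical, chosen only for illustration:

```python
import statistics as st

# Hypothetical example data (illustrative values only).
group_a = [5.1, 4.8, 5.5, 5.0, 4.9, 5.3]
group_b = [4.2, 4.6, 4.4, 4.1, 4.7, 4.3]

# Two-sample t statistic (pooled variance) for a difference in means.
na, nb = len(group_a), len(group_b)
mean_a, mean_b = st.mean(group_a), st.mean(group_b)
pooled_var = ((na - 1) * st.variance(group_a)
              + (nb - 1) * st.variance(group_b)) / (na + nb - 2)
t_stat = (mean_a - mean_b) / (pooled_var * (1 / na + 1 / nb)) ** 0.5

# Pearson correlation: testing a relationship amounts to asking whether
# the slope of the fitted line (and hence r) differs from zero.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9]
mx, my = st.mean(x), st.mean(y)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
sxx = sum((xi - mx) ** 2 for xi in x)
syy = sum((yi - my) ** 2 for yi in y)
r = sxy / (sxx * syy) ** 0.5

print(round(t_stat, 2), round(r, 3))
```

In practice a library routine (e.g. from SciPy) would also return a p-value; the hand computation here is only to show what the test statistics measure.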
- H0: The null hypothesis refers to no differences, no changes, and no relationships between the independent and dependent variables (alternatively expressed as “invalid”, “void” and “amounts to nothing”).
- Ha: The alternative hypothesis refers to the effect of the independent variable on the dependent variable, which results in differences, changes, or relationships.
- Type I: We reject a null hypothesis that is actually true, recognising differences or relationships that do not exist. This is also known as a false positive: wrongly reporting a condition that does not exist.
- Type II: We fail to reject a null hypothesis that is actually false, so we do not recognise differences or relationships that do exist. This is known as a false negative: failing to report a condition that actually exists.
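The Type I error rate can be made concrete with a small simulation, sketched below under simple assumptions: both groups are drawn from the same distribution, so the null hypothesis is true and every rejection is a false positive. A permutation test (one of many possible test choices, used here because it needs only the standard library) stands in for the significance test; over many simulated studies the rejection rate should land close to the chosen alpha level:

```python
import random

random.seed(1)

ALPHA = 0.05          # tolerated Type I error rate
N_EXPERIMENTS = 1000  # simulated studies, all with a true null
N_PERM = 200          # label permutations per test

def perm_p_value(a, b, n_perm=N_PERM):
    """Two-sided permutation p-value for a difference in group means."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)  # reassign group labels at random
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_perm

# Both groups come from the same distribution, so H0 is true and
# every rejection is a Type I error (false positive).
false_positives = sum(
    perm_p_value([random.gauss(0, 1) for _ in range(10)],
                 [random.gauss(0, 1) for _ in range(10)]) < ALPHA
    for _ in range(N_EXPERIMENTS)
)
type_i_rate = false_positives / N_EXPERIMENTS  # should be close to ALPHA
```

A Type II error rate could be estimated the same way by drawing the two groups from distributions with genuinely different means and counting the failures to reject.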
Measuring effect size and statistical power analysis
Practical significance and effect size measures
Statistical power analysis