Statistical Power

Practical significance and effect size measures

April 20, 2022

If statistical significance is found (e.g. p < .001), the next logical step should be to calculate the practical significance, i.e. the effect size (e.g. the standardised mean difference between two groups). Effect sizes are a family of statistics that measure the magnitude of differences, treatment effects, and strength of associations. Unlike statistical significance tests, effect size indices are not inflated by large sample sizes. As effect size measures are standardised (units of measurement removed), they are easy to evaluate and easy to [READ MORE]
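As a minimal sketch of the standardised mean difference mentioned above, the Python snippet below computes Cohen's d for two independent groups from the pooled standard deviation. The function name and the sample data are illustrative only, not taken from the post:

```python
import numpy as np

def cohens_d(group1, group2):
    """Cohen's d: standardised mean difference between two
    independent groups, using the pooled standard deviation."""
    g1 = np.asarray(group1, dtype=float)
    g2 = np.asarray(group2, dtype=float)
    n1, n2 = len(g1), len(g2)
    # Pool the variances, weighting each by its degrees of freedom.
    pooled_var = ((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2)
    return (g1.mean() - g2.mean()) / np.sqrt(pooled_var)

# Illustrative data only
treated = [5.1, 6.2, 5.8, 6.5, 5.9]
control = [4.8, 5.0, 5.3, 4.9, 5.2]
print(f"Cohen's d = {cohens_d(treated, control):.2f}")
```

By Cohen's conventions, values around 0.2, 0.5 and 0.8 are commonly read as small, medium and large effects, respectively.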

Statistical Power Analysis

April 27, 2020

(Statistical) power analysis deals with the ability of a statistical test to detect an effect of a certain size if the effect really exists. In other words, power is the probability of correctly rejecting the null hypothesis when it should be rejected. So while statistical significance deals with Type I (α) errors (false positives), power analysis deals with Type II (β) errors (false negatives), which means power is 1 − β. Cohen (1988) recommends that research studies be designed with an alpha level of .05, and if we use Cohen's rule of .2 for β, then 1 − β = 0.8 (an 80% chance [READ MORE]
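To make the α = .05 / power = .80 convention concrete, here is a sketch of an a-priori sample size calculation for a two-sample t-test. It assumes the statsmodels library (the post does not name any tooling) and a medium effect size of d = 0.5 chosen purely for illustration:

```python
from statsmodels.stats.power import TTestIndPower

# A-priori power analysis for an independent-samples t-test:
# solve for n per group given effect size, alpha, and power.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,   # medium effect (illustrative)
                                   alpha=0.05,        # Type I error rate
                                   power=0.80,        # 1 - beta
                                   alternative='two-sided')
print(f"Required sample size per group: {n_per_group:.1f}")  # about 64
```

The usual pattern is to solve for any one of effect size, sample size, alpha or power, given the other three.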

Measuring effect size and statistical power analysis

March 3, 2020

Effect size measures are crucial for establishing practical significance, in addition to statistical significance. Please read the post “Tests of Significance are dangerous and can be very misleading” to better appreciate the importance of practical significance. Normally we only consider differences and associations from a statistical significance point of view and report the level (e.g. p < .001) at which we reject the null hypothesis (H0) and accept that there is a difference or association (note that we can never “accept the alternative hypothesis (H1)” – see the [READ MORE]
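To illustrate why practical significance matters alongside a p-value, the simulation below (synthetic data, not from the post; it assumes NumPy and SciPy) draws two very large samples whose true standardised difference is trivial (d ≈ 0.05). The t-test still rejects H0 at conventional levels, while the effect size makes clear the difference is negligible:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two huge samples whose true standardised difference is trivial (d ~= 0.05).
a = rng.normal(loc=0.00, scale=1.0, size=20_000)
b = rng.normal(loc=0.05, scale=1.0, size=20_000)

t_stat, p_value = stats.ttest_ind(a, b)

# Cohen's d with equal group sizes: pooled variance is the mean variance.
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
d = (b.mean() - a.mean()) / pooled_sd

# Expect a very small p-value despite a negligible effect size.
print(f"p = {p_value:.3g}, Cohen's d = {d:.3f}")
```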