# Data Assumption: Homogeneity of variance (Univariate Tests)

When comparing groups, their dispersion (variance) on the dependent variable should be roughly equal at each level of the independent (factor or grouping) variable, and their sample sizes should not vary greatly across the groups. In other words, the dependent variable should exhibit similar levels of variance across the groups. Homogeneity of variance is the univariate analogue of the bivariate test of homoscedasticity and of the multivariate assumption of homogeneity of variance-covariance matrices.

**Who cares**

Both the t-test and ANOVA are sensitive to a violation of the assumption of homogeneity of variance. However, when group sample sizes are fairly equal, ANOVA remains robust to small and even moderate departures from homogeneity of variance.
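For the two-group case, a common safeguard is Welch's t-test, which does not assume equal variances. The sketch below (using `scipy.stats.ttest_ind`; the group parameters are illustrative, not from any real data set) compares the pooled-variance t-test with the Welch version on groups with unequal variances and unequal sizes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Illustrative groups with unequal variances and unequal sizes
a = rng.normal(loc=10, scale=1, size=20)
b = rng.normal(loc=10, scale=5, size=60)

# Standard t-test pools the variances (assumes homogeneity)
t_pooled, p_pooled = stats.ttest_ind(a, b, equal_var=True)
# Welch's t-test drops that assumption
t_welch, p_welch = stats.ttest_ind(a, b, equal_var=False)

print(f"pooled p = {p_pooled:.3f}, Welch p = {p_welch:.3f}")
```

With heterogeneous variances and unequal group sizes, the two p-values can diverge noticeably, which is why Welch's version is often recommended as the default.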

It is important because the variance of the dependent variable that is being explained should not be concentrated in only a limited range of the groups. If the variance is unequal across the groups, we refer to the data as "heteroscedastic", which is often caused by an overall skew in the data and can often be blamed on too small a sample size (remember that the larger your sample, the closer the distribution moves toward normality).

A heteroscedastic dependent variable leads to more accurate predictions at certain levels of the independent variable than at others, which affects the standard errors. Note that this assumption applies only to the dependent variable; whether the independent variable is metric or categorical is irrelevant.

Violations of the assumption of homogeneity of variance may distort the shape of the F-distribution (on which ANOVA relies) to such an extent that the critical F-value no longer corresponds to the chosen cut-off, e.g. 5% (p < .05). So even though you report significance at .05, the true level may in fact be only .10 or worse, which leads to serious Type I errors (falsely rejecting the null hypothesis).

Note that with only two groups, a significant difference in variance is not as serious as with multiple groups.

**How to Test**

There are several ways to detect a violation of homogeneity of variance; a detailed description of each is beyond the scope of this post:

- Levene’s test is the most commonly used with a single metric dependent variable. With a multivariate procedure (where we have more than one metric dependent variable, e.g. MANOVA), homoscedasticity involves the variance-covariance matrices, so we need to use Box’s M test. In either case, the Levene’s and Box’s M tests should be non-significant.
- The Welch test could be better when group sample sizes are highly unequal.
- The Brown & Forsythe’s test of homogeneity of variances is also generally more robust than Levene’s test when group sizes are highly unequal and the data are highly skewed. The recommendation is that when Levene’s test is significant (indicating a violation of the assumption of homogeneity of variance), you should run Brown & Forsythe’s test; if this is also significant, accept and report the results of the latter.
- The Bartlett’s test of homogeneity of variance has largely been replaced by the Levene’s test.
- Hartley’s F-max test is favoured by some researchers, while others argue that it is extremely sensitive to violations of normality.
- The GLM procedure provides a “spread versus level” plot. The plot should show no obvious pattern (randomly scattered data points are evidence that the assumption holds).
- Simple box-plots are an easy-to-grasp graphical way of checking homogeneity of variances. Place box-plots of each group side by side; if the width of the boxes does not vary markedly by group, there is no sign of a violation of the assumption. Error-bar plots serve the same purpose.
- Another easy visual check is to compare the mean, variance, and skewness of the groups in your statistics programme’s “explore” or “descriptives” command. The “compare means” function also allows an easy comparison of variance across groups.
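Several of the tests above are available in `scipy.stats`. The sketch below runs Levene's test (mean-centred), the Brown & Forsythe variant (which `scipy` implements as Levene's test centred on the medians), and Bartlett's test on three illustrative groups, one of which has a much larger spread:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
g1 = rng.normal(0, 1, 50)
g2 = rng.normal(0, 1, 50)
g3 = rng.normal(0, 4, 50)  # deliberately much larger spread

# Classic Levene's test: deviations from the group means
stat_lev, p_lev = stats.levene(g1, g2, g3, center='mean')
# Brown & Forsythe variant: deviations from the group medians
# (more robust to skew; this is scipy's default center)
stat_bf, p_bf = stats.levene(g1, g2, g3, center='median')
# Bartlett's test: powerful under normality, sensitive to non-normality
stat_bart, p_bart = stats.bartlett(g1, g2, g3)

for name, p in [("Levene", p_lev), ("Brown-Forsythe", p_bf),
                ("Bartlett", p_bart)]:
    verdict = "violated" if p < .05 else "not rejected"
    print(f"{name}: p = {p:.4f} -> homogeneity {verdict}")
```

Because group 3's standard deviation is four times that of the others, all three tests should flag a violation here; on real data with equal spreads they would come out non-significant.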

Note that while the Levene’s test is the most popular for univariate analysis (and Box’s M test for multivariate), both are less effective with large to very large samples (where they may falsely indicate significance). In addition to the Levene’s test, look at the “spread vs level” plot (described above). With large samples it is recommended to use visuals such as graphics and simple comparisons of descriptives across groups rather than a single significance test such as Levene’s or Box’s M.

**How to fix the problem**

As a violation of the assumption of homogeneity of variance is likely caused by a small sample or by a violation of normality, the fixes are obvious: increase the sample size, and if the data still violate normality, follow the remedies for non-normality, which include data transformations.
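A common pattern behind heteroscedasticity is a right-skewed dependent variable whose spread grows with its mean; a log transformation often stabilises the variance. The sketch below (on simulated lognormal groups, purely for illustration) checks Levene's test before and after the transform:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Right-skewed groups whose spread grows with the mean --
# a typical source of heteroscedasticity
g1 = rng.lognormal(mean=1.0, sigma=0.5, size=80)
g2 = rng.lognormal(mean=2.0, sigma=0.5, size=80)

_, p_raw = stats.levene(g1, g2)                   # raw: unequal variances
_, p_log = stats.levene(np.log(g1), np.log(g2))   # after log transform

print(f"raw data:        p = {p_raw:.4f}")
print(f"log-transformed: p = {p_log:.4f}")
```

On the raw scale the variances differ sharply and Levene's test is significant; after the log transform both groups have the same underlying spread, so the p-value rises and the violation disappears.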

______________________________________

/zza46
