Data Assumptions: Univariate Normality


BRIEF DESCRIPTION: 
Normality is one of the most basic data assumptions, and much has been written about univariate, bivariate, and multivariate normality. An excellent reference is Burdenski (2000), Evaluating Univariate, Bivariate, and Multivariate Normality Using Graphical and Statistical Procedures.
A few noteworthy comments about normality:
1. Normality can have different meanings in different contexts, e.g. normality of the sampling distribution versus normality of the model's error distribution (as in regression and the GLM). Be very careful about which type of normality applies to your procedure.
 
2. By definition, a dichotomy is not normally distributed. Nevertheless, it is generally acceptable to use dichotomies in procedures requiring a normal distribution as long as the split is no more extreme than about 90:10 (or 10:90). It is best not to use dichotomies as dependent variables in procedures such as OLS regression, which assumes a normally distributed dependent variable.

3. For procedures with categorical independent variables (e.g. factors), the dependent variable should be normally distributed within each category of the independent variable(s).
 
4. In samples larger than about 30, the sampling distribution of the mean tends to be normal regardless of the shape of the population distribution (the Central Limit Theorem), as the sketch after this list illustrates.
 
5. In all but univariate procedures, normality most often refers to the distribution of the errors/residuals rather than to the raw sample (or population) distribution.
 
6. Some procedures (such as the F-test in ANOVA) are quite robust to moderate departures from multivariate normality, provided that kurtosis is not extreme (roughly -1 to +1), the sample is not too small (e.g. n < 20), and group sizes are fairly equal.
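To make point 4 concrete, here is a minimal simulation sketch (the exponential population and all parameter values are illustrative assumptions, not from the source): even when the population is heavily skewed, the means of repeated samples of n = 30 pile up into a roughly normal distribution.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Draw 5,000 samples of n = 30 from a heavily right-skewed (exponential)
    # population and record each sample mean.
    sample_means = rng.exponential(scale=1.0, size=(5_000, 30)).mean(axis=1)

    # The sampling distribution of the mean is far less skewed than the population.
    print("Population skewness (theoretical): 2.00")
    print("Sample-mean skewness: %.2f" % stats.skew(sample_means))
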
Who cares
The assumption of normality is one of the most fundamental assumptions in statistical analysis, as it underlies all procedures based on t- and F-tests. Fortunately, some tests, such as t-tests and ANOVA, are quite robust to violations of this assumption. While univariate statistical tests assume univariate normality, multivariate tests assume both univariate and multivariate normality.
 
Why is it important
The t- and F-test statistics will be invalid if the assumption of normality is severely violated. If the data do not violate this assumption, parametric tests can be employed; if they do, non-parametric tests are more suitable (a sketch of this decision follows). The assumption is more critical for inferential statistics than for descriptive statistics.
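As a rough sketch of this decision (the data, group names, and the .05 cut-off are illustrative assumptions, not from the source), a normality test per group can guide the choice between a parametric and a non-parametric test:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    group_a = rng.normal(loc=5.0, scale=1.0, size=40)
    group_b = rng.lognormal(mean=1.5, sigma=0.6, size=40)   # skewed group

    # Shapiro-Wilk per group; p > .05 means no significant departure detected
    normal_a = stats.shapiro(group_a).pvalue > 0.05
    normal_b = stats.shapiro(group_b).pvalue > 0.05

    if normal_a and normal_b:
        result = stats.ttest_ind(group_a, group_b)       # parametric
    else:
        result = stats.mannwhitneyu(group_a, group_b)    # non-parametric
    print(result)
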

How to test for Univariate Normality
There are several ways of detecting a deviation from univariate normality; some are graphical, others statistical. Graphical plots of the data are often a quick and easy way of spotting serious violations of normality. A code sketch applying several of these checks follows the list.
    1. Histogram.
    2. Stem-and-leaf-plots.
    3. Boxplots should have fairly equal-length whiskers, with the median line through the middle of the box.
    4. Normal probability plots (P-P/Q-Q), as well as detrended P-P/Q-Q plots. Note that univariate plots use the raw data, while bivariate (e.g. regression) and multivariate analyses use plots of the errors/residuals.
    5. The Kolmogorov-Smirnov test and/or the Shapiro-Wilk test should be non-significant (i.e. p > .05).
    6. Skewness should be close to zero; within ±1 is acceptable, but within ±0.5 is better.
    7. Kurtosis should be close to zero; within ±2 (or even ±3) is acceptable, but within ±0.5 is better.
    8. The mean, median, and mode should all be approximately equal, which indicates perfect normality.
Note that these normality assessments should be done per factor group where applicable: either add the factor as a grouping variable or use the “split file” option in your statistics program.
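Here is the sketch promised above, applying several of these checks to a single variable (the variable itself and all parameter values are illustrative assumptions; numbers in parentheses refer to the points in the list):

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import stats

    rng = np.random.default_rng(7)
    x = rng.normal(loc=100, scale=15, size=200)  # stand-in for the real variable

    # Graphical checks: histogram (1) and normal Q-Q plot (4)
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
    ax1.hist(x, bins=20)
    stats.probplot(x, dist="norm", plot=ax2)  # points should hug the line
    plt.show()

    # Statistical checks (5): K-S against a normal with the sample's mean and
    # SD, plus Shapiro-Wilk; both should be non-significant (p > .05)
    ks = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
    sw = stats.shapiro(x)
    print("K-S p = %.3f, Shapiro-Wilk p = %.3f" % (ks.pvalue, sw.pvalue))

    # Shape measures: skewness (6) and excess kurtosis (7) should be near zero
    print("Skewness = %.2f, Kurtosis = %.2f" % (stats.skew(x), stats.kurtosis(x)))

    # Central tendency (8): mean and median should be close under normality
    print("Mean = %.2f, Median = %.2f" % (x.mean(), np.median(x)))

Remember to run these checks per factor group where applicable, as noted above.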

While the Kolmogorov-Smirnov (K-S) goodness-of-fit test is a common test of normality, note that the larger the sample size (e.g. n > 100), the more likely the K-S test is to be significant, flagging even a trivial deviation from normality. With large sample sizes, it is recommended to rely more on graphical detection of univariate normality, as the simulation below illustrates.
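A minimal simulation sketch of this effect (the mildly skewed gamma distribution and the sample sizes are illustrative assumptions): the shape of the data never changes, yet the K-S p-value typically shrinks as n grows.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    for n in (50, 500, 5_000):
        x = rng.gamma(shape=10.0, size=n)   # mildly right-skewed, near-normal
        # Fit the mean and SD from the data, as is common in practice
        p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))).pvalue
        print("n = %5d   K-S p = %.4f" % (n, p))
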
 
How to fix the problem
It is important to detect and remedy any significant violations of normality, as non-normality often causes problems with a range of other data assumptions. Just as larger samples increase statistical power by reducing sampling error, they also reduce the severity of non-normality. A first remedy, therefore, is to determine whether a small sample is the leading cause of the non-normality and, if so, to increase the sample size. Generally, samples smaller than 50 (and especially smaller than 30) exacerbate the effects of non-normality, while those effects diminish in samples over 200. A second, though potentially riskier, remedy is data transformation: depending on the type and severity of the skewness and kurtosis, this includes applying square-root, logarithmic, or inverse transformations, among others, as sketched below.
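A minimal sketch of the transformation route (the right-skewed lognormal data are an illustrative assumption): apply candidate transformations and re-check skewness after each.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    x = rng.lognormal(mean=2.0, sigma=0.7, size=300)   # positive, right-skewed

    candidates = {
        "raw":     x,
        "sqrt":    np.sqrt(x),   # for mild positive skew
        "log":     np.log(x),    # for moderate positive skew
        "inverse": 1.0 / x,      # for severe positive skew (reverses rank order!)
    }
    for name, values in candidates.items():
        print("%-8s skewness = %6.2f" % (name, stats.skew(values)))

Whichever transformation is chosen, re-verify the transformed variable against the checks in the previous section, and remember that results must then be interpreted on the transformed scale.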
_____________________________________________________

Burdenski, T. (2000). Evaluating Univariate, Bivariate, and Multivariate Normality Using Graphical and Statistical Procedures. Multiple Linear Regression Viewpoints, 26(2).
_____________________________________________________