# One-way (Independent) ANOVA

**BRIEF DESCRIPTION**

The One-way ANOVA is an extension of the two-independent-sample t-test: it compares the observed means on the dependent variable among **more than two** groups as defined by the independent variable. For example, is the mean customer satisfaction score (the dependent variable) significantly different among three customer groups: adult men, adult women, and children (the independent variable)?

In addition to expressing group differences on the dependent variable, we can also express the findings in terms of relationship or association, e.g. “Age (independent variable) has an effect on satisfaction scores (dependent variable), so age is related to satisfaction, and therefore we can predict satisfaction from age”. The One-way ANOVA is a parametric procedure.

**SIMILAR STATISTICAL PROCEDURES:**

- **Non-parametric counterparts** of the One-way ANOVA: Kruskal-Wallis test, Median test, Jonckheere-Terpstra test, Chi-Square (χ²) test of independence
- **Two-independent sample t-test** (limited to a two-group comparison)
- **Factorial / two-way ANOVA** (more than one independent variable)
- **One-way Repeated Measures ANOVA**, and its non-parametric counterparts: Friedman test, Kendall’s W, Cochran’s Q (repeated, paired, or dependent measures)
- **ANCOVA** (the introduction of one or more extraneous variables as controls, referred to as covariates (continuous variables) or blocking factors (categorical variables))
- **MANOVA / MANCOVA** (the multivariate versions of ANOVA and ANCOVA, i.e. two or more dependent variables)

**CHARACTERISTICS OF THE VARIABLES**

- **Dependent variable**: continuous scaled data
- **Independent variable**: categorical data with more than two groups or levels (referred to as *factors*)

**DATA ASSUMPTIONS**

- Population is normally distributed although it suffices if the sample data does not significantly deviate from a normal distribution
- Independence of observations and groups
- Random sampling from a defined population
- No outliers
- Homogeneity of variance: the variances of the different groups should be roughly equal (assessed with Levene’s test). If Levene’s test is significant (e.g. p < .05), we reject the null hypothesis and conclude that the variances differ significantly between the groups, i.e. the assumption has been violated. Note that the ANOVA is relatively robust to violations of this assumption if group sizes are fairly equal. If the assumption has been violated, report the results for “equal variances not assumed”. While there are no good heuristics for how much inequality is tolerable, unequal group sizes become more serious in factorial ANOVA, where the inequality of sample sizes is confounded across the increasing number of sub-groups.
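The handout describes the SPSS procedure, but the homogeneity-of-variance check can be sketched in Python with SciPy’s Levene test. The three group samples below are made-up illustration values, not data from the document:

```python
# Sketch of a homogeneity-of-variance check using SciPy's Levene test.
# The group samples are invented illustration data.
from scipy import stats

adult_men = [4.1, 3.8, 4.5, 4.0, 3.9]
adult_women = [4.6, 4.2, 4.8, 4.4, 4.7]
children = [3.2, 3.9, 3.5, 3.1, 3.6]

stat, p = stats.levene(adult_men, adult_women, children)
if p < .05:
    # Significant Levene's test: variances differ, assumption violated
    print("Assumption violated: report 'equal variances not assumed'")
else:
    print("Homogeneity of variance assumption holds")
```

A non-significant Levene result (p > .05) means the variances do not differ significantly, so the standard ANOVA output can be reported.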

**WHERE TO FIND IN SPSS?**

- ANALYZE / COMPARE MEANS / ONE-WAY ANOVA
- GENERAL LINEAR MODEL / UNIVARIATE

**HOW TO REPORT THE FINDINGS?**

If the F-test is significant (e.g. p < .05), we reject the null hypothesis: the mean scores of the groups differ significantly from each other (a significant effect).

**Reporting example**: “A one-way independent ANOVA was performed examining differences between age groups on their customer satisfaction scores. A significant difference was found among the age groups (this can also be written as “the effect of age on satisfaction scores was significant” or “the ANOVA yielded an *F*-ratio of…”), F(2, 14) = 5.50, p < .05.” The actual p-value can be reported in lieu of “< .05” (e.g. p = .041). NOTE: if p > .05 you do not report the value (e.g. p > .05 or p = .129) but only indicate it as NS (non-significant).

**Explanation of the above example**: F refers to the F-ratio procedure; the 2 in brackets is the degrees of freedom (df) for the effect of the model (*between* groups) and the 14 is the degrees of freedom for the residuals in the model (*within* groups). The 5.50 is the actual F-ratio, and p < .05 indicates that the actual p-value (e.g. .023) is less than our chosen significance level of .05.
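Outside SPSS, the same F-ratio and degrees of freedom can be computed with SciPy’s `f_oneway`. The three age-group samples below are invented illustration data (six observations per group, so df = 2 and 15 rather than the 2 and 14 of the reporting example above):

```python
# Sketch of a one-way independent ANOVA using SciPy.
# The group samples are invented illustration data.
from scipy import stats

teens = [3.0, 3.5, 4.0, 3.2, 3.8, 3.1]
young_adults = [4.2, 4.8, 4.4, 4.9, 4.1, 4.6]
mature = [5.0, 5.5, 5.2, 4.9, 5.4, 5.1]

groups = [teens, young_adults, mature]
f_stat, p_value = stats.f_oneway(*groups)

# Degrees of freedom: between groups = k - 1, within groups = N - k
k = len(groups)
n_total = sum(len(g) for g in groups)
df_between = k - 1
df_within = n_total - k

print(f"F({df_between}, {df_within}) = {f_stat:.2f}, p = {p_value:.4f}")
```

The printed line maps directly onto the reporting format described above: F(df between, df within) = F-ratio, followed by the p-value.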

Note: If you find a significant difference among the groups, conduct the appropriate *post hoc* comparisons and report as: “Post hoc comparisons using the Tukey HSD test indicated that the mean score for the “teens group” (M = 3.4, SD = 1.2) was significantly different from that of the “mature group” (M = 5.2, SD = 0.9). However, the “young adults” group (M = 4.5, SD = 2.2) did not differ significantly from either the “teens group” or the “mature group”.” You may also want to report the confidence levels and intervals, e.g. (M = 4.5, 95% CI [3.90, 5.20]), p = .007.

**FURTHER COMMENTS**
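A Tukey HSD post hoc comparison can be sketched with `scipy.stats.tukey_hsd` (available in SciPy 1.8+). The group samples are the same invented illustration data as above, not the means quoted in the reporting example:

```python
# Sketch of Tukey HSD pairwise post hoc comparisons with SciPy.
# The group samples are invented illustration data.
from scipy import stats

teens = [3.0, 3.5, 4.0, 3.2, 3.8, 3.1]
young_adults = [4.2, 4.8, 4.4, 4.9, 4.1, 4.6]
mature = [5.0, 5.5, 5.2, 4.9, 5.4, 5.1]

result = stats.tukey_hsd(teens, young_adults, mature)

# Prints a table of pairwise mean differences, p-values,
# and 95% confidence intervals for each group pair.
print(result)
```

The `result.pvalue` matrix holds the p-value for every pair of groups; pairs with p < .05 are the ones to single out in the write-up, as in the reporting example above.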

Be careful with statistical significance tests if you have a large sample: even trivial differences can reach significance, so it becomes critical to calculate and report the effect size (e.g. Cohen’s d).
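Cohen’s d for a pairwise comparison can be computed with the pooled standard deviation; a minimal sketch using only the Python standard library, on the same invented illustration data as above:

```python
# Sketch of Cohen's d for two groups, using the pooled standard deviation.
# The group samples are invented illustration data.
from math import sqrt
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d: standardized mean difference between two groups."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    # Pooled standard deviation across both groups
    pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean(group1) - mean(group2)) / pooled_sd

teens = [3.0, 3.5, 4.0, 3.2, 3.8, 3.1]
mature = [5.0, 5.5, 5.2, 4.9, 5.4, 5.1]

d = cohens_d(teens, mature)
print(f"Cohen's d = {d:.2f}")
```

By the usual rule of thumb, |d| around 0.2 is a small effect, 0.5 medium, and 0.8 or more large; the sign simply indicates which group mean is higher.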

_______________________________________________

/zza66
