# Describing Differences

### Which test / procedures? How do we decide?

July 2, 2017

With so many statistical procedures available, how do we decide which tests best address our research objectives? (Several posts deal with this topic.) First and foremost, the decision as to which statistical procedures to apply should be made BEFORE the design of the data collection instrument (e.g. the questionnaire), and not AFTER the data have been collected. Plan ahead so that your analyses are focused entirely on addressing your research objectives and NOT dictated by your data. Too many researchers remain guilty of waiting to see the data so they can decide what to [READ MORE]

### Chi-square (χ²) Test of Independence

May 22, 2017

BRIEF DESCRIPTION Whereas the One-sample Chi-square (χ²) goodness-of-fit test compares the sample distribution (observed frequencies) of a single variable with a known, pre-defined distribution (expected frequencies), such as the population, normal, or Poisson distribution, to test the significance of the deviation, the Chi-square (χ²) Test of Independence compares two categorical variables in a cross-tabulation to determine group differences or degree of association (or non-association, i.e. independence). Chi-square (χ²) is a [READ MORE]
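For illustration, a minimal sketch of this test in Python with SciPy; the cross-tabulation counts and the gender/purchase labels below are entirely made up:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 cross-tabulation: gender (rows) by purchase decision (columns)
observed = [[30, 10],   # e.g. males: purchased / did not purchase
            [20, 40]]   # e.g. females: purchased / did not purchase

# chi2_contingency computes expected frequencies from the margins and
# tests whether the two variables are independent
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}, dof = {dof}")
```

A small p-value (e.g. p < .05) would lead us to reject independence and conclude the two categorical variables are associated.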

### One-Sample Chi-square (χ²) goodness-of-fit test

January 9, 2016

BRIEF DESCRIPTION The Chi-square (χ²) goodness-of-fit test is a univariate measure for categorically scaled data, such as dichotomous, nominal, or ordinal data. It tests whether the variable’s observed frequencies differ significantly from a set of expected frequencies. For example, is our observed sample’s age distribution of 20% / 40% / 40% significantly different from what we expect (e.g. the population age distribution) of 30% / 30% / 40%? Chi-square (χ²) is a non-parametric procedure. SIMILAR STATISTICAL PROCEDURES: Binomial goodness-of-fit (for binary data) [READ MORE]
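The age-distribution example above can be sketched in Python with SciPy, assuming a hypothetical sample of 100 respondents so the percentages become counts:

```python
from scipy.stats import chisquare

# Hypothetical sample of 100 respondents across three age bands
observed = [20, 40, 40]   # observed counts (20%, 40%, 40%)
expected = [30, 30, 40]   # expected counts from the population (30%, 30%, 40%)

# Tests whether the observed counts deviate significantly from the expected ones
stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {stat:.3f}, p = {p:.4f}")
```

Note that `chisquare` expects counts (not percentages), and the observed and expected totals must match.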

### Analysis of Covariance (ANCOVA)

May 13, 2015

BRIEF DESCRIPTION The Analysis of Covariance (ANCOVA) follows the same procedures as the ANOVA, except for the addition of an exogenous variable (referred to as a covariate) as an independent variable. The ANCOVA procedure is quite straightforward: it uses regression to determine whether the covariate can predict the dependent variable, and then does a test of differences (ANOVA) of the residuals among the groups. If a significant difference among the groups remains, it signifies a significant difference on the dependent variable across the groups after the effect of the [READ MORE]
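The two-step procedure described above (regression on the covariate, then ANOVA on the residuals) can be sketched in Python with SciPy; the data below are synthetic, with a covariate, three hypothetical treatment groups, and a built-in group effect:

```python
import numpy as np
from scipy.stats import linregress, f_oneway

rng = np.random.default_rng(42)
n = 30                                         # respondents per group (made up)
covariate = rng.normal(50, 10, size=3 * n)     # e.g. a pre-test score
group = np.repeat([0, 1, 2], n)                # three hypothetical groups
effects = np.array([0.0, 2.0, 4.0])            # built-in group differences
dv = 5 + 0.6 * covariate + effects[group] + rng.normal(0, 2, size=3 * n)

# Step 1: regression to see whether the covariate predicts the dependent variable
slope, intercept, r, p_reg, se = linregress(covariate, dv)
residuals = dv - (intercept + slope * covariate)

# Step 2: ANOVA on the residuals among the groups
f_stat, p_anova = f_oneway(*(residuals[group == g] for g in (0, 1, 2)))
print(f"slope = {slope:.3f}, F = {f_stat:.2f}, p = {p_anova:.4f}")
```

This is only a conceptual sketch of the residual logic; in practice one would fit a full ANCOVA model (e.g. with a statistics package) that estimates the covariate and group effects simultaneously.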

### One-Sample Kolmogorov-Smirnov goodness-of-fit test

February 8, 2015

BRIEF DESCRIPTION The Kolmogorov-Smirnov (K-S) test is a goodness-of-fit measure for continuously scaled data. It tests whether the observations could reasonably have come from a specified distribution, such as the normal distribution (or the Poisson, uniform, or exponential distribution, etc.), so it is most frequently used to test the assumption of univariate normality. The categorical-data counterpart is the Chi-square (χ²) goodness-of-fit test. The K-S test is a non-parametric procedure. SIMILAR STATISTICAL PROCEDURES: Adjusted Kolmogorov-Smirnov Lilliefors test (null [READ MORE]
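As an illustrative sketch in Python with SciPy, using synthetic samples tested against a fully specified standard normal distribution (when the distribution's parameters are instead estimated from the sample, the Lilliefors adjustment mentioned above applies):

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(0)
normal_sample = rng.normal(loc=0, scale=1, size=200)   # synthetic normal data
uniform_sample = rng.uniform(0, 1, size=200)           # synthetic non-normal data

# Compare each empirical distribution to the standard normal CDF
stat_n, p_n = kstest(normal_sample, "norm")
stat_u, p_u = kstest(uniform_sample, "norm")
print(f"normal sample:  D = {stat_n:.3f}, p = {p_n:.3f}")
print(f"uniform sample: D = {stat_u:.3f}, p = {p_u:.3f}")
```

The uniform sample produces a much larger K-S statistic and a small p-value, so normality is rejected for it.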

### Why ANOVA and not multiple t-tests? Why MANOVA and not multiple ANOVAs, etc.?

September 9, 2013

ANOVA reigns over the t-test and MANOVA reigns over the ANOVA. Why? If we want to compare several predictors with a single outcome variable, we can either run a series of t-tests or a single factorial ANOVA. Not only is a factorial ANOVA less work, but conducting a separate t-test for each predictor results in a higher probability of making a Type I error. In fact, every single t-test carries a chance of a Type I error, and conducting several t-tests compounds this probability. In contrast, a single factorial ANOVA controls for this error so that the probability [READ MORE]
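The compounding of Type I error can be shown with simple arithmetic: for k independent tests each at significance level α, the chance of at least one false positive is 1 − (1 − α)^k. A short illustration:

```python
def familywise_error(k: int, alpha: float = 0.05) -> float:
    """Probability of at least one Type I error across k independent tests."""
    return 1 - (1 - alpha) ** k

# With alpha = .05, the familywise error rate grows quickly with k
for k in (1, 3, 6):
    print(f"{k} tests -> familywise error = {familywise_error(k):.3f}")
```

Already at three tests the familywise error rate is about .14, nearly triple the nominal .05, which is exactly what a single factorial ANOVA avoids.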

### Measuring effect size and statistical power analysis

October 3, 2012

Effect size measures are crucial for establishing practical significance, in addition to statistical significance. Please read the post “Tests of Significance are dangerous and can be very misleading” to better appreciate the importance of practical significance. Normally we only consider differences and associations from a statistical-significance point of view and report at what level (e.g. p < .001) we reject the null hypothesis (H0) and accept that there is a difference or association (note that we can never “accept the alternative hypothesis (H1)” – see the [READ MORE]
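As one common effect-size example, Cohen's d for a two-group mean difference can be computed by hand; the means, standard deviations, and group sizes below are hypothetical:

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d: standardized mean difference using a pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# Hypothetical satisfaction scores for two groups of 100 respondents each
d = cohens_d(4.6, 4.2, 0.8, 0.8, 100, 100)
print(f"Cohen's d = {d:.2f}")
```

Here d = 0.5, conventionally a "medium" effect: a difference that may be statistically significant at large n yet still needs this kind of measure to judge whether it matters in practice.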

### Which test: Compare MORE THAN TWO DEPENDENT groups (Paired, Matched, Same respondent groups)

July 20, 2012

When the research objective is to compare more than two dependent groups (i.e. paired, matched, or the same respondents, as in a pre-/post-test), we have a choice among different statistical procedures, depending on the following variable characteristics: Number of variables: [Unless otherwise indicated] One dependent variable and one independent categorical variable (more than two levels or groups). Examples: Are the means / frequencies (on the dependent variable) of the same respondents over more than two different time periods significantly [READ MORE]
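One non-parametric option for this situation, the Friedman test for repeated measures, can be sketched in Python with SciPy; the ratings and the three time points below are made up:

```python
from scipy.stats import friedmanchisquare

# Hypothetical ratings from the same 8 respondents at three time points
time1 = [3, 2, 4, 3, 5, 2, 3, 4]
time2 = [4, 3, 5, 4, 6, 3, 4, 5]
time3 = [5, 4, 6, 5, 7, 4, 5, 6]

# Friedman test: ranks each respondent's scores across the time points,
# then tests whether the rank distributions differ
stat, p = friedmanchisquare(time1, time2, time3)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")
```

Because every respondent's score rises at each time point, the test is clearly significant here; a parametric alternative would be a repeated-measures ANOVA.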

### Which test: Compare TWO DEPENDENT groups (Paired, Matched, Same respondent groups)

July 19, 2012

When the research objective is to compare two dependent groups (i.e. paired, matched, or the same respondents, as in a pre-/post-test), we have a choice among different statistical procedures, depending on the following variable characteristics: Number of variables: One dependent variable and one independent categorical variable (two levels or groups). Examples: Are the means / frequencies (on the dependent variable) of the same respondents over two different time periods, such as in a pre-/post-test, significantly different? Are [READ MORE]
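The pre-/post-test example can be sketched with a paired-samples t-test in Python with SciPy; the scores below are hypothetical:

```python
from scipy.stats import ttest_rel

# Hypothetical pre- and post-test scores for the same 8 respondents
pre  = [3, 4, 2, 5, 4, 3, 4, 5]
post = [4, 5, 4, 6, 5, 4, 5, 7]

# Paired t-test: tests whether the mean within-respondent difference is zero
t_stat, p = ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p:.4f}")
```

For non-normal or ordinal data, the Wilcoxon signed-rank test (`scipy.stats.wilcoxon`) is the usual non-parametric counterpart.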

### Which test: Compare MORE THAN TWO INDEPENDENT groups (Unpaired, Unmatched, Different respondent groups)

July 18, 2012

When the research objective is to compare more than two independent groups (i.e. unpaired, unmatched, different respondent groups), we have a choice among different statistical procedures, depending on the following variable characteristics: Number of variables: [Unless otherwise indicated] One dependent variable and one independent categorical variable (more than two levels). Examples: Are the means / frequencies of more than two independent groups of respondents significantly different? When the dependent variable is BINOMIAL / BINARY / [READ MORE]
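For an interval-scaled dependent variable, the standard choice here is a one-way ANOVA, sketched below in Python with SciPy on made-up scores for three hypothetical groups:

```python
from scipy.stats import f_oneway

# Hypothetical scores for three independent groups of respondents
group_a = [2, 3, 2, 3, 2]
group_b = [4, 5, 4, 5, 4]
group_c = [6, 7, 6, 7, 6]

# One-way ANOVA: tests whether the group means differ
f_stat, p = f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p:.4f}")
```

The non-parametric counterpart for ordinal or non-normal data is the Kruskal-Wallis test (`scipy.stats.kruskal`).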

### Which test: Compare TWO INDEPENDENT groups for differences (Unpaired, Unmatched, Different respondent groups)

July 16, 2012

When the research objective is to compare two independent groups (i.e. unpaired, unmatched, different respondent groups), we have a choice among different statistical procedures, depending on the following variable characteristics: Number of variables: One dependent variable and one independent categorical variable (two levels or groups). Examples: Are the means / frequencies of two independent groups of respondents (e.g. males vs. females) significantly different on the scores of the dependent variable? When the dependent variable is BINOMIAL / [READ MORE]
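The males-vs.-females example can be sketched with an independent-samples t-test in Python with SciPy; the scores below are hypothetical:

```python
from scipy.stats import ttest_ind

# Hypothetical satisfaction scores for two independent groups
males   = [4, 5, 4, 5, 4, 5]
females = [2, 3, 2, 3, 2, 3]

# Independent-samples t-test: tests whether the two group means differ
t_stat, p = ttest_ind(males, females)
print(f"t = {t_stat:.2f}, p = {p:.4f}")
```

For ordinal or non-normal data, the Mann-Whitney U test (`scipy.stats.mannwhitneyu`) is the usual non-parametric alternative.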

### Which test: Compare a single group mean or frequency to a hypothetical / known value or proportion

July 16, 2012

When the research objective is to compare a single group mean or frequency to a hypothetical / known value or proportion (such as an action standard or a norm), we have a choice among different statistical procedures, depending on the following variable characteristics: Number of variables: One dependent variable. Examples: Is our mean customer satisfaction score significantly different from the industry average (or action standard) of e.g. 4.6? Is the 54/46 gender proportion in our sample significantly different from the population’s gender proportion of 51/49? When [READ MORE]
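The satisfaction-score example can be sketched with a one-sample t-test in Python with SciPy; the scores and the 4.6 action standard below are hypothetical:

```python
from scipy.stats import ttest_1samp

# Hypothetical satisfaction scores from 8 customers, compared against a
# hypothetical industry average (action standard) of 4.6
scores = [4.0, 4.1, 3.9, 4.2, 4.0, 3.8, 4.1, 4.0]

# One-sample t-test: tests whether the sample mean differs from 4.6
t_stat, p = ttest_1samp(scores, popmean=4.6)
print(f"t = {t_stat:.2f}, p = {p:.4f}")
```

For the proportion example, a binomial test (`scipy.stats.binomtest`) would compare the observed sample proportion to the known population proportion.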