Repeated Measures ANOVA versus Linear Mixed Models

March 9, 2017

You want to measure the performance of the same individuals over a period of time (repeated observations) on an interval-scale dependent variable, but which procedure should you use? We are looking for an equivalent of the paired-samples t-test, but one that allows for two or more levels of the categorical variable, e.g. pre, during, post. Repeated Measures ANOVA [SPSS: ANALYZE / GENERAL LINEAR MODEL / REPEATED MEASURES] is simpler to use, but sadly it is often not as accurate or flexible as Linear Mixed Models [SPSS: ANALYZE / MIXED MODELS / LINEAR]. Reminder that the Linear [READ MORE]
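
For readers working outside SPSS, a roughly equivalent random-intercept model can be sketched in Python with statsmodels; the data frame and the column names "subject", "phase", and "score" below are hypothetical, not from the post:

```python
# A minimal sketch of the mixed-models alternative: a random intercept per
# subject accounts for the repeated observations on the same individuals.
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per subject per measurement occasion (invented).
df = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "phase":   ["pre", "during", "post"] * 4,
    "score":   [10, 14, 12, 9, 13, 11, 11, 15, 14, 8, 12, 10],
})

model = smf.mixedlm("score ~ phase", data=df, groups=df["subject"])
print(model.fit().summary())
```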

Statistical Modeling: A Primer (by Kevin Gray)

March 7, 2017

An interesting article by Kevin Gray of Cannon Gray (http://cannongray.com): "Model" means different things to different people and different things at different times. As I briefly explain in A Model’s Many Faces, I often find it helpful to classify models as conceptual, operational or statistical. In this post we’ll have a closer look at the last of these, statistical models. First, it’s critical to understand that statistical models are simplified representations of reality and, to paraphrase the famous words of statistician George Box, they’re all wrong but some of them [READ MORE]

Significance Testing – Three Concerns

January 19, 2017

Some words of caution about significance testing by Kevin Gray: “I’ve long had three major concerns about significance testing. First, it assumes probability samples, which are rare in most fields. For example, even when probability sampling (e.g., RDD) is used in consumer surveys, because of (usually) low response rates, we don’t have true probability samples. Secondly, it assumes no measurement error. Measurement error can work in mysterious ways but generally weakens relationships between variables. Lastly, like automated modeling, it passes the buck to the machine and [READ MORE]

Getting the hang of z-scores

January 4, 2017

If we have a sample of data drawn randomly from a normally distributed population, we can assume that the sampling distribution of the mean is also normal (and, by the central limit theorem, this holds approximately even for non-normal populations once the sample size exceeds about 30). If we have a mean of zero and a standard deviation (SD) of 1, then we can calculate the probability of getting a particular score based on the frequencies we have. To centre our data around a mean of zero, we subtract the overall mean from each individual score, then divide the result by the standard deviation. This is the process of standardisation of raw data into z-scores. This [READ MORE]
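
The standardisation itself is one line of arithmetic, z = (x - mean) / SD. A quick sketch in Python (the sample scores are made up for illustration):

```python
# Standardising raw scores into z-scores: z = (x - mean) / sd.
import statistics

scores = [52, 60, 45, 70, 58, 63, 49, 55]  # made-up raw scores
mean = statistics.mean(scores)
sd = statistics.stdev(scores)  # sample SD (n - 1 denominator)

z_scores = [(x - mean) / sd for x in scores]
print([round(z, 2) for z in z_scores])  # now centred on 0 with SD 1
```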

Research questions and hypotheses?

December 9, 2016

When doing proposals or client reports, we often refer to “research questions” and “research hypotheses” (sometimes used interchangeably). What is the difference? Research questions do NOT entail specific predictions (about the magnitude or direction of the outcome variable) and are therefore phrased in question format; they can ask about descriptives, differences, or associations (relationships). They help the researcher choose the most appropriate statistical techniques. Let’s look at each: 1. Research questions that relate to describing [READ MORE]

Test statistics and significance

November 27, 2016

Test statistics such as the F-test, the t-test, and the χ² test all compare the variance explained by our model (the effect) with the variance not explained by our model (the error). Our model can be as basic as a mean score, calculated as the sum of the observed scores divided by the number of observations. If this ratio is greater than 1, the variance explained (effect) is larger than the variance not explained (error); the higher the ratio, the better our model. Let’s say it is 5 (rather than 1), so the proportion of explained variance (effect) is 5 times [READ MORE]
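
As a concrete sketch of this effect-to-error ratio, the one-way ANOVA F statistic can be computed in Python with SciPy (the three groups of scores are invented for illustration):

```python
# One-way ANOVA: F = between-group (effect) variance / within-group (error)
# variance, so F well above 1 means the model explains more than it misses.
from scipy.stats import f_oneway

group_a = [23, 25, 21, 27, 24]  # invented scores
group_b = [30, 33, 29, 35, 31]
group_c = [22, 20, 25, 23, 21]

f_stat, p_value = f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```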

Outlier cases – bivariate and multivariate outliers

August 14, 2016

In follow-up to the post about univariate outliers, there are a few ways to identify the extent of bivariate and multivariate outliers. First, do the univariate outlier checks and, with those findings in mind (and with no immediate remedial action), follow some or all of these bivariate or multivariate outlier identifications, depending on the type of analysis you are planning. BIVARIATE OUTLIERS: For one-way ANOVA, we can use the GLM (univariate) procedure to save standardised or studentized residuals. Then do a normal [READ MORE]
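
One widely used multivariate check (not necessarily the one the full post goes on to describe) is the Mahalanobis distance of each case from the centroid, compared against a chi-square cutoff; a sketch in Python with simulated data:

```python
# Flag multivariate outliers via squared Mahalanobis distance, using the
# conventional chi-square cutoff at p < .001. Data are simulated.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))   # 100 cases, 3 variables
X[0] = [6.0, -6.0, 6.0]         # plant one obvious outlier

diff = X - X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)  # squared distances

cutoff = chi2.ppf(0.999, df=X.shape[1])
print(np.where(d2 > cutoff)[0])  # indices of flagged cases
```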

Statistical Power Analysis

July 15, 2016

(Statistical) Power Analysis refers to the ability of a statistical test to detect an effect of a certain size, if the effect really exists. In other words, power is the probability of correctly rejecting the null hypothesis when it should be rejected. So while statistical significance deals with Type I (α) errors (false positives), power analysis deals with Type II (β) errors (false negatives), which means power = 1 − β. Cohen (1988) recommends that research studies be designed with an alpha level of .05, and if we use Cohen’s rule of .2 for β, then 1 − β = 0.8 (an 80% [READ MORE]
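
An a-priori power calculation is a one-liner in Python’s statsmodels, using Cohen’s conventional alpha = .05 and power = .80; the “medium” effect size of d = 0.5 is assumed purely for illustration:

```python
# How many cases per group does an independent-samples t-test need to reach
# power = .80 at alpha = .05, assuming a medium effect size (d = 0.5)?
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5,
                                          alpha=0.05, power=0.80)
print(round(n_per_group))  # cases needed in each group
```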

Skepticism in Social Media

June 15, 2016

I was talking this morning with someone about which blogs that review products and/or services are the most popular around my part of the world – Asia. I consulted Google Search but could not come up with an answer. I did, however, come across a report (June 25, 2012) by Kristen Sala, Senior Manager, Electronic Media at Cision (a public relations software and media tools firm) that lists the Top 50 independent “Product Review Blogs” in North America. Mama-B Blog is first, followed by Computer Audiophile, and 48 others. Still, I could not find much information [READ MORE]

Data Assumption: Multicollinearity

May 13, 2016

Very brief description: Multicollinearity is a condition in which the independent variables are highly correlated (r = 0.8 or greater), such that the effects of the independents on the outcome variable cannot be separated. In other words, one of the predictor variables can be nearly perfectly predicted by one of the other predictor variables. Singularity occurs when the independent variables are (almost) perfectly correlated (r = 1), so any one of the independent variables could be regarded as a combination of one or more of the other independent variables. In practice, you should not [READ MORE]
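
Beyond eyeballing pairwise correlations, a common diagnostic is the variance inflation factor (VIF); here is a sketch in Python’s statsmodels with invented predictors (VIF above 10 is one common rule of thumb, not a claim from the post):

```python
# VIF_i = 1 / (1 - R²_i), where R²_i comes from regressing predictor i on
# all the other predictors; large values flag multicollinearity.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = 0.95 * x1 + rng.normal(scale=0.1, size=200)  # nearly collinear with x1
x3 = rng.normal(size=200)

X = sm.add_constant(pd.DataFrame({"x1": x1, "x2": x2, "x3": x3}))
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, round(variance_inflation_factor(X.values, i), 1))
```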

Is my Likert-scale data fit for parametric statistical procedures?

April 8, 2016

We’re all very familiar with the “Likert scale”, but do we know that a true Likert scale consists not of a single item but of several items, which, under the right conditions – i.e. subjected to an assessment of reliability (e.g. inter-correlations between all pairs of items) and validity (e.g. convergent, discriminant, construct, etc.) – can be summed into a single score? The Likert scale is a unidimensional scaling method (it measures a one-dimensional construct), is bipolar, and in its purest form consists of only 5 scale points, though often we refer to a [READ MORE]
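
A standard way to check whether a set of Likert items hangs together well enough to be summed is Cronbach’s alpha; the excerpt mentions reliability via inter-item correlations, so alpha is our choice of statistic here, and the ratings are invented:

```python
# Cronbach's alpha as an internal-consistency check before summing items.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: cases x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Five respondents rating four 5-point Likert items (invented data).
ratings = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(ratings), 2))  # >= .7 is a common benchmark
```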

Variables and their many names

March 12, 2016

Many of the statistical procedures used by marketing researchers are based on “general linear models” (GLM). These can be categorised into univariate, multivariate, and repeated-measures models. The underlying statistical formula is Y = Xb + e, where Y is generally referred to as the “dependent variable”, X as the “independent variable”, b as the “parameters” to be estimated, and e as the “error” or noise present in all models (also generally referred to as the statistical error, error term, or residuals). Note that both [READ MORE]
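
A minimal sketch of Y = Xb + e as an ordinary least-squares fit in Python’s statsmodels, with simulated data so the true parameters are known:

```python
# Simulate Y = Xb + e, then recover b by OLS.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
X = rng.normal(size=(100, 2))        # two independent variables
e = rng.normal(scale=0.5, size=100)  # the error term
y = 1.0 + X @ [2.0, -1.5] + e        # intercept 1.0, slopes 2.0 and -1.5

result = sm.OLS(y, sm.add_constant(X)).fit()
print(result.params)  # estimates close to [1.0, 2.0, -1.5]
```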