# Reporting statistics in client reports – A few thoughts

A few thoughts on reporting statistical findings (and no, this is not academic jargon – it is simply the smart way to report findings to your clients):

1. As marketing researchers we generally follow the APA (American Psychological Association) style of reporting, so when reporting e.g. significance (p < .05), never put a ‘0’ in front of the decimal if the number cannot be greater than 1.00. So p < .05 is correct, and p < **0**.05 is not the proper reporting style. p-values, correlation coefficients, and statistical power all range between 0 and 1, and as such are all reported without the leading 0. (Note that effect sizes such as Cohen’s d *can* exceed 1, so they keep their leading zero – e.g. d = 0.69.)
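As a small sketch of this convention, here is a hypothetical helper (the function name and three-decimal rounding are my own choices, not an APA requirement) that strips the leading zero from any statistic bounded between 0 and 1:

```python
def format_stat(value, label="p"):
    """Format a statistic bounded between 0 and 1 APA-style: no leading zero."""
    s = f"{value:.3f}"          # round to three decimal places
    if s.startswith("0."):
        s = s[1:]               # ".045" instead of "0.045"
    return f"{label} = {s}"

print(format_stat(0.045))       # p = .045
print(format_stat(0.25, "r"))   # r = .250
```

The same formatting applies to r, p, and power; an effect size like Cohen's d would bypass this helper and keep its leading zero.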

2. The value of p can never be zero. It is asymptotic, so even if the value in the output shows p = .000 (because the real value could be .00000024…), report it as p < .001, because we know the value lies somewhere past that .000. We just don’t know whether it is one more decimal place or a thousand more.
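A minimal sketch of this rule (function name is my own; the .001 floor follows the APA convention of never printing p = .000):

```python
def format_p(p):
    """Report p APA-style; never print p = .000."""
    if p < 0.001:
        return "p < .001"            # smallest value we claim in a report
    return "p = " + f"{p:.3f}"[1:]   # strip the leading zero

print(format_p(0.00000024))  # p < .001
print(format_p(0.032))       # p = .032
```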

3. In hypothesis testing, note that failing to reject the null hypothesis does NOT mean you can accept it: you can never prove the null hypothesis is true, so you can never accept it. If you fail to reject the null by a small margin, do not report it as “almost significant”; likewise, if you reject the null with a very small p-value (e.g. p < .001), do not report it as “highly significant”. In hypothesis testing, the findings are either significant or not significant, and you either reject or fail to reject the null. So make sure to speak the right language.

4. In several posts here on *IntroSpective Mode*, I have focused on the importance of practical significance in addition to the commonly used statistical significance (and its dangers), by calculating and reporting effect size and statistical power. Here is an example of how to report the findings:

*“The difference is significant at p < .05, with an effect size of 0.69, which is a moderate (and typical) effect, and the power is 0.90 (well above Cohen’s suggested 0.80), so the probability that I have made either type of error is remote. I am therefore very confident in saying there is a significant, moderate, and correctly identified difference between my two test groups.”*
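To show where numbers like these come from, here is a sketch that computes Cohen's d from pooled standard deviations and an *approximate* power figure using a normal approximation to the two-sample t-test. All the group means, SDs, and sample sizes below are made-up illustrative values, not data from the example above:

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d using the pooled standard deviation."""
    pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def approx_power(d, n_per_group):
    """Normal-approximation power for a two-sided two-sample t-test at alpha = .05."""
    z_crit = 1.96                               # two-sided critical value at .05
    ncp = abs(d) * math.sqrt(n_per_group / 2)   # noncentrality parameter
    return normal_cdf(ncp - z_crit)

# hypothetical groups: means 7.8 vs 6.4, SDs 2.0 and 2.1, n = 45 per group
d = cohens_d(7.8, 6.4, 2.0, 2.1, 45, 45)
print(round(d, 2))              # a moderate effect, ~0.68
print(round(approx_power(d, 45), 2))
```

For client work, a dedicated power routine (e.g. in a statistics package) would be more exact; the approximation above is only meant to make the arithmetic transparent.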

5. Reporting z-scores the APA way:

Example: “Standardised z-scores were computed for the raw customer satisfaction scores. For the raw score of 9.2, z = 2.05. This z-score tells us that this particular score was well above the average customer satisfaction score.”
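The z-score itself is simple arithmetic: the distance of a raw score from the mean, in standard-deviation units. The sample mean and SD below are made-up values chosen so the sketch reproduces the z = 2.05 of the example (the original does not state them):

```python
def z_score(x, mean, sd):
    """Standardised score: how many SDs x lies from the sample mean."""
    return (x - mean) / sd

# hypothetical sample: mean satisfaction 7.15, SD 1.0
print(round(z_score(9.2, 7.15, 1.0), 2))  # 2.05
```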

6. Much can be written about the best ways to report statistics in your client reports. While we obviously don’t want to look like statistics Ph.D.s, some clients are savvy enough to appreciate proper statistics reporting, and others actually do understand how to interpret it. Either way, report it briefly in the body of the report and in more detail in a “Statistical Findings” or “Detailed Results” section at the end. You may not only impress your boss but will most certainly gain trust among your clients (even if they don’t understand it – just make sure you do, in case they ask you to explain). Here is the bare minimum required in any significance-testing report:

- The value of the test statistic, e.g. t = 5.45 or r = .25
- The degrees of freedom (*df*) and, for chi-square, also the N-value, e.g. *df* = 3, N = 35
- The significance value (*p*), e.g. p < .001
- The direction of the finding, by stating which sample mean is the larger. In correlation analysis, and only if the statistic is significant, state the sign (+ or -) of the correlation.
- The effect size.
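The checklist above can be sketched as a small assembly function for a t-test result line. This is my own illustrative helper (name, signature, and the free-text `direction` argument are assumptions, not an APA template):

```python
def apa_t_result(t, df, p, d, direction=""):
    """Assemble a minimal APA-style t-test report line: statistic, df, p, effect size."""
    p_str = "p < .001" if p < 0.001 else "p = " + f"{p:.3f}"[1:]
    line = f"t({df}) = {t:.2f}, {p_str}, d = {d:.2f}"
    return f"{line}; {direction}" if direction else line

print(apa_t_result(5.45, 34, 0.0004, 0.69,
                   "group A scored higher than group B"))
```

This prints a line such as `t(34) = 5.45, p < .001, d = 0.69; group A scored higher than group B`, which covers the test statistic, degrees of freedom, significance, direction, and effect size from the checklist.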