If statistical significance is found (e.g. p < .001), the next logical step is to assess practical significance, i.e. the effect size (e.g. the standardised mean difference between two groups). Effect sizes are a family of statistics that measure the magnitude of differences, treatment effects, and the strength of associations. Unlike statistical significance tests, effect size indices are not inflated by large sample sizes.
As effect size measures are standardised (units of measurement removed), they are easy to evaluate and easy to compare.
The most commonly used standardised effect size measure is Cohen’s d, which is not that different from a standardised regression coefficient (both have the standard deviation of the effect as the denominator). In its simplest form, d = (M1 − M2) / s_pooled, where M1 and M2 are the two group means and s_pooled is the pooled standard deviation.
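As a concrete illustration, Cohen’s d for two independent groups can be computed in a few lines of Python using only the standard library (the sample data below are invented for demonstration):

```python
import statistics

def cohens_d(group1, group2):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    # Pooled SD: sample variances weighted by their degrees of freedom
    s1_sq = statistics.variance(group1)
    s2_sq = statistics.variance(group2)
    s_pooled = (((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / s_pooled

# Hypothetical treatment and control scores
treated = [5.1, 4.9, 6.2, 5.8, 5.5]
control = [4.2, 4.0, 4.8, 4.4, 4.6]
print(round(cohens_d(treated, control), 2))
```

By Cohen’s conventional benchmarks, d ≈ 0.2 is a small effect, 0.5 medium, and 0.8 large, though these cut-offs should always be interpreted in the context of the research area.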
Other effect size measures for categorical independent variables (e.g. the ANOVA family) include Eta Squared (η²) and Partial Eta Squared (η²p). In fact, there are many effect size measures, which can broadly be divided into those that measure differences (the d-family, e.g. Cohen’s d) and those that measure association (the r-family, which focuses on strength of association, such as the Pearson correlation coefficient (r), Spearman’s rho, phi, eta [though eta is not recommended], and the multiple correlation (R)).
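Eta Squared is simply the between-groups sum of squares divided by the total sum of squares from a one-way ANOVA layout, i.e. the proportion of total variance explained by group membership. A minimal sketch (with invented data):

```python
import statistics

def eta_squared(*groups):
    """Eta squared = SS_between / SS_total for a one-way ANOVA layout."""
    all_values = [x for g in groups for x in g]
    grand_mean = statistics.mean(all_values)
    ss_total = sum((x - grand_mean) ** 2 for x in all_values)
    ss_between = sum(
        len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups
    )
    return ss_between / ss_total

# Three hypothetical groups measured on the same outcome
g1 = [3, 4, 5]
g2 = [5, 6, 7]
g3 = [7, 8, 9]
print(round(eta_squared(g1, g2, g3), 3))  # → 0.8
```

Partial Eta Squared differs in that its denominator is SS_effect + SS_error rather than SS_total, so in multi-factor designs it is typically larger than η² for the same effect.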
For dichotomous variables (e.g. in a chi-square analysis) we have Yule’s Q, Yule’s Y, etc. For nominal variables we can use phi, the Contingency Coefficient, Cramer’s V, Tschuprow’s T, Lambda, and the Uncertainty Coefficient.
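Cramer’s V, for instance, rescales the chi-square statistic of a contingency table to the 0–1 range, so tables of different sizes become comparable. A hedged sketch, computing chi-square by hand from observed and expected counts (the 2×2 table below is invented; for a 2×2 table, Cramer’s V equals the absolute value of phi):

```python
def cramers_v(table):
    """Cramer's V from an r x c contingency table of observed counts."""
    n = sum(sum(row) for row in table)
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (obs - expected) ** 2 / expected
    k = min(len(table), len(table[0]))  # smaller of the two table dimensions
    return (chi2 / (n * (k - 1))) ** 0.5

table = [[30, 10],
         [10, 30]]
print(round(cramers_v(table), 2))  # → 0.5
```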
For ordinal data the options include Gamma, Kendall’s tau-b (and tau-c), and Somers’ d (note that some of these are strength-of-association measures while others are directional measures).
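Goodman and Kruskal’s Gamma is the simplest of these: it compares the number of concordant pairs (both rankings agree on the ordering) with the number of discordant pairs, ignoring ties. A minimal sketch with invented Likert-style ratings:

```python
def gamma(x, y):
    """Goodman-Kruskal gamma: (C - D) / (C + D) over all pairs, ignoring ties."""
    concordant = discordant = 0
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            dx = x[i] - x[j]
            dy = y[i] - y[j]
            if dx * dy > 0:       # pair ordered the same way on both variables
                concordant += 1
            elif dx * dy < 0:     # pair ordered in opposite directions
                discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Two ordinal ratings (e.g. 1-5 Likert scores) for the same six respondents
x = [1, 2, 2, 3, 4, 5]
y = [1, 1, 2, 3, 5, 4]
print(round(gamma(x, y), 2))  # → 0.85
```

Kendall’s tau-b follows the same concordant/discordant logic but adjusts the denominator for ties, which is why gamma is usually larger than tau-b on the same data.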
Remember: it is critically important to consider practical significance when the sample size is large to very large, as a statistical significance test is then very likely to detect a small difference that, although statistically significant, has no substantive (practical) value.