Measuring importance

Let's take a quick look at how we measure importance, e.g. of attributes in purchase decisions or in customer satisfaction.
Traditionally we looked at stated importance, but generally we give preference to derived importance. Or so we've been taught.
Stated importance can be divided into constrained methods (e.g. a 5-point rating scale, constant sum, Q-sort, and rank order) and unconstrained methods such as unbounded rating scales and open-ended questions.
On the other (better) hand, derived importance can be established via correlation-based methods such as multiple regression, logistic regression, partial least squares, and “true driver analysis”, or via choice-based methods such as conjoint analysis and multinomial logit (MNL).
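To make the regression-based flavour of derived importance concrete, here is a minimal sketch in Python using NumPy. The attribute names, data, and weights are all hypothetical, invented for illustration: attribute ratings are regressed on an overall outcome (e.g. satisfaction), and the standardized coefficients are normalized into importance shares.

```python
import numpy as np

# Hypothetical survey data: rows = respondents, columns = 1-5 attribute
# ratings (price, quality, service); y = overall satisfaction score.
rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(200, 3)).astype(float)
y = 0.5 * X[:, 0] + 1.5 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 1, 200)

# Standardize so the coefficients are comparable across attributes.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
yz = (y - y.mean()) / y.std()

# Ordinary least squares fit of the standardized model.
beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)

# Derived importance: each attribute's share of the absolute weights.
importance = np.abs(beta) / np.abs(beta).sum()
for name, w in zip(["price", "quality", "service"], importance):
    print(f"{name}: {w:.2f}")
```

In this toy data, "quality" carries the largest true weight, so it emerges with the largest importance share; real driver analyses add model diagnostics and handle multicollinearity, which this sketch ignores.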

Each of the above has pros and cons. However, one of the most common methods still in use is the bounded interval rating scale (which is often more ordinal than interval), where respondents may consistently indicate that all items are important, with little discrimination (low variance).
While most of the methods listed above (e.g. forced rankings and derived importance) discriminate better than stated importance on interval rating scales, far superior still are trade-off scales and choice models such as conjoint analysis, paired-comparison analysis, and maximum difference scaling (MaxDiff).

MaxDiff is easy to administer, and the analysis is based on the customer's choices or trade-offs rather than on typical rating-scale responses, where all items can be marked as very important.
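A simple count-based MaxDiff analysis can be sketched in a few lines of Python. The tasks, item names, and responses below are hypothetical, purely for illustration: in each task the respondent sees a subset of items and picks the most and least important, and a best-minus-worst count score summarizes the choices.

```python
from collections import Counter

# Hypothetical MaxDiff tasks: each shows a subset of items and records
# the respondent's "most important" (best) and "least important" (worst).
tasks = [
    {"shown": ["price", "quality", "service", "brand"], "best": "quality", "worst": "brand"},
    {"shown": ["price", "quality", "speed", "brand"],   "best": "price",   "worst": "brand"},
    {"shown": ["quality", "service", "speed", "brand"], "best": "quality", "worst": "speed"},
]

best = Counter(t["best"] for t in tasks)
worst = Counter(t["worst"] for t in tasks)
shown = Counter(item for t in tasks for item in t["shown"])

# Count-based score: (times best - times worst) / times shown,
# ranging from -1 (always worst) to +1 (always best).
scores = {item: (best[item] - worst[item]) / n for item, n in shown.items()}
for item, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{item}: {s:+.2f}")
```

Commercial MaxDiff tools typically go beyond counts to hierarchical Bayes or multinomial logit estimation, but even this simple score already forces the discrimination that flat rating scales fail to produce.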
When you look at the pros and cons of the different scales available, you may wonder (as I do) why MaxDiff is not more commonly applied. Whichever scale you employ, think twice before using rating scales to measure importance.
This is a highly debatable subject, I know. That’s why I’m writing about it.