The not-so-ultimate question?
1/12/09 / David Kennedy
In a recent edition of Quirk’s Marketing Research Review, there was an article (Article ID: 20081004…; requires registration, but it is free) on the Net Promoter Score (NPS). The NPS has long been touted by many as the best way to measure customer satisfaction, and a key characteristic of the system is its simplicity.
In case you haven’t heard of it, the NPS consists of a single loyalty question: “How likely is it that you will recommend [company] to your friend?” There are other versions of the question as well, but essentially you’re asking whether the customer would recommend the company or not. The scale (which can also differ between uses) is a simple 0-10 scale, with 0 meaning “not at all likely” and 10 meaning “extremely likely.” Based on their response, customers are grouped into categories of promoters (a 9 or 10 response), passives (a 7 or 8 response), and detractors (a 6 or below). To calculate the NPS, you subtract the proportion of detractors from the proportion of promoters. The more positive the NPS, the better. To learn more, you can read the book, “The Ultimate Question.”
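The grouping and subtraction described above can be sketched in a few lines of Python. The function name and the sample scores are my own for illustration; only the 9-10 / 7-8 / 0-6 cutoffs and the promoters-minus-detractors formula come from the NPS system itself.

```python
def net_promoter_score(responses):
    """Return the NPS: percentage of promoters minus percentage of detractors.

    `responses` is a list of 0-10 answers to the likelihood-to-recommend question.
    """
    if not responses:
        raise ValueError("need at least one response")
    promoters = sum(1 for r in responses if r >= 9)   # 9 or 10
    detractors = sum(1 for r in responses if r <= 6)  # 6 and below
    # Passives (7 or 8) count toward the total but not toward either group.
    return 100.0 * (promoters - detractors) / len(responses)

# Hypothetical sample: 4 promoters, 3 passives, 3 detractors out of 10
scores = [10, 9, 9, 10, 8, 7, 7, 6, 5, 3]
print(net_promoter_score(scores))  # 10.0
```

Note that the score can range from -100 (all detractors) to +100 (all promoters), which is part of what makes it easy for managers to read at a glance.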
While I personally like the simplicity of the system, and we have used this question in our own research, we have never relied on only that question (or any other one question, for that matter).
The article’s author, Bob Hayes, suggested using an aggregate of several general satisfaction questions to create an overall satisfaction index that is less prone to error. So, in addition to “how likely are you to recommend?” you would also ask “how satisfied are you overall?” “how likely are you to continue purchasing there?” and so on. Then you would take the average of those questions to create an aggregate score that gives a more precise measurement than any one question alone.
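The averaging step is simple enough to show directly. This is only a sketch of the idea: the question names and the sample answers below are hypothetical, and a real index would need all questions on the same 0-10 scale before averaging.

```python
def satisfaction_index(answers):
    """Average one customer's responses across several 0-10 satisfaction questions."""
    if not answers:
        raise ValueError("need at least one answer")
    return sum(answers.values()) / len(answers)

# Hypothetical responses to three general satisfaction/loyalty questions
customer = {
    "likely_to_recommend": 9,
    "overall_satisfaction": 8,
    "likely_to_repurchase": 10,
}
print(satisfaction_index(customer))  # 9.0
```

Averaging several questions smooths out the measurement error in any single one, which is the article's argument for an index over a lone question.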
But the real value of the article, in my opinion, was the discussion of general predictors (i.e., general satisfaction and loyalty questions) versus specific predictors (i.e., how likely are you to take some specific action, such as cancel your service). General questions are good at predicting general business outcomes such as revenue, whereas specific predictors are better at predicting specific outcomes. A combination of the two approaches can produce the best metrics. The critical thing is to know what you specifically need to measure first, then design the questions around those needs.
Overall, I agree with the article that adding more of the right questions will make the results stronger and hopefully more specific. However, I think it is important to remember that simplicity should not be ignored, both for the customer taking the survey and for the manager using the results. For management specifically, one of the benefits of the NPS system is its easy-to-understand metric. While more data is often good, it means nothing if it is not actionable by the end user.
Or you can stop hammering away at your customers with long surveys and build a relationship with them instead. If you can’t afford to have a person talk to your customers on the phone and personalize your emails, then you should just go out of business now.
This is challenging work, because the details matter, and because some less-than-obvious factors can influence the utility of the comparisons. Companies in certain businesses may need to make sampling or other analytic adjustments to get accurate, useful data. Lenders, for instance, almost always find that likelihood to recommend doesn’t line up with the behaviors that create economic value until they adjust for credit risk.