Net Promoter Score: The Most Useless Metric of All
A number of organisations use a customer-service metric known as the "Net Promoter Score" (NPS), first suggested in the Harvard Business Review. Indeed, it is so common that apparently two-thirds of Fortune 500 companies use it. It asks a single question: "How likely is it that you would recommend [company X] to a friend or colleague?". Respondents answer on a zero-to-ten scale: a 9 or 10 counts as a "promoter", a 7 or 8 as "neutral", and a 0 to 6 as a "detractor". The Net Promoter Score is calculated by subtracting the percentage of respondents who are detractors from the percentage who are promoters. It is a simple, blunt instrument, and it is entirely the wrong tool to use.
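The calculation described above can be sketched in a few lines; the function name and sample data here are illustrative, not part of any official NPS tooling:

```python
# Minimal sketch of the standard NPS calculation described in the text.
def nps(scores):
    """Return the Net Promoter Score for a list of 0-10 responses."""
    n = len(scores)
    promoters = sum(1 for s in scores if s >= 9)   # 9 or 10
    detractors = sum(1 for s in scores if s <= 6)  # 0 through 6
    # NPS = % promoters minus % detractors, reported on a -100..100 range.
    return 100 * (promoters - detractors) / n

print(nps([10, 9, 8, 7, 6, 0]))  # 2 promoters, 2 detractors out of 6 -> 0.0
```

Note that the 7s and 8s contribute nothing to the result except padding the denominator.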
To begin with, it fails at the most elementary mathematics. There is nothing to be gained from collecting answers across an eleven-point range from 0 to 10, only to collapse them into three buckets of promoter, neutral, and detractor. In the Net Promoter system, a score of 6 counts as exactly the same detractor as a score of 0, despite what should be a glaringly obvious difference in reaction. It is stunning that a journal of the alleged quality of the Harvard Business Review didn't notice this, let alone the authors of the article.
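A small sketch makes the information loss concrete; the two cohorts below are invented for illustration, and the `nps` helper simply restates the standard formula:

```python
# Illustration of the three-bucket collapse discarding information.
def nps(scores):
    n = len(scores)
    return 100 * (sum(s >= 9 for s in scores) - sum(s <= 6 for s in scores)) / n

mildly_unhappy = [6] * 100  # every respondent one point short of "neutral"
furious        = [0] * 100  # every respondent gives the minimum possible score

print(nps(mildly_unhappy))  # -100.0
print(nps(furious))         # -100.0 -- identical, despite very different sentiment
```

Two populations with wildly different reactions produce the same score, so the metric cannot distinguish a near-miss from a disaster.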
Secondly, it conflates subjective responses with a quantitative value. What does a score of "6" mean anyway? According to the designers of the NPS, it is a detractor: a fail. Yet there is no guarantee that a respondent interprets the value that way. In most assessment systems a "6" is a pass, and more to the point a "7" or "8" is considered a distinction grade; the latter would earn a cum laude or even magna cum laude at most universities. In the NPS, however, it is merely a "neutral" result. The problem, of course, is that unless the respondent is given qualitative guidance alongside the values (which most organisations and applications do not provide), there is no way of determining what their subjective score of 0-10 really reflects. Numerical values cannot be translated into qualitative values unless all parties are provided a means of correlation.
Thirdly, a single-value NPS provides no information to act upon. What does it mean that a respondent would or would not recommend a company, product, or service? Even assuming a gradation is in place that maps qualitative assessments to numerical values, the answers still provide nothing to act upon. Is it the company or service as a whole that produced the evaluation? Is it one part of the company or service? Could it be, for a detractor, that the product or service was something they thought they needed, but actually didn't? Unless the score is supplemented with an opportunity for the respondent to explain their evaluation, it creates no opportunity for action.
Given these errors, it is perhaps unsurprising that an unmodified "Net Promoter" method of measuring customer satisfaction ranked last in predictive capability in an extensive study by Schneider et al. Granted, some information is better than no information, and people do prefer shorter surveys to longer ones. But in its pure form, using a Net Promoter Score is almost as bad as not collecting respondent data at all. A short survey that breaks the item under review into its composite components, that anchors subjective judgements to numerical values, that offers space for free-text qualitative comment, and that reports metrics along the whole scale (with mean and distribution) will always be a far more effective measurement of both a respondent's satisfaction and an organisation's opportunity for action. As it stands, the NPS should be avoided in all circumstances.
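The mean-and-distribution reporting suggested above might look like the following sketch; the response data is invented for illustration, and only standard-library functions are used:

```python
# Hedged sketch: report the mean, spread, and full distribution of 0-10
# responses instead of collapsing them into promoter/neutral/detractor buckets.
from collections import Counter
from statistics import mean, stdev

responses = [6, 6, 7, 8, 9, 6, 5, 10, 7, 6]  # illustrative, not real survey data

dist = Counter(responses)
print("mean:", round(mean(responses), 2))    # central tendency across the scale
print("stdev:", round(stdev(responses), 2))  # how much respondents disagree

for score in range(11):
    # a simple text histogram of the response distribution
    print(f"{score:2d} | {'#' * dist.get(score, 0)}")
```

Even this trivial summary preserves information the NPS throws away: the same respondents above would register simply as "five detractors, one promoter", hiding that most answers cluster just below the neutral band.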