Setting the Record Straight on CSAT, NPS, and CES
Have we lost sight of something when it comes to asking our customers for feedback and gauging what’s truly important in the customer experience? Let’s first look at the three most popular customer survey metrics and how they’re measured and then I’ll share a way to approach the results more holistically.
Customer Satisfaction (CSAT)
CSAT is a measure of whether the customer was satisfied with the company, the customer service experience, or both. In the contact center we tend to see CSAT most frequently, largely because it's embedded in many of the cloud-based customer engagement platforms on the market, and it typically garners a high response rate.
Question formats tend to vary a bit. Some common versions include:
- Rating the company — Are you satisfied with our company?
- Rating the customer service department — Were you satisfied with the support you received?
- Rating the individual agent — Were you satisfied with the support Bob provided?
Survey timing also varies. Some companies survey customers immediately after each customer service transaction, whereas others wait until the case or issue has been resolved.
Finally, measurement methods vary. The industry standard is typically a 5-point scale, which allows for a neutral response, but some companies use a 3-point scale, a simple yes or no, or a series of smileys or emojis. While practices vary somewhat widely, we typically convert the results to a percentage for the sake of consistency.
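To make that percentage conversion concrete, here's a minimal sketch in Python. The 4-and-above cut-off is one common convention, not a universal standard, so adjust it to match however your platform defines a "satisfied" response.

```python
def csat_percentage(ratings):
    """Percent of 5-point responses counted as satisfied (4s and 5s).

    The 4-and-up cut-off is a common convention; some teams count
    only 5s, or work from a 3-point or yes/no scale instead.
    """
    if not ratings:
        return 0.0
    satisfied = sum(1 for r in ratings if r >= 4)
    return 100.0 * satisfied / len(ratings)

# 4 of 6 responses were a 4 or a 5, so CSAT is about 66.7%
print(round(csat_percentage([5, 4, 3, 5, 2, 4]), 1))
```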
Net Promoter Score (NPS)
NPS measures a customer’s willingness to recommend a company to a friend or colleague. The measurement standard tends to be more consistent than CSAT’s: a 0 to 10 scale where 0 means highly unlikely to recommend and 10 means highly likely. To calculate the score, take the percentage of detractors (the 0 to 6 responses) and subtract it from the percentage of promoters (the 9 and 10 responses); the 7 and 8 responses are passives and don’t factor into either side of the calculation. We’ve found that many of our clients track this metric but few give any visibility to the contact center or hold it directly accountable.
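That calculation can be sketched in a few lines of Python (the function name is mine, not a standard API). Because it's a difference of two percentages, the result ranges from -100 to +100 rather than 0 to 100.

```python
def net_promoter_score(scores):
    """NPS = % promoters (9-10) minus % detractors (0-6).

    7s and 8s are passives: they count toward the total number of
    responses but push the score toward neither extreme.
    """
    if not scores:
        return 0.0
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# 2 promoters and 2 detractors out of 6 responses cancel out: NPS is 0.0
print(net_promoter_score([10, 9, 8, 7, 6, 3]))
```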
Customer Effort Score (CES)
Finally, the up-and-comer in the group — Customer Effort Score. At a very basic level, CES is a measure of the amount of effort it took for a customer to get their problem solved. If you’d like to go into more depth on this metric and the whys behind it, I highly recommend reading The Effortless Experience.
While it’s still somewhat new, a few of our clients have recently moved to CES to measure customer service. One quick public service announcement about CES: the scale has changed a few times, and the current standard is a 1-7 scale where 1 means high effort and 7 means low effort. The score is then the percentage of 5-7 ratings. For more reading on this, I often point to this article by Gartner as the authority on CES.
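Under the scoring described above (the percentage of 5-7 responses), a sketch might look like the following. Note that some teams instead report CES as the average of all ratings, so check which convention your tooling uses before comparing numbers.

```python
def ces_percentage(ratings):
    """CES as the percentage of "low effort" (5-7) responses on a 1-7 scale."""
    if not ratings:
        return 0.0
    low_effort = sum(1 for r in ratings if r >= 5)
    return 100.0 * low_effort / len(ratings)

# 3 of 5 customers rated their effort a 5 or better: 60.0
print(ces_percentage([7, 6, 5, 4, 2]))
```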
The 3 most important things about customer surveys
I need to confess something. As a customer service leader, there’s a good chance I’ve been guilty of slapping the latest metric onto my post-interaction survey without much intentionality. Before you do the same, here are three things to consider.
1. Be strategic about how, where, and when you survey customers.
Each of these metrics potentially has a place in helping you gauge the success of your customer experience. As you consider asking your customers these questions, be deliberate about the key touchpoints or places in the journey where you want to measure success. Here are a few examples:
- CSAT — You want to know specifically how friendly, knowledgeable, and helpful your contact center agents are. A great time to ask customers is right after the case is solved or the call or chat has ended while the experience is still fresh in their mind.
- CES — You’ve done an initiative around first contact resolution and empowered your contact center agents to handle more issues without having to escalate tickets. Perhaps you’ve invested a ton of time in improving self-help resources so customers don’t have to contact support at all and you want to measure the success of these initiatives.
- NPS — Recommendations and referrals are huge in many businesses so maybe it makes sense to ask the NPS question after they go through the sales cycle, or after they use the product, or after you haven’t heard from them for a while. At any one of these points you may want to understand the customer’s perception of your company and work to manage that perception.
2. Tie quality assurance to your customer metrics
I’m looking directly at contact center leaders when making this point. I recommend staying away from incentivizing your agents to drive customer metrics higher. If you’re considering this, think through whether any of these metrics can be gamed in such a way that you no longer get an accurate representation of where you truly stand with your customers.
That being said, at FCR we have worked to shift our quality assurance process beyond simply telling our agents what they did right or wrong on an interaction. Very simply, we add one of these metrics, often CSAT, to our quality form and ask our quality coaches to help their team see the impact the service they provide has on the overall customer experience — good or bad.
3. Listen to the voice of the customer (VOC)
Going back to the Wall Street Journal article, I hope you saw this sentence:
“Mr. Bennett (Former CEO of Intuit and Symantec) said he came to learn that the score is less meaningful than the open-ended question that can follow the rating.”
That’s the point of the whole article. If you do nothing else, the act of truly listening to the answers customers give when you ask them one of these survey questions is key. Here’s a quick four-step process for listening to the voice of the customer and doing something about it:
- Collect results — Move beyond the scores and be sure to gather both the verbatim comments and an issue type, if selected during the interaction, to hear what customers are saying about their experience. While you’ll mostly be looking at negative feedback, customers will occasionally provide constructive feedback on positive responses, so don’t ignore those.
- Do the work of scrubbing the feedback — While the customer service provided may be an issue, there are times when the agent did everything perfectly and the customer was still unhappy. When reading the feedback look for what truly impacted the score using these buckets: People, Policy, Product, or Process. Note that more than one of these buckets may be part of the root cause.
- Close the loop — Remember that, especially with dissatisfied customers, these are opportunities to take a difficult situation and try to make it right. By following up with the customer you won’t win ‘em all but you’ll win some. Also, be sure to close the loop with your agents, coaching them on what they can do differently the next time.
- Gather insights — Key to improving your customer experience is understanding what’s driving any negative experiences, quantifying those issues, and then taking action to improve them.
By being intentional about the questions you ask your customers about their experiences with your company, and then taking meaningful and strategic action based on their feedback, you’ll maximize the return from your voice of the customer program. While the decision to measure CSAT, NPS, and/or CES may vary depending on the nature of your business, the holistic approach you take to those results shouldn’t vary much.
Director of Customer Experience
Jeremy Watkin is the Director of Customer Experience for FCR. He has more than 18 years of experience as a customer service, customer experience, and contact center professional. He is also the co-founder of and a regular contributor to Customer Service Life. Jeremy has been recognized many times for his thought leadership. Follow him on Twitter and LinkedIn for more awesome customer service and experience insights.