A Miscellaneous Lot of Thoughts on Quality Scoring

Over the past year or so I’ve had many conversations about contact center quality assurance — some of which I’ve shared in past articles. As I was scanning my list of blog post ideas, I realized I had some miscellaneous thoughts that fit under the quality umbrella and it’s about time we discuss them. Without wasting more time on a clever introduction for this article, here are my thoughts.

Does extra credit belong on a quality form?

I’ve spent significant time of late helping support teams move their quality forms from Google Forms and spreadsheets into a quality application, and one of the unforeseen obstacles we’ve had to work through is extra credit or bonus questions. The tool we use doesn’t allow a quality score to exceed 100%, which more or less eliminates these questions from the conversation.

There’s still a philosophical issue to discuss, however. As I investigated further, I learned that on some forms agents could earn extra credit for “Going above and beyond for the customer” or for “WOWing them.” In my opinion, we can thank a handful of customer service news stories that somehow conditioned us to think that customer service isn’t great unless it goes viral. When I asked our leaders how often extra credit was actually earned, the answer was almost never.

We must never forget that if we can consistently be friendly, provide thorough and accurate responses, and set appropriate expectations without compromising the trust, safety, and security of our customers, we’re likely providing incredible customer service.

So what’s the point of extra credit? I recommend leaving it off of your quality forms and instead setting a reasonable but high standard and empowering agents to consistently do their job well.

What about auto fails and penalties?

On the other side of extra credit are penalties and automatic failures. Penalties subtract points beyond the weighted value of a question and automatic failures zero out the review score entirely.

I know this sounds severe, but I think there’s a time and place for such questions. It does require some careful thought, however. We first need to consider the items on quality forms that have a greater impact on a customer interaction than others. These are the opposite of what I mentioned earlier: being downright rude and unprofessional, giving out wrong or incomplete information, and compromising security or compliance practices (think PCI and HIPAA).

For those things, it’s important that agents understand the severity of a miscue. Why? Here’s a short list:

  • It puts customers and the business at risk for lawsuits, hacks, and severe penalties from regulating bodies.
  • It increases the likelihood of customer churn.
  • It renders that customer interaction pointless because the customer will have to contact the company again to get their issue resolved.

All three of these problems cost companies a lot of money.

What’s my approach? I generally avoid penalties on quality forms because I haven’t worked with a quality tool that allows for the subtraction of points. I’m also not a huge fan of elaborate points systems — but I’ll get to that more in the next point. I recommend either increasing the weight of a given question to have a greater impact on the overall score or, if it’s severe enough, make it an auto fail where the agent earns a zero for the interaction.
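The mechanics above can be sketched in a few lines of code. This is a minimal, hypothetical scoring function — the names and structure are illustrative, not taken from any particular QA tool — showing how weighted questions roll up to a percentage and how an auto-fail question zeroes out the review:

```python
def score_review(answers, weights, auto_fail_questions):
    """Score a quality review as a percentage.

    answers: dict of question id -> True (pass) / False (miss)
    weights: dict of question id -> point value
    auto_fail_questions: set of question ids that zero out the score
    (All names here are illustrative, not from a specific QA tool.)
    """
    # A miss on any auto-fail question zeroes the whole review.
    for q in auto_fail_questions:
        if not answers.get(q, True):
            return 0.0

    total = sum(weights.values())
    earned = sum(w for q, w in weights.items() if answers.get(q, False))
    return round(100 * earned / total, 1)
```

Note how increasing a question’s weight (say, giving “accurate information” 40 of 100 points) raises its impact on the final score without any separate penalty system — which is exactly why I find elaborate point-subtraction schemes unnecessary.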

What’s the point of quality scores anyway?

One of my favorite conversations is with contact center leaders who don’t see a point in quality scores at all. They’d rather review customer interactions and discuss what the agent did well and where they can improve. They believe that quality assurance is about coaching agents and driving continuous improvement over time, and that this supersedes a score.

Any seasoned coach has seen the look in an agent’s eye when they’re handed a quality review and they go straight to the score. If it’s a passing score, they might completely check out and not think they have anything to learn. If it’s a failing score, the fight or flight response engages in their brain, they immediately go on the defensive, and any coaching opportunity is lost. That’s enough for some to either stop showing scores altogether or at least go over all of the feedback with the agent before showing a score.

Before you throw out quality scores altogether, however, remember that the greatest value of a quality score is the ability to track improvement over time. You should be able to drill down and track how often individuals and teams properly execute the various objectives on the form during customer interactions. For example, you might find that Bob properly greets customers 99% of the time on the phone but only verifies customers’ identities 75% of the time. Clearly, verifying customers should be a priority when coaching Bob. If you take a step back and see that the team as a whole is struggling to consistently verify customers, it’s time to invest in some refresher training.
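That drill-down is just a per-question pass rate across reviews. Here’s a small sketch — the data shapes are assumptions for illustration, not a real QA export format — that turns a pile of completed reviews into the kind of per-objective percentages described above:

```python
from collections import defaultdict

def pass_rates(reviews):
    """Compute per-question pass rates across many reviews.

    reviews: list of dicts, each mapping question id -> True/False.
    Returns a dict of question id -> percent of reviews that passed.
    (Data shapes are illustrative, not a real QA tool's export.)
    """
    passes = defaultdict(int)
    totals = defaultdict(int)
    for review in reviews:
        for question, passed in review.items():
            totals[question] += 1
            passes[question] += passed  # True counts as 1
    return {q: round(100 * passes[q] / totals[q], 1) for q in totals}
```

Run the same function over one agent’s reviews to coach the individual, or over the whole team’s reviews to spot the systemic gaps worth a training investment.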

Where does quality calibration come into play?

In a past article I wrote about the importance of a quality calibration process to ensure that everyone is aligned and scoring customer interactions the same way. The more people you have scoring calls, the more essential this process becomes. This ensures that your quality data is accurate.

My preferred method for calibration is to have all attendees review and score customer interactions ahead of time. The calibration meeting then becomes a time where everyone comes together, compares results, and discusses where they differed. Out of that discussion comes an agreed-upon score, and the average difference between each reviewer’s rating and the calibrated score is the variance. The goal of these sessions should be to reduce that variance.

I find that this method gives the most honest rating possible and is a true gauge of where everyone on the team stands.
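For the arithmetic-minded, the variance described here is simply the average absolute gap between each reviewer’s score and the agreed-upon score. A minimal sketch (function and variable names are my own, not from any calibration tool):

```python
def calibration_variance(reviewer_scores, calibrated_score):
    """Average absolute gap between each reviewer's score and the
    agreed-upon calibrated score for one interaction.

    reviewer_scores: list of scores (e.g. percentages) from attendees
    calibrated_score: the score the group agreed on in the session
    """
    gaps = [abs(score - calibrated_score) for score in reviewer_scores]
    return sum(gaps) / len(gaps)
```

If three reviewers scored an interaction 80, 90, and 85 and the group settles on 85, the variance is the average of the gaps 5, 5, and 0. Watching that number shrink session over session is the sign your team is scoring consistently.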

I’ve been in enough quality assurance discussions both inside and outside of our organization to know that there’s a broad range of opinions on these topics. If you have any thoughts, I’d love to hear them. Leave a comment or question below and we’ll discuss further.

Jeremy Watkin
Director of Customer Experience
FCR

Jeremy Watkin is the Director of Customer Experience for FCR. He has more than 18 years of experience as a customer service, customer experience, and contact center professional. He is also the co-founder of and a regular contributor to Customer Service Life. Jeremy has been recognized many times for his thought leadership. Follow him on Twitter and LinkedIn for more awesome customer service and experience insights.

1 Comment

Thomas Siebert
12/10/2018 4:58 pm

Hi Jeremy
Enjoyed your QA thoughts.
I wanted to offer a different perspective re ‘purpose’ of QA. I’ve led single call centers of 100 agents and 5 centers (on/offshore, owned/sourced) with 1,400 agents handling 25,000 calls a day.
Yes, QA is for trending performance, but more importantly, QA programs create a corporate measurement standard of expectations.

If the call center operation is consistently delivering a well calibrated score of 80% or better, then they are delivering the corporate standard, approved and directed policy and procedure in the manner prescribed.

Importantly, operations is in control of quality performance. If QA is at 80% or better, then operations is delivering the customer experience the executive team desires. This means operations is responsible for QA performance and the executive team for the customer experience. If the executive team wants to improve or change the customer experience, then they need to modify the policy and procedure that operations is charged with delivering.
If QA measurement is eliminated, then the call center operations team has no standard. Customers, and how they present their perspective of the customer experience, can vary dramatically by season, by product type, by age, by skill, by experience, and by many other attributes.
Without a standard, the agent will never know their specific target when their performance is judged by the highly variable customer instead of by a binary, well-designed, and simple QA program with 10 or fewer smart questions.
