Can Your Quality Assurance Team Really Monitor 100% of Calls? Should It?

A client recently asked our team an interesting question: “Is it possible to quality monitor 100% of our telephone calls?” It’s a great question. To answer it well, we first need to understand the time commitment of a contact center quality process, understand why it matters, and then evaluate some of the options currently available on the market.

If you’re not familiar with quality assurance (QA), it’s an essential process in every contact center. It typically involves a form that reviewers use as they listen to calls and rate them against a set of criteria. They then sit down with their agents to review the interactions and coach them on ways to improve.

The numbers likely vary from company to company, but for phone support a QA team might review anywhere from six to a dozen calls per agent per month. Let’s do some quick math to see what percentage of the total call volume the quality team is actually monitoring.

[Table 1: Calls reviewed per agent per month as a percentage of total call volume]

At six calls per month, you’re reviewing just 0.6% of the overall call volume. Now, let’s consider the time commitment required to handle this current QA workload.

[Table 2: Monthly time required to complete those QA reviews]

Thirty minutes per QA monitor is a lot of time, and it’s important to note that depending on the complexity of your quality process, a reviewer may listen to a call more than once.

Another important consideration is who’s doing the QA monitoring for the team and how much of their time it takes. Larger operations might have dedicated quality teams, while for others the responsibility falls on supervisors, who have to balance reviews with their other tasks. Let’s look at what this time commitment involves.

[Table 3: Supervisor time spent on QA as a share of their total hours]

I’ve made a lot of assumptions here, but you can see that QA can eat up more than a quarter of a supervisor’s time in the contact center, and that’s to review a mere 0.6% of your overall call volume. At this rate, reviewing 100% of interactions would require a huge expenditure of both time and money. Is it even realistic?
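
If it helps to see the arithmetic laid out, here’s a rough back-of-the-envelope sketch in Python. The review count and minutes per review come from the scenario above, and the call volume is implied by the 0.6% figure; the team size and supervisor hours are hypothetical placeholders I’ve chosen for illustration, not figures from the original tables.

```python
# Back-of-the-envelope QA coverage math (illustrative assumptions, not benchmarks)

calls_per_agent_per_month = 1000   # implied by 6 reviews being ~0.6% of volume
reviews_per_agent_per_month = 6    # low end of the 6-12 range above
minutes_per_review = 30            # from the scenario above

agents_per_supervisor = 15         # hypothetical team size
supervisor_hours_per_month = 160   # hypothetical available hours

coverage = reviews_per_agent_per_month / calls_per_agent_per_month
review_hours = agents_per_supervisor * reviews_per_agent_per_month * minutes_per_review / 60
share_of_supervisor_time = review_hours / supervisor_hours_per_month

print(f"Coverage of call volume: {coverage:.1%}")                     # 0.6%
print(f"Review hours per supervisor per month: {review_hours:.0f}")   # 45
print(f"Share of supervisor time: {share_of_supervisor_time:.0%}")    # ~28%
```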

Reasons to Monitor 100% of Calls

It really isn’t outlandish for a client to ask us how they can review 100% of their calls. Here are a few very good reasons they’d want this:

Quality Control

With only a handful of reviewed calls per month, it’s natural to worry that you’re not getting a clear picture of the contact center’s performance. It seems reasonable to think that the more interactions we review, the better we can coach our agents to be consistently at their very best. What if you had a way to ensure that consistency on every customer interaction and catch negative patterns before they cause too much harm?

Churn Safety Net

I’ve often told my customer service team members that they may very well be the only representative from our company that a customer ever speaks with. That means if their attitude is off or they give a wrong answer, the relationship with that customer might be forever damaged. The thought of not getting a second chance to make it right with a customer was downright terrifying for me as a manager.

Voice of Customer

Business leaders crave customer insights. They want to know what customers are saying, doing, and thinking. No one has more insight than the frontline customer service team that’s interacting with customers all day, every day. But how do we move from anecdotal feedback to data-backed insight that illuminates what’s really going on in the customer experience?

Each of these reasons on its own makes it understandable why companies would want a solution that reviews 100% of their customer interactions. The ability to monitor quality, minimize churn, and gain valuable customer insight should be the goal for any organization, right?

Quality Tools Available

Let’s take a look at some of the QA technologies on the market. I see these fitting into three groups. This is by no means a comprehensive view of the space, just a survey of some of the popular tools I’ve encountered.

Quality Database

The first and most common tool we see for managing QA is what I would term a QA database. Systems like Scorebuddy and MaestroQA are popular, offering companies the following functionality:

  • Create multiple quality forms for different departments and channels.
  • Tie call recordings to quality monitors for easy review.
  • Manage multiple levels of access (e.g., manager, supervisor, agent).
  • Run efficient calibration sessions.
  • Access detailed reporting.

MaestroQA in particular has a terrific integration with Zendesk that allows you to pull tickets right into the system for review.

These systems run somewhere between $5 and $10 per agent per month, making them a fairly reasonable add-on to your existing systems. While they won’t get you that much closer to the 100% goal, consider that if you’re still using spreadsheets to track quality (and many companies do), a QA database can free up your team to review more interactions and give you valuable visibility into the team’s performance.

Speech and Text Analytics

The next group of solutions is more in the category of workforce optimization systems. These systems offer a full suite of services including QA, scheduling, workforce management, and more. They include companies like Verint, CallMiner, Virtual Observer, and NICE. Focusing on their QA offering specifically, here are some of the features they provide:

  • Quality database including forms, access levels, calibrations, and reporting similar to the above.
  • Transcription of calls into text and the ability to analyze them for patterns and keywords. For example, easily find how often customers mention canceling and drill down to understand the context (see the sketch after this list).
  • Insight into customer sentiment, so you can see when customer or agent tone is elevated and target interactions where the customer might be close to churning or the agent might need additional coaching.
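
To make that keyword idea concrete, here’s a minimal sketch in plain Python of what a “how often do customers mention canceling” query looks like once calls have been transcribed. This is not the API of any of the vendors above; the transcripts are made up, and it’s simply an illustration of the pattern these suites automate at scale.

```python
import re

# Hypothetical transcripts, as a speech analytics tool would produce from recordings
transcripts = {
    "call_001": "I want to cancel my subscription, the billing has been wrong twice.",
    "call_002": "Thanks, that fixed it. You folks are great.",
    "call_003": "If this keeps happening I'm going to cancel and switch providers.",
}

# Simple keyword/pattern search: which calls mention canceling, and how often?
cancel_pattern = re.compile(r"\bcancel(?:l?ing|l?ed)?\b", re.IGNORECASE)

flagged = {call_id: len(cancel_pattern.findall(text))
           for call_id, text in transcripts.items()
           if cancel_pattern.search(text)}

print(f"{len(flagged)} of {len(transcripts)} calls mention canceling: {flagged}")
# A real analytics suite layers sentiment, tone, and drill-down context on top of this idea.
```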

These solutions do much more than I’ve represented here, but it’s clear that they offer more functionality than a simple QA database. They are also a much more significant expenditure. To get the most out of them, you’ll want to make sure you have folks on your quality and customer experience teams with an analytical skill set.

Automatic Call Grading with Machine Learning

The final set of solutions on the market harnesses the power of AI and machine learning. Gridspace and Simple Emotion are a couple of companies working to make reviewing 100% of calls a reality. The process involves providing them with a quality form along with several thousand call recordings that have already been monitored by your QA team. With enough data, their systems can learn to rate calls automatically, eventually without human assistance, giving you complete visibility into the quality of your team’s work.
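
As a thought experiment, here’s what the core of that “learn from graded calls” idea might look like in a few lines of Python with scikit-learn. This is emphatically not how Gridspace or Simple Emotion work internally; it’s a toy sketch assuming you already have call transcripts paired with pass/fail outcomes from your human QA team.

```python
# Toy sketch: predicting a QA outcome from transcripts your reviewers have already graded.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: transcripts already scored by human QA reviewers
transcripts = [
    "Thanks for calling, I can definitely help you with that today.",
    "I don't know, that's not my department, call back later.",
    "Let me confirm I resolved everything before we hang up.",
    "Whatever, the policy is the policy, nothing I can do.",
]
qa_passed = [1, 0, 1, 0]  # 1 = met the quality standard, 0 = did not

# Turn transcripts into word features and fit a simple classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(transcripts, qa_passed)

# Score a brand-new, unreviewed call
new_call = "Happy to help, let me make sure this is fully resolved for you."
print(model.predict_proba([new_call])[0][1])  # estimated probability the call passes QA
```

In practice you’d need thousands of graded examples, richer features, and ongoing calibration by humans, which is exactly why these vendors ask for a large library of monitored calls up front.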

These systems have already found success in more scripted contact center environments. The real opportunity is to pair a quality database with machine learning, so calls are fed into the system and quality teams continually train the machine on how to review them. These tools are at the forefront of what I think will be a big part of the future of QA in the contact center.

Recommended Next Steps

As you consider these various technologies and next steps, here are a handful of recommendations that I think are relevant regardless of the size of your organization or budget.

  1. Don’t forget about Kaizen – I love the Japanese word, Kaizen, which means “continuous improvement.” Before going straight from reviewing a handful of interactions to 100%, consider that moving from spreadsheets to a real QA database is a big improvement and allows your quality team to review more interactions — especially for some of your lower performers.
  2. Give your culture attention – In my last column I shared a recipe for consistent customer service. If the desire to review 100% of interactions comes from a place of distrust in your customer service team, it’s time to evaluate your company culture and make sure they’re empowered to deliver great customer service.
  3. You can get a holistic view of the voice of the customer with just a bit of creativity – If your goal is visibility into what customers are saying, the more insight you can get from interactions, the better. Start with post-interaction surveys, agent round-table discussions, and even the simple tweak of asking your QA team to note customer insights on each call, and you’ll quickly build a good picture of the customer experience.
  4. Like any tool, be sure you use it – You’ll only get value from the technologies I’ve shared if you use them consistently. Be sure to devote the appropriate technical resources to implementation, and empower your QA team to move beyond checking boxes on a form and focus on making the customer experience better.

To answer the question our client posed at the beginning of this article: yes, it’s entirely possible to review 100% of the calls in your contact center, and with the assistance of some incredibly exciting technology, that goal is becoming more and more of a reality. With the right culture and creativity, however, it might not be as urgent as you think. Wherever you are in this process, don’t forget to stay focused on why you have a quality process in the first place: it’s all about continuously improving your customer experience.

Jeremy Watkin
Head of Quality
FCR

Jeremy Watkin is the Head of Quality at FCR. He has more than 15 years of experience as a customer service professional. He is also the co-founder of and a regular contributor to the Communicate Better Blog. Jeremy has been recognized many times for his thought leadership. Follow him on Twitter and LinkedIn for more awesome customer service and experience insights.
