When people talk about quality assurance, the first thing that comes to mind is QA in manufacturing. As consumers, we all confronted the quality problems of American-made products when higher-quality goods from overseas made their way into our everyday lives. Consumers knew that if they wanted a trouble-free car, Japanese was the way to go. American manufacturers responded with vigor and have turned the tables in the quality game. Competition brought out the best because consumers demanded higher-quality products at reasonable prices.
In contrast, the service industry is in its infancy in defining quality. However, we all agree that the cornerstone of a quality initiative in the call center industry has been, and will remain, the human interaction with customers over the phone. During the past decade, a plethora of technologies, such as digital recording, have been developed for monitoring the quality of customer contacts over the phone, but those technologies have been aimed at managing compliance rather than managing performance.
The manufacturing approach to quality assurance does not apply to the telephone. Telephone interaction is a dynamic environment that is difficult to measure. When we measure compliance, it is relatively easy to score whether a customer service representative read a script verbatim or used the correct rebuttal. It is far more challenging and subjective to measure whether a CSR spoke at the right tempo, used the right intonation or truly listened to a customer. I wouldn't expect the health inspector at a restaurant to have the tools or knowledge, nor would I trust his judgment, to tell me whether the food tastes good. I leave that up to the local restaurant columnist.
My theory is that the subjective areas of quality are what really make the difference in performance. And because these areas are difficult to measure, the returns also can be much higher. There is a reason that good food critics are paid handsomely for their opinions, while the food inspector is a government employee. So what are the key elements to ensure that our quality "tastes good"?
First and foremost, understand your goal for the monitoring initiative. There needs to be a preset action plan with a fixed target. There are many facets and goals to strive for, including improving sales per hour, product knowledge, tone of call and certainly compliance. However, the goal of the initiative needs to be crystal clear, and it needs to be communicated to everyone who touches the campaign.
Not all monitoring initiatives are the same. Don't try to put a square peg into a round hole. Specific scoring and measurement criteria should be created for each calling scenario. The scoring criteria for a wine connoisseur are going to be different from the criteria for a beer taste tester. We certainly can't expect the scoring criteria to be the same for an inbound and an outbound program.
Trend your monitoring results. This provides both statistical and marketing feedback to ensure the set goals are met and the program is constantly improving. Without trending, process improvement is impossible. You cannot know where to go if you don't know where you started. We may also be surprised by what we are capable of, but unless it is measured, we will never know.
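The trending idea above can be sketched in a few lines of code. This is only an illustrative example, not part of any monitoring product; the function name, the weekly scores and the 100-point scale are all hypothetical.

```python
from statistics import mean

def trend_scores(weekly_scores):
    """Compare the average QA score of the first half of a monitoring
    period against the second half to flag improvement or decline.
    `weekly_scores` is a list of average scores, one per week
    (names and numbers here are illustrative, not a standard)."""
    mid = len(weekly_scores) // 2
    early, late = mean(weekly_scores[:mid]), mean(weekly_scores[mid:])
    delta = late - early
    if delta > 0:
        return "improving", delta
    if delta < 0:
        return "declining", delta
    return "flat", delta

# Eight weeks of hypothetical average scores on a 100-point form
direction, change = trend_scores([78, 80, 79, 82, 84, 85, 87, 88])
# direction is "improving"; change is the gain in average score
```

Even a simple comparison like this answers the question the paragraph poses: you know where you started, so you can tell whether you are moving toward the goal.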
Calibration. Don't put the quality program in place and assume it will remain constant. Programs change, customers change and call center employees change. Ensure that all interested parties are delivering the same message and listening for the same things. Calibration should include everyone involved, from the center level to management. If the program is being performed by a third-party agency, this process can get muddy, especially if a third-party monitoring firm is included in the mix. However, Web-based technology is available to help the process work more effectively because all interested parties have access to each other's comments in real time. A Web-based application creates a built-in calibration tool with significantly less effort than traditional calibration methods such as conference calls. It is also a great tool for a client to monitor a third-party quality assurance firm and be sure that specific goals are being met, in essence performing QA on a QA firm.
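One simple way to check whether a team is calibrated is to have everyone score the same recorded call and measure the spread. The sketch below is a hypothetical illustration; the evaluator names and the five-point tolerance are assumptions for the example, not an industry standard.

```python
def calibration_gap(scores_by_evaluator, tolerance=5):
    """Given the scores several evaluators gave the SAME recorded call,
    return the spread between the highest and lowest score and whether
    it exceeds an agreed tolerance (5 points here, purely illustrative).
    A large spread suggests the team needs a calibration session."""
    values = list(scores_by_evaluator.values())
    spread = max(values) - min(values)
    return spread, spread > tolerance

# Hypothetical scores for one call from the center, the client
# and a third-party monitoring firm
spread, needs_session = calibration_gap(
    {"center_qa": 92, "client": 84, "third_party": 88}
)
# spread is 8 points, so this team would be flagged for calibration
```

Run periodically, a check like this gives an objective trigger for the calibration sessions the paragraph describes, instead of waiting for scoring disputes to surface on their own.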
Remember who is being monitored. When an outsourced monitoring organization, or the client, is monitoring, only a small percentage of calls are being listened to. The feedback should come in real time to help coach the specific teleservices sales representatives who were monitored. However, the true coaching should be done by the QA personnel at the center level. We all know that all politics is local and elections are won or lost at the grass-roots level. The same is true of telemarketing. The goal of a monitoring session should be not only to improve the quality of the TSR being evaluated, but also to hold the center accountable for improvements across all the people working on a program. The goal should be to train the people ultimately responsible for the end product on what to listen for and how to improve. These are the people who will implement the plan, and QA should be used to measure whether improvements are being implemented at the center level.
Quality assurance can sometimes be an elusive objective. Defining that objective is the most important part of the process. Is the objective compliance, performance or both? Obviously, making sure the food is not poisoned is a whole lot easier than measuring whether it tastes good. So performance enhancement is often ignored or poorly executed because of its inherent challenges. However, if we are making the effort to monitor for compliance, wouldn't it make sense to take the extra step and monitor for performance as well? With enough time, the right system and sustained effort, the goal is achievable.