Monitoring Takes On CRM Qualities
The tool, which is from Dictaphone, is an extension of its existing computer-telephony integration/monitoring product line. It is currently in beta-testing while the company works out interface and customization issues.
What was striking was that Dictaphone has combined traditional monitoring with a full-scale data-mining tool. It is using a mining engine perfected by a company outside the call center field. This company created data-mining software for sports teams -- it's used by National Basketball Association teams to chart every single action within a series of basketball games and then discern patterns that lead to successful, unsuccessful or otherwise interesting outcomes. These would be patterns that the coaching staff would never be able to see for themselves.
The mining engine coordinates the game films, in digital format, so that coaches can visualize those patterns in actual game situations. They can zoom right to the appropriate moments in the videotape. Sound familiar? It's much like a supervisor zeroing in on a particular phone call or sequence of calls based on a pattern of agent or customer behavior.
Which is where we get to the monitoring tool. Without dwelling on Dictaphone's tool, the idea of data mining is a fascinating one when it comes to the call center. Often, when people speak of finding patterns within call center-related data, it's based on someone outside the call center sifting through the customer data records, usually from a marketing point of view.
But what if you could take all that customer information and combine it with two other important streams -- the performance data from a monitoring system and the call detail information from the automatic call distributor? Then you'd have a really useful pile of data, if only you could make sense of it. That's where the data-mining tools come in, especially since most of the available monitoring tools now include some capability for "screen scrape," or gathering the data that pass through the agent desktop application during the call.
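As a sketch of what combining those streams might look like, here is a minimal Python example. Everything in it is an assumption for illustration -- the field names, the idea that each system can export records keyed by a shared call identifier -- since real CRM, monitoring and ACD exports all differ:

```python
# Hypothetical sketch: joining three call center data streams by call ID.
# The "call_id" key and every field name are illustrative assumptions;
# no real vendor export is being modeled here.

crm_records = {
    "C1001": {"customer": "Acme Corp", "segment": "enterprise"},
}
monitoring_scores = {
    "C1001": {"qa_score": 87, "screen_events": 14},
}
acd_detail = {
    "C1001": {"duration_sec": 312, "transfers": 1},
}

def merge_call_data(call_id):
    """Combine the three streams into one record a mining tool could use."""
    merged = {"call_id": call_id}
    for stream in (crm_records, monitoring_scores, acd_detail):
        merged.update(stream.get(call_id, {}))
    return merged

record = merge_call_data("C1001")
```

The point of the merge is simply that once the customer, quality and call-detail views share a key, a pattern that spans them -- say, long calls with transfers and low quality scores in one customer segment -- becomes something software can look for.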
Let's stipulate that data mining is a good idea; others can argue over its merits. What I want to get at is the potentially profound mind shift it forces on the center: for it to work well, you literally have to record every call.
In practice, that's a sharp break from common random monitoring. Most people set up their system to record a certain percentage or number of calls at random intervals, and of those recorded calls, some or all are used for evaluation purposes. There are, of course, applications that require total recording for auditing purposes, but those are pretty specialized. In the typical call center, the notion of total monitoring is something of a new idea.
Storage is nearly limitless and inexpensive now, so that's not an issue. Screen scrape poses a different problem -- the obstacle to total recording is not storage space but local area network bandwidth. Pushing screen scrape beyond random sampling degrades the responsiveness of everything else traveling over that LAN, including the agent desktop applications.
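A rough back-of-envelope calculation shows why the bandwidth objection bites. Every number below is an illustrative assumption -- the agent count, the per-agent capture rate and the LAN capacity are placeholders, not measurements from any real center:

```python
# Illustrative arithmetic only: all figures are assumptions chosen to
# show how quickly total screen capture can saturate a shared LAN.

agents = 200                  # concurrent agents being captured (assumed)
kbps_per_capture = 64         # assumed screen-scrape stream per agent
lan_capacity_kbps = 10_000    # a shared 10 Mb/s LAN segment (assumed)

total_kbps = agents * kbps_per_capture
share_of_lan = total_kbps / lan_capacity_kbps

# With these assumptions, capture traffic alone exceeds the whole segment,
# which is exactly the degradation the agent desktop applications would feel.
```

Change any of the inputs and the share changes, but the shape of the problem does not: capture load scales linearly with headcount, while the LAN does not.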
So how will people react to the idea of total monitoring? Is there going to be a discernible agent response to something that's more pervasive and less clearly based on quality assurance and performance evaluation factors?
Monitoring is a lot less controversial than it used to be. I think that's because of the increased use of "fairness" tools, automated systems that are used to ensure random recording and calibrated scoring of agent performance. Those tools take a lot of the sting out of having a supervisor listening in over your shoulder. We also have progressed quite far from the days when monitoring was literally an "over-your-shoulder" experience, with the supervisor jacking a headset right into the agent console and listening in on the call. If anything could ensure a jittery and atypical interaction, it was that.
You can make a pretty fair case that total monitoring is just as fair as random recording. You can't argue that you're being singled out if it's happening everywhere all at once.
It still needs to be explained to agents, however, that total recording doesn't mean total evaluation of every call -- that there are still random factors involved in the selection.
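That selection step can be as simple as drawing a random sample from the archive of recorded calls. A minimal sketch, in which the archive is a hypothetical list of call IDs rather than anything pulled from a real recording platform:

```python
import random

# Sketch: every call is recorded, but only a random subset is evaluated.
# The archive below is a made-up list of call IDs for illustration.

archive = [f"call-{n:04d}" for n in range(1, 501)]  # 500 recorded calls

def pick_for_evaluation(recorded_calls, sample_size, seed=None):
    """Draw a random sample of calls to score, without replacement."""
    rng = random.Random(seed)
    return rng.sample(recorded_calls, sample_size)

to_score = pick_for_evaluation(archive, sample_size=25, seed=42)
```

Because the draw is uniform over every recorded call, no agent's calls are more likely to be scored than anyone else's -- which is the fairness point worth explaining to the floor.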
It seems a good time to reiterate the pros and cons of monitoring in the first place. The benefits are:
You create standards of performance and a way to assess whether those standards are being met. Not only does this give you better measurements of your workforce, but it tells you if your standards are absurdly high and therefore unattainable.
It allows you to correlate training regimens with those people who actually need them. It also helps you build teams by discerning patterns of complementary skills among reps of different abilities.
It creates an audit trail for the customer record, letting you see exactly how an interaction was handled and find patterns in how customer interactions go off the rails.
Automated, random monitoring ensures that no agent gets an unfair burden of monitoring, and it ensures that promotions and incentives are doled out according to impartial measurements.
Capturing the screen of the agent, as well as recording the audio part of a call, shows you more about what that agent knows and how he or she follows procedures.
And as noted, technology now becoming available lets you search larger pools of data -- coordinated among audio recordings, screen scrapes and added ACD data about call duration and transfer pathways -- for patterns that tell you more about your center as a whole, not just an individual agent's performance.
On the con side, remember:
People do not like monitoring. No matter how you present it to the rep, monitoring can be seen as intrusive and unfair. All the productivity and audit-trail benefits in the world won't make someone like being observed and recorded.
It's possible to abuse monitoring by not calibrating the responses and scoring among supervisors.
Recording calls automatically isn't going to be worth much unless you also use one of the many good quality-assurance software tools for consistent scoring.
Those things being said, it's encouraging to see all these data pathways coordinated in a way that gives equal weight to the performance of the agent during the actual call in assessing customer satisfaction. People talk about customer relationship management, but that category pays scant attention to the reams of data contributed by even traditional random monitoring.