It is true that a good customer experience leads to more business and better “word-of-mouth marketing”. Certainly, we all know that in the era of social media, a single negative customer interaction can turn into a public relations nightmare, says Deon Scheepers, strategic consultant EMEA at Interactive Intelligence.
All of the contact centre metrics people use to measure “service” are proxies for this most-important-of-all contact centre score. Service level and average speed of answer (ASA) are maintained because we believe that long wait times lead to customer dissatisfaction.
Abandons are a great proxy for customer satisfaction, because a customer who hangs up is, almost by definition, not happy with their wait time. Agent quality scores are maintained because we want every interaction with our customers to be consistently excellent, and the agent quality score is the mechanism we use to ensure that consistency.
Different flavours of experience metrics
There are as many “best” customer experience metrics as there are customer experience consultants. Common examples include customer satisfaction, first call resolution, net promoter score, and agent quality score, among others.
Internally, the experience scores a company focuses on can vary from one business unit to the next. And even when the scores are called the same thing, they will almost always be calculated using different algorithms. This, of course, makes perfect sense: different customers calling the same company are contacting its contact centres for different purposes.
The experience should therefore be attuned to the purpose of the contact.
How can planners use customer experience metrics?
Customer experience scores exhibit seasonality, trends, and differences across contact centres. What does this mean to us planning analysts?
Data streams that exhibit this sort of behaviour are similar to many of the other time-series data we typically work with, like contact volumes, handle times, attrition, and shrinkage. We analysts cut our teeth developing forecasts of items that look just like experience data. This means that we should be able to forecast experience data streams.
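As a concrete illustration, here is a minimal sketch of forecasting such a series in Python, assuming weekly experience scores are stored in a CSV file. The file name, column names, and Holt-Winters settings are hypothetical choices, not a prescribed method.

```python
# A minimal sketch: forecasting a weekly customer-experience score with
# Holt-Winters exponential smoothing. File and column names are hypothetical.
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Weekly average experience score for one centre, indexed by week
scores = pd.read_csv("cx_scores.csv", parse_dates=["week"],
                     index_col="week")["score"]

# Additive trend plus a 52-week seasonal cycle, mirroring how contact
# volumes and handle times are often forecast
model = ExponentialSmoothing(scores, trend="add", seasonal="add",
                             seasonal_periods=52).fit()

# Project the next 13 weeks (one quarter) of expected scores
print(model.forecast(13))
```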
This adds yet another dimension to planning. If we forecast customer experience scores by centre and staff group, we can use these new forecasts in a host of ways.
First, we can draw out the week-over-week customer experience trends, simply to see where we are heading. These forecasts then set executive-level expectations. If the trends are favourable, we can confirm that expectations are being met. If they are trending in the wrong direction, we know that our current path needs to be adjusted. In effect, this time-series experience data acts as our early warning device.
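One way to operationalise that early warning is sketched below, assuming scores is the weekly series from the earlier example. The four-week smoothing window and three-week decline rule are illustrative thresholds, not a standard.

```python
# A minimal early-warning check on the week-over-week trend. Assumes
# `scores` is the weekly pandas Series from the earlier sketch.
trend = scores.rolling(window=4).mean()   # smooth out weekly noise
delta = trend.diff()                      # week-over-week change

# Flag any run of three consecutive declining weeks
declining = (delta < 0).rolling(window=3).sum() == 3
if declining.any():
    first = declining[declining].index[0]
    print(f"Warning: scores in sustained decline since {first:%Y-%m-%d}")
```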
Similarly, forecasts, and the expectations they set, serve to soothe executives too. If we traditionally see a seasonal dip in customer experience scores, then we shouldn’t be too alarmed when it comes to pass this year as well. Better still, if we expect a seasonal dip in experience scores, we may be able to head it off this year by developing an agent training programme in time.
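To see where that dip typically lands, a simple seasonal decomposition can isolate the recurring pattern. The sketch below assumes the same weekly scores series with at least two full years of history; the additive model is an illustrative choice.

```python
# A minimal sketch: isolating the seasonal pattern in weekly scores.
# Assumes `scores` from the earlier example, with 2+ years of history.
from statsmodels.tsa.seasonal import seasonal_decompose

parts = seasonal_decompose(scores, model="additive", period=52)

# The weeks with the deepest seasonal dip are the candidates for
# scheduling agent training ahead of time
print(parts.seasonal.nsmallest(5))
```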
Another great use of a customer experience forecast is as a point of comparison. The best companies view all of their forecasts (volumes, handle times, attrition, shrinkage and so on) as a baseline for variance analysis.
As weekly performance data is tallied, it can be compared to the forecast.
Any difference between forecast and actual performance implies that something has changed. If we are forecasting and tracking customer experience scores, any deviation should be noted, explained, and potentially acted upon. For this sort of analysis to have any meaning, actuals must be compared against seasonally adjusted customer experience forecasts.
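Here is a minimal variance-analysis sketch, using small hypothetical numbers to stand in for a real seasonally adjusted forecast and the actuals tallied against it. The 3% tolerance is an arbitrary illustrative threshold.

```python
import pandas as pd

# Hypothetical aligned weekly series: the seasonally adjusted forecast
# and the actual scores tallied for the same weeks
weeks = pd.date_range("2015-01-05", periods=4, freq="W-MON")
forecast = pd.Series([82.0, 81.5, 83.0, 82.5], index=weeks)
actuals = pd.Series([81.8, 78.9, 83.2, 82.6], index=weeks)

# Relative deviation of actual performance from the forecast baseline
variance = (actuals - forecast) / forecast

# Flag weeks deviating by more than 3% in either direction (illustrative)
for week, pct in variance[variance.abs() > 0.03].items():
    print(f"{week:%Y-%m-%d}: {pct:+.1%} vs. forecast: note, explain, act")
```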
Customer service forecasts lead to better planning
The final and most interesting use of customer experience forecasts is as an input into the staff planning process. Interactive Intelligence has heard from several customers that customer experience scores are used to help allocate their calls amongst their competing call centre vendors.
Those companies are actively attempting to improve their customer satisfaction by sending more contacts to those vendors who score best.
Who can blame them? But there is also no reason why a company couldn’t increase staff levels in its own centres that score well. If improving customer satisfaction is important to a company, and every exec will say that it is, then it makes perfect sense to include customer experience forecasts in the staff planning process and decision-making.
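As a sketch of what score-driven allocation might look like, assume forecast experience scores per vendor are at hand; the vendor names, scores, and proportional-weighting rule below are all hypothetical.

```python
# A minimal sketch: weighting next week's contact allocation by each
# vendor's forecast experience score. All names and numbers are hypothetical.
forecast_scores = {"vendor_a": 86.0, "vendor_b": 78.0, "vendor_c": 91.0}
weekly_contacts = 120_000

total = sum(forecast_scores.values())
allocation = {vendor: round(weekly_contacts * score / total)
              for vendor, score in forecast_scores.items()}

# Higher-scoring vendors receive proportionally more contacts; the same
# logic could weight headcount increases across a company's own centres
print(allocation)
```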
It really is that simple. By developing customer experience time-series data, using that data to forecast expected performance, and applying those forecasts to variance analysis and staff planning, we can greatly improve the customer’s experience.