Dear Subscriber,

Greg Levin’s detailed take on C-Sat surveys follows:

Every day, thousands of people “enter” your contact center. While you hope that 100% of these people receive stellar service and have a wonderful experience, there’s only about a 0.003% chance of that actually happening. That’s why it’s essential to have a solid transactional C-Sat survey process in place. Without a proper post-contact survey, it’s virtually impossible to keep accurate tabs on customer sentiment, take specific action to improve it and keep disgruntled customers from perpetrating hate crimes against your agents.

Following are the key elements of C-Sat survey design and delivery – those that I’ve seen embraced by leading customer care organizations throughout the world.

Survey Design

How many questions? As a general rule, a transactional C-Sat survey should contain no fewer than 4-5 questions and no more than 8-9. Contact centers that make their surveys too short fail to gather sufficient data for spotting key trends and uncovering customer needs/expectations. Those that make their surveys too long find that few customers complete them, or complete them while grinding their teeth and swearing under their breath – which doesn’t bode well for your C-Sat scores.

What types of questions? The best post-contact C-Sat surveys ask customers to rate how satisfied they were with such things as: Their overall service/support experience; the agent’s knowledge; the agent’s professionalism/courtesy; and the speed and clarity with which the agent provided pertinent information. And with first contact resolution being such a key driver of C-Sat, no survey is complete without a question about whether or not the customer felt their issue was fully resolved. NOTE: Be sure to provide opportunities for customers to leave open-ended comments on surveys – after specific questions and/or at the end of the survey – as this enables customers to elaborate on their experience and provide details that can help the contact center take specific actions for service improvement or customer recovery.

What rating scale? For the majority of questions in a post-contact survey, most centers ask customers to rate each area using a scale (e.g., “On a scale of X to Y, with X being the least and Y being the most, how satisfied were you with…?”). The big question many contact center professionals have is which scale to use: 1-4? 1-5? 1-7? 1-10? While none of these scales is perfect and none is inherently bad, I have found that leading contact centers tend to go with either a 1-4 or a 1-5 scale. These provide enough of a range to cover the various levels of customer sentiment without becoming too cluttered and confusing. In the 1-4 variation, 1 = very unsatisfied; 2 = somewhat unsatisfied; 3 = satisfied; and 4 = very/extremely satisfied. In the 1-5 variation, the “3” becomes a “neutral” choice, which can be a good addition but can also allow customers to stay on the fence, thus giving the center little insight into whether these customers are a little satisfied or a little unsatisfied. The 1-4 scale forces customers off the fence and helps to identify customers who might be at risk of defecting to the competition vs. those who are merely at risk of yawning/sighing.
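To make the design concrete, here is a minimal sketch in Python of a transactional survey built on the 1-4 scale and the question areas described above, including the first contact resolution question and an open-ended comment prompt. All names and wording are illustrative assumptions, not taken from any particular survey platform:

    # A sketch of the survey structure described above. All names are
    # illustrative, not tied to any particular survey tool.
    SCALE_1_TO_4 = {
        1: "very unsatisfied",
        2: "somewhat unsatisfied",
        3: "satisfied",
        4: "very/extremely satisfied",
    }

    RATED_QUESTIONS = [
        "How satisfied were you with your overall service/support experience?",
        "How satisfied were you with the agent's knowledge?",
        "How satisfied were you with the agent's professionalism/courtesy?",
        "How satisfied were you with the speed and clarity of the information provided?",
    ]

    # First contact resolution is a yes/no question rather than a rating.
    FCR_QUESTION = "Was your issue fully resolved on this contact? (yes/no)"

    # Open-ended comment field so customers can elaborate on their experience.
    COMMENT_PROMPT = "Is there anything else you'd like to tell us about your experience?"

Note that the sketch lands at six questions plus a comment field – comfortably inside the 4-9 question range discussed above.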

Survey Delivery Options

Live phone surveys. With live phone surveys, a surveyor (typically an objective third-party specialist) calls customers who have very recently completed a call to the contact center and who have agreed to participate in a survey. (NOTE: Most centers ask customers whether they want to take part in a post-call survey before they reach a live agent. Waiting until the end of a call to ask could result in an unbalanced survey sample, as those who agree will likely be only customers who were either very pleased or very displeased with their interaction, with perhaps a few exceptions.) The surveyor then conducts the survey, marking the customer’s responses and taking notes. Each call is recorded so that an audio file with the customer’s specific feedback can be shared with the contact center when warranted.

One of the biggest benefits of live phone surveys is that the surveyor interacts in real-time with the customer and thus can gather more detailed data, clarify confusion and ask customers to elaborate on comments. This provides the center with richer and more accurate feedback/ratings than other surveying methods; however, because they do require a live person, live phone surveys are usually more expensive than other methods.

Automated phone surveys (IVR-based). Many contact centers opt to go with automated phone surveys, which are relatively easy and affordable to administer and provide immediate C-Sat data/feedback following a customer contact. Callers who agree to participate are typically routed to the IVR-based survey as soon as their interaction with the agent is complete. The best automated surveys feature a clear and pleasant human voice that asks each question and provides the answer options in a conversational tone. While the goal is not to trick customers into thinking that they are interacting with a live person, the more human and comforting the automated survey sounds, the better the surveying experience will be for the caller. Advanced automated surveys are programmed to elicit comments/elaboration whenever the customer rates an aspect of the call very poorly (e.g., a “1” on a scale of 1-5, with “5” being the highest rating). This enables the center to capture detailed customer sentiment that sheds light on where the center and/or the agent needs to improve.
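As a rough illustration of that follow-up logic, here is a minimal sketch in Python that prompts for a 1-5 rating and elicits a comment whenever the caller gives the lowest score. Console input stands in for the IVR’s recorded prompts and touch-tone/speech capture – purely an assumption for demonstration, not how any specific IVR platform works:

    LOWEST, HIGHEST = 1, 5

    def capture_rating(prompt):
        # Re-prompt until the caller enters a number in range, much as a
        # real IVR re-plays the prompt on invalid input.
        while True:
            answer = input(f"{prompt} [{LOWEST}-{HIGHEST}]: ")
            if answer.isdigit() and LOWEST <= int(answer) <= HIGHEST:
                return int(answer)

    def ask_question(prompt):
        response = {"rating": capture_rating(prompt)}
        if response["rating"] == LOWEST:
            # A very poor rating triggers the open-ended follow-up.
            response["comment"] = input("We're sorry to hear that. Please tell us more: ")
        return response

    if __name__ == "__main__":
        print(ask_question("How satisfied were you with the agent's knowledge?"))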

Email surveys. Like IVR-based phone surveys, email surveys are an easy, automated way to gauge C-Sat immediately (or very soon) after a customer interacts with an agent. In addition, email surveys can simplify evaluation and trend-spotting since all customer comments are already transcribed. Of course, that very benefit can be a bit of a drawback for some customers, who may not want to take the time to write out very detailed explanations when asked an open-ended question. This means that the center might not be receiving the most thorough feedback possible.

Still and all, email-based surveys are an effective C-Sat measurement approach – particularly when companies want to measure customer satisfaction with e-support/online services. Customers who opt to contact the company via email or chat (instead of calling the center to speak directly to an agent) – or who utilize the company’s web self-service applications – are showing that they prefer electronic, web-based interaction. Thus, calling these customers to survey them or inviting them to complete an automated phone survey doesn’t make sense. These folks likely don’t love phones; respect their preference and send them an email-based survey.
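One simple way to express that channel-matching rule – sketched below in Python with hypothetical channel names – is to key the survey method off the channel the customer chose for the original contact:

    # Match the survey channel to the channel the customer chose for the
    # contact itself. Channel names here are hypothetical examples.
    SURVEY_METHOD = {
        "phone": "live_or_ivr_phone_survey",
        "email": "email_survey",
        "chat": "email_survey",
        "web_self_service": "email_survey",
    }

    def pick_survey_method(contact_channel):
        # Customers who chose an electronic channel get an email survey.
        return SURVEY_METHOD.get(contact_channel, "email_survey")

    # e.g., pick_survey_method("chat") -> "email_survey"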

Regards,

Rosanne D’Ausilio, Ph.D.
President – Human Technologies Global Inc
3405 Morgan Drive – Carmel, NY 10512
845/222-2455 fax 928/223-6165
rosanne@human-technologies.com
http://www.HumanTechTips.com – tips newsletter
http://www.customer-service-expert.com
http://www.drrosanne.com – blog

Source: http://knowlagentblog.com/the-elements-of-a-superior-c-sat-survey/