
5 things that may ruin your customer satisfaction surveys

Whatever business you're in, it's crucial to keep your finger on the collective pulse of your customers. Are they growing more or less happy with you?

Last updated: 18 Aug 2022 · Reading time: 8 min


Customer satisfaction surveys are a good way to gather these insights, and they’re the standard method my team at Userlike and I have used to collect customer feedback since 2011.

I’m Pascal van Opzeeland, Head of Marketing, and I’ve been with Userlike from the start. In the early days, my colleague Timoor and I were continuously running interviews and surveys with our beta users; today, we have a dedicated Customer Success team in charge of collecting feedback. Because our product is built to serve business-to-customer communication, we’ve always strived to stay in contact with customers ourselves.

[Photo: a retro picture of our first office]

In this article, I’m sharing the 5 main things our team has learned to avoid when running customer satisfaction surveys. I’ll use the example of a survey we ran in 2019 and show you how avoiding these pitfalls helped us collect accurate and actionable feedback:

  1. Avoid vague questions

  2. Do not focus on quantitative questions only

  3. Do not ignore response categorization

  4. Avoid selection bias

  5. Avoid collecting feedback if you’re not going to use it

1. Avoid vague questions

“A good metric changes the way you behave.” – Alistair Croll & Ben Yoskovitz, Lean Analytics.

When you're conducting surveys, the quality of your customers' answers flows directly from the quality of your questions. When you're using questions to get actionable insights, you need them to be specific. And that's where most surveys go astray.

In our early days, we would ask our customers questions that were too general to extract any valuable information from. Answers to “How satisfied are you?” gave us some basic insights, but nothing more: when someone answered “very satisfied” or “very unsatisfied,” we didn't actually know what they were referring to. They could be rating the product, the customer service, the delivery, or something else entirely.

How to get specific

From experience, we’ve learned to avoid this issue by using specific, satisfaction-centric survey questions, such as the Customer Effort Score (CES), the Net Promoter Score (NPS), and the ‘Would You Miss Us?’ (WYMU?) question. Each of these digs a little deeper into a customer’s satisfaction while still keeping the survey short and straightforward.

Quick summary:

  • The Customer Effort Score was developed to uncover the amount of effort your customer had to invest in getting a specific result – like finding the right product or getting an issue resolved with your support team. The creators of CES found customer effort to be the strongest driver of customer loyalty. CES is more of an organizational or service-related question, but it can be tailored to many use cases (“How easy was it to [enter use case]?”).

  • The NPS and WYMU? questions, on the other hand, are mostly product-related. To determine your NPS, you would ask your customers how likely they are to recommend your product or service on a 0-to-10 scale (the short sketch after this list shows the calculation). With this, you are indirectly asking about their satisfaction, but in a more actionable way.

  • WYMU? is similar, but it determines how unique your product is in the market and how competitive your brand is among other providers. It gives you a better sense of how much your customer base actually needs the product or service you provide, which is crucial strategic information.
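
For reference, the arithmetic behind the NPS number is standard: respondents who answer 9 or 10 count as promoters, 0 through 6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. Here's a minimal sketch in Python; the sample scores are invented purely for illustration:

```python
def nps(scores: list[int]) -> float:
    """Compute a Net Promoter Score from 0-10 ratings.

    Respondents scoring 9-10 are promoters, 0-6 are detractors;
    passives (7-8) only count toward the total. The result
    ranges from -100 to +100.
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Invented responses, purely to illustrate the arithmetic:
print(nps([10, 9, 9, 8, 7, 6, 3]))  # 3 promoters, 2 detractors of 7 -> ~14.3
```

A negative score means detractors outnumber promoters; zero means they cancel out.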

At the beginning of 2019, we wanted to understand our customers’ satisfaction with our software. Having learned from our mistakes, we avoided a vague question like “How satisfied are you with Userlike?” and ran a Net Promoter Score survey instead:

We sent out our survey as a link in a newsletter. We built it with Google Forms because it’s a quick and easy way to customize a survey and review the responses thoroughly (PS: keep reading to see some of the results).

💡 Read more: learn how an NPS software tool like Hotjar can help you understand your customers

2. Do not focus on quantitative questions only

While simplicity is great, when you ask your customers to rate their satisfaction, effort, or loyalty on a numeric scale, you’ll only get quantitative results.

It's easy to forget to ask the most important question: why? To get this information, I simply add a follow-up question to whichever quantitative survey questions I’ve chosen to include. You can always make these follow-up questions optional, so as not to discourage the people who aren’t interested in writing feedback.

That’s exactly what we did when we sent out our NPS survey. We wanted to learn the reasons behind each response, so we asked respondents to share:

  • One thing we should quickly improve

  • One thing they really liked about Userlike

After closing the survey, we went through our responses with a fine-tooth comb. Our Customer Success Lead, Anton, collected the themes he saw repeating in the short-answer sections and filed them in a separate spreadsheet (note: here is a handy step-by-step tutorial on how to analyze open-ended questions).
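
To make that categorization step concrete, here's a minimal sketch of how you might tally recurring themes once each answer has been tagged. It's not our actual pipeline (Anton worked in a spreadsheet), and the answers and theme labels below are invented:

```python
from collections import Counter

# Invented (answer, theme) pairs standing in for manually tagged
# spreadsheet rows; the themes and answers are hypothetical.
tagged_answers = [
    ("The chat widget feels slow on mobile", "performance"),
    ("Would love more chatbot templates", "chatbot"),
    ("Setup took longer than expected", "onboarding"),
    ("The mobile app lags sometimes", "performance"),
]

# Tally the themes so the most frequent issues surface first
theme_counts = Counter(theme for _, theme in tagged_answers)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```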

At the next Customer Success meeting, Anton discussed the service-related feedback with the Customer Success team and later met with the Development and Design teams to give them the more technical, feature-related feedback. We wouldn’t have had any of that had we not asked our qualitative questions.

3. Do not ignore response categorization

Your customers are likely not all cut from the same cloth, and forgetting to categorize their responses accordingly means you risk missing some valuable insights.

At Userlike, we have a number of ways to categorize our customers: these include industry, team size, product plan, but also the individual role of the respondent within the Userlike account.

These graphs show the various NPS scores we collected from our range of users, depending on their plan. The first one on the left is the total result from all of our respondents. After that, our graphs move from our free plan to the more extensive plans we offer.

We did this by creating five copies of the same survey, so that we could send users on each plan level to the right one (see point 4).

This categorization allows us to spot trends, such as Userlike being rated higher by users with an Admin role than by those with an Agent role, which could indicate that the UX design of the Agent-facing interfaces needs improvement. And back to point 2: the qualitative answers might actually reveal what is working for some and not for others.
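
To illustrate what segment-level scoring looks like in practice, here's a small sketch that computes a separate NPS per category. The segments and scores are made up; our own analysis happened in spreadsheets across the five surveys:

```python
from collections import defaultdict

def nps(scores):
    """NPS = % promoters (scores 9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Invented (segment, score) pairs; a segment could be the plan,
# the industry, or the respondent's role within the account.
responses = [
    ("Admin", 9), ("Admin", 10), ("Admin", 7),
    ("Agent", 6), ("Agent", 8), ("Agent", 4),
]

# Group the scores by segment, then score each group separately
by_segment = defaultdict(list)
for segment, score in responses:
    by_segment[segment].append(score)

for segment, scores in sorted(by_segment.items()):
    print(f"{segment}: NPS = {nps(scores):+.0f}")
```

With these invented numbers, Admins come out at +67 and Agents at -67: exactly the kind of gap a single aggregate score would hide.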

4. Avoid selection bias

When going through the results of a customer satisfaction survey, it’s important to question the extent to which they represent your entire customer base. That’s where selection bias comes into play.

Selection bias is a statistical phenomenon that occurs when your sample isn’t fully randomized, i.e. when it doesn’t represent your whole customer base. That means your results may be skewed in a particular direction.

[Photo: Michael from Customer Success, looking over our customer numbers]

The way you collect your responses has a big impact here.

When you send out your customer satisfaction surveys via email, for example, there will be those who open that email and those who don’t. It’s who these people are, and why they do or don’t open your email, that matters. Those who open your emails, for example, may view your company more favorably, skewing your ratings upwards.

We used Mailchimp to send out the NPS survey in a newsletter. It was helpful in this situation because it allowed us to see, for each category, the percentage of people who opened our email and the percentage who clicked the link to the survey.

This is the click-through breakdown for our German-speaking users (we also made one for our English-speaking users). As we can see, we had higher open and click rates for our free and team plans than for our flex and business plans, and we took this data into account when looking at the results.

Selection bias is a tricky obstacle to overcome, since it can hide in plain sight. Ideally, you would use a variety of approaches for collecting your answers (e.g. through email, in-app, post-chat, and randomized calling) and assess whether any collection-based trends are visible. If this isn’t possible, make sure your survey responses are collected in a way that at least shows which kinds of customers answered, so you can assess the effect this could have had on your results.
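
One simple sanity check, sketched below with invented numbers, is to compare each segment's share of survey answers with its share of your customer base: a big gap tells you whose voices are over- or under-represented. This doesn't remove selection bias, but it makes it visible:

```python
# Invented counts: customers per plan vs. survey responses per plan
customers = {"free": 4000, "team": 1500, "flex": 400, "business": 100}
responses = {"free": 320, "team": 150, "flex": 12, "business": 3}

total_customers = sum(customers.values())
total_responses = sum(responses.values())

for plan in customers:
    response_rate = responses[plan] / customers[plan]
    base_share = customers[plan] / total_customers
    answer_share = responses[plan] / total_responses
    print(f"{plan}: {response_rate:.1%} responded; "
          f"{answer_share:.1%} of answers vs. {base_share:.1%} of customers")
```

A plan whose share of answers falls far below its share of customers is underrepresented, and so are its concerns.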

5. Avoid collecting feedback if you’re not going to use it

Collecting feedback takes time and effort, and you want to make sure the results have at least the potential to change your mind and actions in some significant way. If you’re just collecting feedback for the sake of it, you’re wasting everybody’s time: yours, and that of the people who reply.

Our goal when running the NPS survey was to gather honest feedback from our users that we could apply to our product. The open-ended questions were a good way of doing that, and thanks to the numerical question, we also got a quantitative benchmark to compare our future performance against.

Most people – including ourselves, when we started – have a vague sense that “things should be measured in order to be improved.” In this case, that includes understanding and defining what counts as a good, mediocre, or bad NPS, so we can actually determine whether a score like ‘17’ or ‘4’ is good or bad for us, and keep using this knowledge to improve our product.

Taking the next steps

Anything that you measure for your business should ultimately have some sort of impact on how you move on from here. You can use the information that you’ve collected to help strengthen your customer service team, simplify your website, and/or better your product.

Create a purpose for your survey by asking yourself in the early stages: “What do I want to know by sending out this survey?” By focusing on the purpose, you’ll be better prepared to avoid everything I mentioned in this article. After each step of creating your survey, you can align what you’re doing with the purpose you’ve set for yourself beforehand.

And when you know that each step is supporting the purpose of your survey, you can be sure you’re on the right track towards gaining valuable, actionable feedback.
