Dave has been manually reading thousands of responses to the monthly NPS survey and just knows customers always complain about high prices. Whenever he goes through the comments from that survey, he's looking for mentions of 'price' or 'high prices'.
Jane’s organization has 1M+ followers on Instagram, and she spends all day every Thursday randomly sampling hundreds of comments. She does this because social media is a source of raw, unfiltered customer feedback: everyone has an opinion about her company’s online store and isn’t afraid to share it with other Instagram users. She knows her company has struggled all year with a poor range of products, so her experience drives her to look specifically for comments about that product range.
What do Dave and Jane have in common? Their experience is driving them to introduce human bias into their analysis projects. You would assume experience is a good thing (and it is, don't get me wrong!), but it also creates situations where the interpretation of what is and isn't important becomes inherently subjective.
What are the early warning signs of cognitive bias?
First, let's take a step back to understand human bias. According to Verywell Mind, a cognitive bias is a systematic error in thinking that occurs
“when people are processing and interpreting information in the world around them and affects the decisions and judgments that they make.”
If you’re doing any of the following, chances are you’re injecting bias into your projects:
Assuming everyone at your organization knows the way to improve X is to do Y
When your analysis can't answer every question executives are asking, you argue the problem is that you haven't surveyed enough customers
You’ve had a few meetings with a team/department and feel like those briefs were enough for you to fully understand the problems customers are facing in those areas
Specifically looking for keywords you know are important and reporting insights around those core themes (especially if you’ve seen them in previous work).
Human bias in the context of feedback analysis refers to how the analyst’s personal preferences and past experiences influence the creation of themes, tags, and categories. If analysts know in-store experience is terrible across their organization, they’re more likely to look for evidence that validates this in customer feedback. They seek that evidence out first and often miss themes that are new and emerging.
Examples of bias in customer insights
Confirmation bias - looking for keywords related to what the analyst thinks are the main drivers of customer behavior.
Analytical bias - drawing conclusions about customer behavior drivers from small samples of feedback (e.g., ~50-300 survey responses); the margin-of-error sketch after this list shows why this is risky.
Cognitive bias - the analyst's framing of the issue is wrong from the start.
Outlier bias - quoting customer feedback in reports as representative of the typical customer when in fact that experience is the exception rather than the rule.
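To see why analytical bias matters, consider the sampling error on a small dataset. The snippet below is a minimal sketch (plain Python, with hypothetical sample sizes) of the 95% margin of error on a proportion - e.g., "what share of customers mention price?" - at the worst case of p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion estimated from n responses."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (50, 300, 2000):
    print(f"n={n:>4}: +/-{margin_of_error(n):.1%}")
```

At 50 responses, a theme's prevalence is only known to within roughly +/-14 percentage points; at 300 it's still about +/-6. That's wide enough to rank your "top themes" in the wrong order entirely.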
Why you should aim to remove human bias
Customer insights teams are increasingly being asked to provide advice on what to do next. It’s no longer enough to analyze customer feedback, make sense of it, deliver a report so relevant parties are informed, and then move on to the next project. A common factor I’ve seen in successful customer insights teams is that they are highly opinionated. If your organization is making a $50M decision based on your view of what it should do next, you'd better make sure you fully understand the problem from all angles; otherwise it’s bad for you when it inevitably goes wrong.
How do you eliminate bias?
Human bias creeps into customer feedback analysis regardless of how many humans analyze the data; more isn’t better. If you have 1,000 analysts reviewing just 10 comments each, you still have 1,000 different viewpoints trying to reconcile their findings into four or five major themes and up to twenty smaller secondary themes. It also doesn’t solve the problem of identifying emerging issues that aren’t a problem for your organization now but will be a major one by the end of the year.
So what’s the answer? How do you mitigate the bias in customer feedback analysis?
Qualitative focus group sessions
This can be done by CX teams or product designers, but there’s no reason customer insights teams should be locked out. Run two focus group sessions, one for NPS detractors and another for NPS promoters, and ask each group what they loved and didn’t love about their experience, plus what your organization could have done differently. Transcribe their words, analyze them, and create themes based only on what they say. If the participants are indeed a random slice of your customer base, this provides a good comparison: set your original theming of the dataset against these new themes. Is anything different?
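If your survey tool exports responses to a file, pulling the two groups is straightforward. Here's a minimal sketch, assuming a hypothetical nps_responses.csv with customer_id and nps_score columns (standard NPS cut-offs: detractors score 0-6, promoters 9-10):

```python
import pandas as pd

# Hypothetical survey export: one row per respondent.
df = pd.read_csv("nps_responses.csv")  # assumed columns: customer_id, nps_score, comment

# Standard NPS cut-offs: detractors 0-6, promoters 9-10.
detractors = df[df["nps_score"] <= 6]
promoters = df[df["nps_score"] >= 9]

# Random, reproducible invite lists for the two sessions.
detractor_invites = detractors.sample(n=10, random_state=42)
promoter_invites = promoters.sample(n=10, random_state=42)

detractor_invites[["customer_id"]].to_csv("detractor_focus_group.csv", index=False)
promoter_invites[["customer_id"]].to_csv("promoter_focus_group.csv", index=False)
```

The fixed random seed matters here: it makes the sample auditable, so nobody can accuse you of hand-picking participants who confirm your existing view.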
Get technology to automatically remove ambiguity
When people manually code feedback, it’s not surprising that results get interpreted in multiple ways; analysts are only human. Platforms such as Qualtrics, InMoment, and Medallia already do the heavy lifting in the CX space but aren’t great at the analysis part.
Customer insights platforms such as Kapiche allow insights analysts to identify the real themes contained in a dataset - not because you told the platform which themes to look for, but because it has the smarts to code those themes automatically based on what customers are actually saying.
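Kapiche's internals aren't described in this article, so the sketch below is not its algorithm - it's a generic illustration of unsupervised theme discovery, using scikit-learn's NMF on a handful of made-up comments. Note that no keywords are supplied up front; the themes fall out of the text itself:

```python
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

# Made-up open-ended feedback; in practice this would be thousands of comments.
comments = [
    "Delivery took two weeks, far too slow",
    "Shipping was slow and tracking never updated",
    "Great prices but the product range is tiny",
    "Wish there was a wider range of products to choose from",
    "Checkout kept failing on my phone",
    "The mobile checkout crashed twice before my order went through",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(comments)

# Factorize into 3 latent themes; nothing tells the model what to look for.
nmf = NMF(n_components=3, random_state=0)
nmf.fit(tfidf)

terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(nmf.components_):
    top_terms = [terms[j] for j in topic.argsort()[-4:][::-1]]
    print(f"Theme {i}: {', '.join(top_terms)}")
```

On toy data like this you'd expect the three recovered themes to roughly separate shipping, product range, and checkout complaints; real datasets are noisier, but the principle is the same: the themes come from the data, not from the analyst's assumptions.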
Leverage structured data
Open-ended, unstructured customer feedback is insights gold, but only if it’s analyzed alongside structured data. That combination lets you slice and dice themes into segments for deeper, more contextual insights. As previously mentioned, one way humans fall into the cognitive bias trap is by assuming they already know everything about the issues impacting customers.
For Jane, yes, a poor range of products is a major issue impacting customer experience at her ecommerce store, but if she segmented her data she might be challenged. What if she found this was mostly an issue for females aged 16-24 who spend more than $500 per checkout session? What if, in addition, she discovered male NPS detractors sat at the upper end of the detractor curve (with an average score of 5.8): they complain about this issue, but the theme isn't impacting their average spend? These deep, contextual insights are only possible when you segment your customer data.
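As a minimal sketch of what that segmentation might look like in practice (assuming a hypothetical export with gender, age_band, basket_value, nps_score, and theme columns - none of these names come from the article):

```python
import pandas as pd

# Hypothetical export: one row per response, already coded with a theme.
df = pd.read_csv("feedback_with_themes.csv")

range_issue = df[df["theme"] == "poor product range"]

seg = (
    range_issue
    .groupby(["gender", "age_band"])
    .agg(
        mentions=("theme", "size"),          # how often the theme appears
        avg_nps=("nps_score", "mean"),       # how those customers score you
        avg_spend=("basket_value", "mean"),  # whether it hits revenue
    )
    .sort_values("mentions", ascending=False)
)
print(seg.head())
```

A cut like this is what would surface Jane's finding: the same theme, very different commercial impact depending on who is raising it.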
Where to next?
Having assumptions is human – not something to root out and destroy. Acknowledging the existence of human bias when analyzing customer feedback is the most crucial step. Then you need to decide if this is something you can live with or something which has to be addressed.
A few highlights from this article:
Don't assume everyone at your organization knows the way to improve X is to do Y
Seek to understand all issues in full - either manually, if you think you can, or by letting technologies already available in the market do that heavy lifting
Don't rely on internal meetings to inform your understanding of key customer issues.
Segmentation is your friend
CX teams can design the world’s best experiences for their customers but if they don’t have accurate insights these experiences risk falling flat.