Everybody who conducts surveys and collects customer feedback with the objective of measuring or improving Loyalty or Customer Satisfaction will at least consider “identifying drivers”, if not execute a method to do so. There are many ways to identify the drivers of an outcome metric such as Loyalty or CSAT. Some are more common than others, and there are varying schools of thought on how to implement each technique. What’s the right way of doing it? It all depends: on the business objective, the need for information, and how the findings will be used.
Nevertheless, for the majority of surveyors who run Loyalty and CX/CSAT studies to monitor and improve these outcome metrics, one approach is usually the most reliable: measuring the derived importance of the predictors or influential factors, as opposed to measuring the stated importance of those factors in influencing the outcome metric.
Stated vs. Derived Importance
There are two main ways of exploring drivers of an outcome metric in a survey:
- In a stated fashion, i.e. by explicitly asking the respondent about importance of a number of attributes in driving the outcome metric;
- In a derived fashion, i.e. by calculating the importance of each attribute in driving the outcome metric through specific analyses using the metrics collected in the survey, without asking the respondent explicitly about importance.
Both methods have their advantages and disadvantages and likewise have their place in identifying drivers. However, in Loyalty/CX research where it is important to identify the key drivers for improving the key outcome metric, we find derived importance methods more advantageous in creating more reliable results for business decision making.
Stated Importance: “How important is…?” Will Yield Everything Equally Important
There are multiple drawbacks of stated importance measurement in CX surveys that distort the results, making it difficult to draw reliable conclusions and provide clear direction for business decisions.
Everything is important
In a stated importance measurement, where we ask the respondent to rate the importance of each factor on a scale, respondents will very likely assign similar, and usually high, points on the scale for most of the attributes. Consumers want to communicate to the provider that everything matters to them and that they don’t want to lose any of the existing features. This in turn reduces the discriminatory power of the research to distinguish specific attributes from the group as the most important drivers.
Table Stakes or Drivers?
Another challenge with stated importance is that consumers indicate high importance for factors that are commonly and successfully delivered anyway. For instance, in a survey about online banking app usage, where security has not been a real concern for years now, if security is one of the attributes to be rated for importance, most respondents will assign high importance to it. Should the banks put a lot of effort into improving the security of their online banking apps further? That would be a waste of money and time, since there is no major weakness or issue in this particular area. Ease of use and speed of transactions may be the more critical features of the app today; but with security rated as highly important as these other critical areas, the user of the data gets no clear support for deciding what to prioritize.
There are other stated importance measurement methods, such as MaxDiff and QSort, which are very useful for identifying priorities through trade-offs between features. However, these methods are most commonly utilized in marketing, product, and brand research, and are less well suited to identifying priority areas of focus for improving customer experience.
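To illustrate the trade-off mechanism that makes MaxDiff different from a plain importance scale, here is a minimal sketch of the simplest scoring approach, a best-minus-worst count. The attribute names and tasks are entirely made up for illustration, and commercial MaxDiff analyses typically use more sophisticated choice models rather than raw counts:

```python
from collections import defaultdict

# Hypothetical MaxDiff tasks: each task shows a subset of attributes and the
# respondent picks the most ("best") and least ("worst") important one.
tasks = [
    {"shown": ["price", "speed", "security"],   "best": "speed", "worst": "security"},
    {"shown": ["price", "speed", "support"],    "best": "speed", "worst": "support"},
    {"shown": ["price", "security", "support"], "best": "price", "worst": "support"},
    {"shown": ["speed", "security", "support"], "best": "speed", "worst": "security"},
]

counts = defaultdict(lambda: {"best": 0, "worst": 0, "shown": 0})
for task in tasks:
    for attr in task["shown"]:
        counts[attr]["shown"] += 1
    counts[task["best"]]["best"] += 1
    counts[task["worst"]]["worst"] += 1

# Best-minus-worst score, normalized by how often each attribute appeared.
scores = {a: (c["best"] - c["worst"]) / c["shown"] for a, c in counts.items()}
```

Because respondents are forced to choose between attributes in each task, the scores spread out (here, "speed" wins every trade-off while "security" and "support" go negative), which is exactly the discrimination a "rate each item's importance" question fails to produce.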
Derived Importance: Linking Customers’ Overall Perception to Customers’ Actual Experience Yields True Key Drivers
Calculating the derived importance of attributes through analysis provides the business a more relevant measure of importance in driving the customer’s perception and outcome metric ratings. This is done by asking the respondent to rate the brand or the specific experience on a number of attributes using a performance or agreement scale, and then calculating importance scores from these attribute ratings and the outcome metric rating (e.g., likelihood to recommend, likelihood to return/repurchase, overall satisfaction). This way, the analyst can relate the customer’s outcome metric rating to the ratings of the specific attributes and identify what the customer may have had in mind when rating the outcome metric. The method establishes a more direct relationship between the rating provided for the outcome metric and the ratings provided for the driving attributes, per customer and at the aggregate level. As a result, it is more likely to identify the key factors that influence the customer’s overall perceptions at that time, based on their specific experiences. The business can then clearly determine which factors it needs to focus on to meet its goals.
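As a minimal sketch of how derived importance can be computed, the example below uses a simple correlation-based approach on entirely made-up 0-10 ratings from eight hypothetical respondents; real studies often use regression or more advanced models, but the logic is the same: attributes whose ratings move with the outcome metric get high importance scores.

```python
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation of two equal-length rating lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))

# Hypothetical survey data (0-10 scales), echoing the banking app example.
loyalty = [9, 7, 3, 8, 2, 10, 6, 4]  # outcome metric, e.g. likelihood to recommend
attributes = {
    "ease_of_use": [9, 7, 2, 8, 3, 10, 6, 4],
    "speed":       [8, 6, 4, 7, 3, 9, 5, 4],
    "security":    [9, 9, 8, 9, 9, 9, 8, 9],  # everyone rates it high -> little variation
}

# Derived importance: how strongly each attribute rating moves with loyalty.
importance = {name: pearson(ratings, loyalty) for name, ratings in attributes.items()}
for name, r in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} {r:+.2f}")
```

With these numbers, security, which everyone rates uniformly high (a table stake), shows a much weaker relationship with loyalty than ease of use and speed, even though a stated-importance question would likely have scored all three as "very important".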
For example, consider what happens after a customer rates various aspects of an experience with a vehicle and is asked about the likelihood to purchase the same vehicle in the future. If the customer had really poor experiences with the service on the vehicle, regardless of how much they loved the vehicle itself, their likelihood to repurchase rating may be low. This would indicate that for this customer, service would be a major driver in repurchasing this vehicle brand or model. In fact, this was my exact experience! I had to give up on the car I’d had for 10 years (and really liked, by the way) due to poor service performance and move on to another brand.
On the other hand, if the customer were simply asked to “Rate the importance of the following items in deciding whether to repurchase the same vehicle”, they probably would have rated many aspects, including features of the vehicle and the service, as highly important. The car maker would then have no way of knowing that the service experience strongly impacts the loyalty of its customers. But through derived importance analysis, the car maker can single out service as an area of focus. This is especially so if this customer’s service-related issues are not isolated but impact many customers’ satisfaction and loyalty.
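To make the vehicle example concrete, here is a hedged sketch with hypothetical ratings for just two predictors. It computes standardized regression coefficients from the pairwise correlations using the closed-form two-predictor solution, one of several ways a driver analysis might be run; the data is invented so that vehicle ratings are uniformly high while repurchase intent tracks the service experience:

```python
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation of two equal-length rating lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))

# Hypothetical data: everyone loves the vehicle, service experiences vary,
# and repurchase intent follows service, not the vehicle itself.
vehicle    = [9, 8, 9, 10, 8, 9, 9, 8]
service    = [9, 3, 8, 2, 9, 4, 10, 3]
repurchase = [10, 4, 9, 3, 9, 5, 10, 4]

# Standardized regression coefficients for two predictors:
# beta = R^-1 r, where R holds the predictor intercorrelation and
# r the predictor-outcome correlations.
r12 = pearson(vehicle, service)
r1y = pearson(vehicle, repurchase)
r2y = pearson(service, repurchase)
beta_vehicle = (r1y - r12 * r2y) / (1 - r12 ** 2)
beta_service = (r2y - r12 * r1y) / (1 - r12 ** 2)
print(f"vehicle: {beta_vehicle:+.2f}  service: {beta_service:+.2f}")
```

With these made-up numbers the service coefficient dwarfs the vehicle coefficient, which is exactly the signal a stated-importance question would have buried under uniformly high ratings for both.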
More on derived importance and the various driver analysis methods can easily be found on the Internet and through our researcher and analyst colleagues. And the pros and cons of different driver analysis methods are always an interesting topic for corporate researchers as they identify the best methods for their business goals.
Identifying True Drivers for Business Focus: Importance of Question Placement in Surveys
While we’re on the topic of helping the business successfully determine where to focus its efforts through derived importance analysis, I would like to briefly touch on the importance of question placement, which also impacts the accuracy of identifying the “true” drivers of an outcome metric. Let me explain:
If the goal is to understand what drives the outcome metric, let’s say Loyalty, so that the business can decide how to invest in specific areas, then it is best to capture the customer’s perceptions or ratings in the most unbiased fashion possible. Start the survey with the outcome metrics (the Loyalty rating, in this case), and follow with the experience/attribute ratings. It is important not to ask specific questions about the experience before the outcome metric: you don’t want to bias the customer’s loyalty rating by priming them to think about the specific experiences the survey is about to ask about.
A school of thought recommends asking the experience questions first and the outcome metric last, on the grounds that we want the outcome metric to be based on those experience attributes. However, this has undesired effects similar to stated importance, because it can exaggerate the weight of the experience elements included in the survey while ignoring other factors that impact the outcome metric but are not asked about. Realistically, a survey cannot cover every area that affects a customer’s loyalty decisions in the real world as they make purchase decisions; we can only expect to identify drivers of loyalty up to a certain percentage. By biasing the loyalty rating with experience attributes asked first, we would create the misleading impression that the loyalty decision rests on those attributes alone, which is not reflective of real-life decision making. Therefore, start surveys with the outcome metrics, measuring top-of-mind loyalty, and follow with the predictor variables, to more realistically measure the weight of each predictor in driving loyalty.
Derived Importance on Top-of-Mind Outcome Metrics is the Way for CX Business Decisions
In short, when a CX survey is conducted to monitor a key outcome metric, the most accurate and realistic results will come from derived importance methods where the outcome metric is measured top of mind (without biasing). This way, the business will be able to capture the most realistic measure of the outcome metric as well as its true drivers, and make decisions on how to invest resources that will drive the desired business outcome. There will be less time wasted reviewing results that indicate “four wheels on my car is most important”, and more time spent on “enabling the service department to diagnose and resolve problems more successfully” to secure more loyalty.