Published in General

Avoiding the Seven Deadly Sins of Linkage Analysis

We certainly do not set out in marketing research endeavors to make mortal mistakes with design and analysis, but the complexity involved in linkage analysis makes it susceptible to errors of both omission and commission. These errors can often be avoided with sufficient planning, communication and program execution.

By linkage we mean the formal, statistical process by which we connect input measures from one data source to output measures from another.  Varying levels of sophistication differentiate “simple” and “complex” linkage.   Complex differs from simple linkage in that it involves a multivariate predictive model of some sort which enables “what if” simulations.

Seven complicating factors threaten the validity of linkage models, particularly of complex linkage models.  Some of these threats are serious enough that, if not anticipated and planned for, they may cause a catastrophic failure of the linkage model.

1.  Level of Aggregation

In all linkages, we need to make sure that the survey and external data are prepared at the same level of aggregation. For example, we might have customer satisfaction survey data for customers who report their time spent waiting in line when visiting a given bank branch. But if the operational data is stored for the branch as a whole, rather than for the individual customer, opportunities to link the data and improve the experience are lost. In this case, we cannot link the data until we aggregate the respondent-level satisfaction information into a mean score for the entire branch.  Three problems can result when we do this:

  • We may not have enough respondents at the branch level to compute a stable mean score.
  • We may not have enough observations at the aggregated level to run a stable analysis. For example, if we had 1,000 respondents per branch but only 10 branches, that becomes just 10 observations in our aggregated analysis, not nearly enough to allow us to reach conclusions with confidence.
  • When we aggregate respondent-level data into means, we often lose much of the variability and end up with very “flat” data. This flatness will often produce very small effects in our linkage analysis.
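A quick simulation makes the second and third problems concrete. All numbers here are made up for illustration: 10 hypothetical branches with 100 respondents each, rated on a 10-point scale. Aggregating collapses 1,000 observations into 10 branch means, and the spread of those means is far smaller than the spread across individual respondents — the “flat” data problem:

```python
import random
import statistics

random.seed(42)

# Hypothetical data: 10 branches, 100 respondents each, on a 1-10 scale.
# Branch "true" means differ only slightly; individual respondents vary a lot.
branches = {b: [min(10, max(1, random.gauss(7 + 0.1 * b, 1.5))) for _ in range(100)]
            for b in range(10)}

all_ratings = [r for ratings in branches.values() for r in ratings]
branch_means = [statistics.mean(ratings) for ratings in branches.values()]

respondent_sd = statistics.stdev(all_ratings)   # variability across 1,000 people
aggregated_sd = statistics.stdev(branch_means)  # variability across only 10 branches

print(f"respondent-level SD: {respondent_sd:.2f}")
print(f"branch-level SD:     {aggregated_sd:.2f}")
```

The aggregated analysis has both fewer observations (10 instead of 1,000) and much less variance to work with.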

The fix for potential level of aggregation problems resides in the blueprinting stage of the project.  Make sure you have a complete inventory of the data that will be available for linkage and have all the relationships among variables drawn out in detail so that mismatches in levels of aggregation will be evident.

2.  Model Specification

When we build a statistical model to predict some outcome, the model is properly specified ONLY if it contains all the variables that influence the outcome.  To the extent it fails to do so, it exhibits under-specification or mis-specification.  For example:

  • We link a customer’s subsequent repurchase (or not) of an automobile brand to her overall satisfaction with her previous experience with that brand. But whether or not a person buys a given brand depends on whether competitors have come out with new models, new advertising, new features, new prices, spiffs or rebates, etc.  Predicting repurchase based ONLY on satisfaction with the experience of a single brand misses a lot of the story.
  • We link customer satisfaction with a bank branch to cross-sales data at that branch. But how many IRAs the branch sells to its checking account customers isn’t just a matter of the satisfaction of those customers, it’s also a function of the population served by the branch, the income, employment and life stage of its customers, and the larger economy.

Ideally, we prevent specification error by conducting a blueprinting session, but doing it right will sometimes complicate our lives a little.

For clients selling a long-lasting relationship (telecom, financial services, subscription-based services), build linkage from a relational customer satisfaction study to behavior, using survival analysis.
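The survival-analysis idea — linking satisfaction to how long customers stay — can be sketched with a hand-rolled Kaplan-Meier estimator. In practice one would use a dedicated survival library and real tenure records; the churn data below is entirely synthetic, chosen so that dissatisfied customers leave sooner:

```python
def kaplan_meier(durations, events):
    """Kaplan-Meier survival curve as a list of (time, survival probability) points.

    durations: months until the customer churned or was last observed
    events:    True if the customer churned at that time, False if censored
    """
    at_risk = len(durations)
    surv = 1.0
    curve = []
    for t in sorted(set(durations)):
        churned = sum(1 for d, e in zip(durations, events) if d == t and e)
        censored = sum(1 for d, e in zip(durations, events) if d == t and not e)
        if churned:
            surv *= 1.0 - churned / at_risk
        curve.append((t, surv))
        at_risk -= churned + censored
    return curve

# Synthetic illustration: dissatisfied customers all churn early;
# two satisfied customers are still active (censored) at last observation.
dissatisfied = kaplan_meier([1, 1, 2, 3], [True, True, True, True])
satisfied = kaplan_meier([2, 4, 6, 8], [True, True, False, False])
```

The satisfied group’s curve ends well above the dissatisfied group’s, which is the relationship a relational-satisfaction-to-retention linkage would quantify.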

Many or most linkages, however, connect to business outcomes that are potentially affected by competitive activity.   For these cases we need to model choices rather than overall satisfaction or intent-to-return ratings.  This means we will need to base our linkage on competitive, comparative relational satisfaction studies that include brand choice dependent variables.
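A minimal sketch of what a choice model buys us, using a multinomial-logit (softmax) share rule with assumed, not fitted, brand utilities: when a competitor improves (say, via a rebate), our predicted share falls even though our own satisfaction scores never changed — exactly the competitive effect a single-brand model misses.

```python
import math

def choice_shares(utilities):
    """Convert per-brand utilities into predicted choice shares (multinomial logit)."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical utilities: our brand vs. two competitors.
base = choice_shares([1.2, 0.8, 0.3])

# Competitor 1 launches a rebate, raising its utility; ours is unchanged.
rebate = choice_shares([1.2, 1.3, 0.3])
```

Because shares must sum to one, the competitor’s gain necessarily comes out of everyone else’s share.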

Additional aspects of model specification to consider include:

  • Missing data: Not all respondents experience or can rate all aspects of the potential relationship with the brand
  • Non-linear effects: It may be that the influence of attributes on outcomes is not linear, which could complicate modeling
  • Sampling: We need to make sure the sample we base our model on adequately represents the population we want to forecast
  • Timing of prediction: Current measures of performance (satisfaction, etc.) should be more strongly related to future outcomes than to current ones, so capturing the business reality involves building a longitudinal linkage
  • The direction of causality: Sometimes we see that business units with higher levels of satisfaction also have lower sales volume – rather than infer that satisfaction causes lower sales, we usually find that lower-sales business units are less busy and can offer customers more attention
  • Whether the effects are best measured cross-sectionally or not: Cross-sectional analysis focuses on the differences between high and low satisfaction respondents, or between high-performing and low-performing business units; when, as in the previous point, volume itself may be masking the true relationship, running analysis within business units (or at the respondent level) may yield a more accurate measurement.

3.  The Number of Variables Problem

Many of our complex linkages contain lots of variables.  By simple division, if we have 100 of them, they’ll average only 1 percent of the total influence each.  Then, when we show the client the simulator, they’re surprised that changing one of the 100 input variables by half of 1 rating scale point produces an outcome too small to get excited about or to interest senior management.

Like the other potential problems, it’s best to fix this one in the design stage – keep the number of variables in your predictive models as small as you can.
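The back-of-envelope arithmetic behind the “1 percent each” problem, with all the specific numbers assumed purely for illustration (a 10-point rating scale and a notional 20-point outcome range):

```python
# Assumed numbers for illustration only.
n_attributes = 100
share_of_influence = 1.0 / n_attributes   # ~1% each, by simple division
outcome_range = 20.0                      # assumed total outcome movement at stake
move = 0.5 / 10                           # half a point on a 10-point scale, as a fraction

expected_lift = outcome_range * share_of_influence * move   # ~0.01 points

# With a trimmed list of 10 attributes, the same half-point move matters 10x more.
lift_with_ten = outcome_range * (1.0 / 10) * move
```

Trimming the attribute list doesn’t change the underlying relationships, but it keeps each simulated lever large enough to be worth pulling.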

4.  Bolting

“Bolting” is the vivid image describing how multiple sources of survey data connect.  For example, we may connect some transactional surveys to a relational survey and the relational survey to some outcome metric.  The bolting itself contains error, the result of two distinct processes:

  • Transactional surveys and relational surveys take place in different contexts and the questions within them, even if they have identical wording, may be biased by the contexts in which they’re asked. There may be no way around this problem, but it pays to be aware of it and to make the contexts as similar as possible.
  • The samples that comprise transactional and relational surveys may be very different and in complex ways not easily sorted out by simple weighting. To name just one example, many transactional surveys do frequency-based sampling (e.g. frequent customers receive more surveys) while relational surveys often do not.

An analogy to accompany the vivid term might be that differential context and sampling effects mean the bolt is too skinny and wiggles around in the hole, while the nut that attaches to the bolt has the wrong size thread, perhaps in the opposite direction.  As a result, one should try to minimize the amount of bolting built into a linkage model.

5.  Multicollinearity

Multicollinearity happens when two or more variables move up and down together as you look across respondents.  If multicollinearity occurs among three retail attributes “wide aisles,” “well-lit parking lot” and “wide selection,” then the three tend to be all high when any one of them is high and all low when any one of them is low – i.e. they’re highly correlated.  So when we go to simulate an improvement in “wide aisles,” say, and we simulate it in a way that never actually happens (rising all by itself, with no rise in the other two attributes’ scores), we’re punishing the attribute relative to the way it occurs in nature (or in our data set).

Multicollinearity leads to a cluster of technical problems, most importantly that it makes the model unstable – it adds error to the importances that result from the model, making the results potentially very misleading (some importances may be a lot smaller than they should be, while others may be a lot larger).

The best solution to the multicollinearity problem is to fix it in the design stage of the study:  careful pretesting and building of attribute lists to refine their psychometric properties.  Alternatively, we can opt to base the importance weights in our linkage models on valid stated importance measures like best-worst scaling, Q-sort, the method of paired comparisons or constant sum scaling.  The multicollinearity problem goes away if we’re not trying to derive importances from halo-affected, highly correlated predictors.
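The instability is easy to demonstrate with a small simulation on synthetic data: fit the same two-predictor regression on repeated bootstrap resamples, once with a near-duplicate pair of attributes and once with an uncorrelated pair. The true weights are 1.0 in both cases, but the collinear pair’s fitted coefficient swings far more from sample to sample:

```python
import random
import statistics

random.seed(7)
N = 200

def ols2(x1, x2, y):
    """OLS slopes for y ≈ b1*x1 + b2*x2 (no intercept), via the 2x2 normal equations."""
    s11 = sum(a * a for a in x1)
    s12 = sum(a * b for a, b in zip(x1, x2))
    s22 = sum(b * b for b in x2)
    sy1 = sum(a * c for a, c in zip(x1, y))
    sy2 = sum(b * c for b, c in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return (s22 * sy1 - s12 * sy2) / det, (s11 * sy2 - s12 * sy1) / det

x1 = [random.gauss(0, 1) for _ in range(N)]
x2_collinear = [a + random.gauss(0, 0.1) for a in x1]    # near-duplicate of x1
x2_independent = [random.gauss(0, 1) for _ in range(N)]  # unrelated attribute

def coef_sd(x2, n_boot=50):
    """SD of the fitted x1 coefficient across bootstrap refits (true weights are 1.0)."""
    coefs = []
    for _ in range(n_boot):
        idx = [random.randrange(N) for _ in range(N)]
        xa = [x1[i] for i in idx]
        xb = [x2[i] for i in idx]
        y = [a + b + random.gauss(0, 1) for a, b in zip(xa, xb)]
        coefs.append(ols2(xa, xb, y)[0])
    return statistics.stdev(coefs)

sd_collinear = coef_sd(x2_collinear)
sd_independent = coef_sd(x2_independent)
```

The collinear version produces derived importances that can land almost anywhere, which is exactly why design-stage fixes or stated-importance measures are preferable.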

6.  Respondent Heterogeneity

Respondents differ from one another.  They have different tastes and preferences and hot buttons.  You can imagine that if we were able to do a separate importance model for each respondent, people would have different weights for the different attributes – that’s what respondent heterogeneity means.

When we run a derived importance model, we get a set of attribute importance weights that represents the average importances across all respondents.  Of course this average can hide some pretty important differences; in fact, the average may represent no one respondent at all.

The average respondent has a set of importances possessed by and reflective of no one single respondent (wide aisles may be the most important attribute overall yet the most important to no one).  In linkage models that apply these average importances to every respondent in the study, the predictions will fail to fit for a lot of respondents and the overall prediction suffers.

There are several ways to address this problem of heterogeneity.

  • If the linkage involves connecting individual respondent level data to individual level outcomes, consider using valid stated importance methods (best-worst scaling, the method of paired comparisons, Q-sort or constant sum scaling). When done properly, stated importances have been found to have excellent predictive validity.  Stated importance measures come at a cost in questionnaire real estate, however, so in many cases they may not be the best solution.
  • Examine segmenting variables to see which have a moderating effect on regression coefficients, then incorporate them into a moderated regression analysis
  • Do the same more formally using a mixed model
  • Create respondent preference segments by applying “reverse segmentation” with importance coefficients as basis variables
  • RAID – a tree-based segmentation (think CART or CHAID) that splits the tree on the basis of the improvement in regression coefficients caused by the effects of the various segmenting variables.
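A synthetic two-segment example shows why the pooled average can represent no one. Segment A’s outcome depends only on “wide aisles,” segment B’s only on “wide selection” (true weights 2.0 and 0.0, all data made up); the pooled regression recovers roughly 1.0 for each attribute, a set of importances that fits neither segment:

```python
import random

random.seed(3)
N = 100  # respondents per segment

def ols2(x1, x2, y):
    """OLS slopes for y ≈ b1*x1 + b2*x2 (no intercept), via the 2x2 normal equations."""
    s11 = sum(a * a for a in x1)
    s12 = sum(a * b for a, b in zip(x1, x2))
    s22 = sum(b * b for b in x2)
    sy1 = sum(a * c for a, c in zip(x1, y))
    sy2 = sum(b * c for b, c in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return (s22 * sy1 - s12 * sy2) / det, (s11 * sy2 - s12 * sy1) / det

aisles = [random.gauss(0, 1) for _ in range(2 * N)]
selection = [random.gauss(0, 1) for _ in range(2 * N)]

# Segment A (first N) responds only to aisles; segment B only to selection.
y = [2.0 * a + random.gauss(0, 0.5) for a in aisles[:N]] + \
    [2.0 * s + random.gauss(0, 0.5) for s in selection[N:]]

pooled = ols2(aisles, selection, y)                 # ≈ (1.0, 1.0): fits no one
segment_a = ols2(aisles[:N], selection[:N], y[:N])  # ≈ (2.0, 0.0)
```

Any of the approaches above — stated importances, moderated regression, mixed models or preference segmentation — is a way of recovering the segment-level weights instead of settling for the pooled ones.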

7.  Bad Luck

Unlike other kinds of research that are all but guaranteed to succeed (conjoint and pricing choice experiments, for example, and customer satisfaction, loyalty and brand choice modeling), there is NO GUARANTEE that the variables one is trying to connect in linkage will be statistically related.  Minor failures to accommodate the pitfalls and complicating factors above, or even plain bad luck, can prevent a linkage model from getting off the ground.

Think, before…

So, the next time your team is about to enter a linkage engagement or a discussion on a linkage proposal, don’t walk blindly into a bad scenario. Consider these critical vulnerabilities and don’t leave anything to chance.