| September 07, 2015
Time is short for all of us. Survey respondents do not want to answer lengthy questionnaires any longer (if they ever did). But contrary to the most optimistic beliefs of the big data fanboys, sometimes you just need to ask people a few questions.
If you combine attitudinal data with behavioral data from the same people, it quickly becomes apparent that attitudes do matter. Much of the time, people act in line with their beliefs, particularly when it comes to paying a price premium for a brand. Look at that data over time, and you come to realize that brands that people believe are meaningfully different are the ones that return good financial results over the long term.
So attitudes do matter, but collecting those attitudes in a survey has typically been a lengthy process. Even today some surveys still exceed 20 minutes, with huge consequences for completion rates and the cost to the client. To quote one survey respondent,
“If the survey is too long, you will click anything just to finish. This makes the survey worthless. You would think they would know this.”
How can we fix this problem? There are at least three ways that Millward Brown has risen to this challenge.
First, do not ask the question if that data can be collected elsewhere, for instance, from sales panels, search queries or social media. But you have to be confident that the replacement data is valid. One reason to continue asking one or two duplicative questions in your survey is to confirm that the different data sources are telling you the same thing, or to help explain why when they do not.
Second, focus on real key performance indicators: the ones that predict and, better yet, lead behavior. Millward Brown’s Meaningfully Different Framework has reduced the number of questions you need to ask to monitor attitudinal brand health to five, while improving the link with sales, price paid and potential to grow.
Third, do not ask everyone about every brand. Adaptive Brand Sets is a technique that allows a questionnaire to serve brand lists dynamically. A stratified sampling algorithm enables us to cover a whole category while reducing the brands each respondent is asked about to a list that is relevant, representative, and easily answerable. Backend imputation then allows us to perform respondent-level analyses and remove any ‘brand fan’ bias. Testing of this approach has shown that respondents not only like the survey process more, they provide richer information with stronger discrimination between brands.
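To make the idea concrete, here is a minimal sketch of how a questionnaire might serve a balanced brand subset to each respondent: always include the brands the respondent already uses, then fill the remaining slots with the brands that have been asked about least often so far, so the whole category gets covered across the sample. The function name and logic are purely illustrative assumptions, not Millward Brown’s actual Adaptive Brand Sets algorithm.

```python
import random

def serve_brand_set(all_brands, relevant, k, counts):
    """Pick k brands for one respondent.

    all_brands: full category brand list
    relevant:   brands this respondent already knows or uses
    k:          number of brands to ask about
    counts:     running tally of how often each brand has been served
    """
    # Always keep the respondent's own brands (capped at k slots).
    chosen = [b for b in all_brands if b in relevant][:k]

    # Fill remaining slots with the least-served brands for balanced
    # coverage; shuffle first so ties are broken at random.
    remaining = [b for b in all_brands if b not in chosen]
    random.shuffle(remaining)
    remaining.sort(key=lambda b: counts.get(b, 0))  # stable sort keeps random tie order
    chosen += remaining[: k - len(chosen)]

    # Update the running tally for the next respondent.
    for b in chosen:
        counts[b] = counts.get(b, 0) + 1
    return chosen
```

In practice the selection would also be stratified by segment and weighted at analysis time, but even this toy version shows the core trade-off: each respondent sees a short, relevant list, while the sample as a whole still covers every brand.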
Like it or not, I suspect that surveys are going to be with us for a long time yet. They may be served dynamically, in micro segments and across different devices, but if you want insight into what is going on in people’s heads, sometimes you really do need to ask. So what do you think is the future of surveys? Please share your thoughts.