Point of View
I stand by that argument, and we have many examples of forward-looking brands adopting this approach for their research programs. However, we still see far too many brands clinging to research practices that are out of date. For example, the default approach remains to ask each survey respondent all possible questions. Building research in this manner is convenient and comfortable, but it does not encourage consideration of alternative sources of insight.
This hesitation should not be surprising; well-established behaviors and practices are hard to change. As any observer of human behavior will tell you, the best predictors of an individual's future decisions are his or her past decisions. In other words, you can't teach old dogs new tricks. When we examine our actions and decisions as researchers who study consumers, brands, and marketing effectiveness, we see that far too often we are still acting like the proverbial old dogs. I say this not to offend, but rather to ignite a movement throughout the research community to revisit our first principles of design and data quality. Liberated research can only deliver on its highest potential and promise if we actually liberate ourselves from practices that are not working. Otherwise, we risk bringing about our own obsolescence.
Let's talk about specifics. What do we need to do in order to liberate research?
Of course, giving up historical trends from a long-built and well-invested data asset is a tough pill to swallow. It is difficult to explain to executives that we are changing a research instrument and losing comparability to history, even though we are quite good at devising statistical protocols to preserve trends, whether by making old data act like the new or by calibrating new results to match the old. The fundamental point is that we must opt to change. Trend maintenance should not be our first principle of design; we should be willing to abandon instruments and approaches that may have been suboptimal in the first place.
Millward Brown's Meaningfully Different framework for understanding brand equity is a great example of the benefits of this kind of thinking. First, we designed the framework so it can be asked quickly, averaging about three minutes per consumer, and we ask only the questions that consistently link to behavioral success across categories and markets. Second, we designed the question format to mirror the competitive context that consumers experience in their daily lives. This is done through a design called "associative scale and rank." For the key dimensions in the model, the consumer positions each brand on a 0-10 scale, and overlaying all the brands on that scale yields a relative ranking. This provides the sensitivity of leveraging a full scale for each dimension while being much more consumer friendly and engaging than a separate assessment of each brand, so we get the information we need quickly and accurately. Most importantly, our brand measurement framework serves as connective tissue across research solutions and data assets. That is the real beauty of the new model: it is short, engaging, repeatable, and standardized, so it can be implemented across a range of research scenarios.
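To make the "associative scale and rank" idea concrete, here is a minimal sketch of how a single respondent's answers could be represented: each brand gets a position on one shared 0-10 scale, and the relative ranking falls out of those positions. The brand names, scores, and the `rank_brands` helper are invented for illustration; this is not Millward Brown's actual implementation.

```python
def rank_brands(scores):
    """Order brands from highest to lowest position on the shared scale.

    `scores` maps brand name -> 0-10 placement for one dimension.
    Python's sort is stable, so tied brands keep their input order.
    """
    return sorted(scores, key=lambda brand: -scores[brand])

# One hypothetical respondent's placements on a single dimension,
# e.g. "meets my needs" (all values invented).
placements = {"Brand A": 8, "Brand B": 8, "Brand C": 3, "Brand D": 6}

print(rank_brands(placements))
# → ['Brand A', 'Brand B', 'Brand D', 'Brand C']
```

The design choice illustrated here is that one question captures two things at once: an absolute intensity (the 0-10 position) and a relative competitive ordering, rather than asking a separate battery of questions per brand.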
This brings us back to why we want researchers to embrace the opportunity the modern data landscape provides and liberate themselves from lengthy, single-source surveys. It is valid to note that this step is necessary for surveys to work well on mobile devices, but that framing misses the essential point: shorter, focused surveys are better surveys. If we properly frame business problems and think about them with a research-program mindset, then we will be empowered to enjoy the benefits of liberated research. What business problems can only be answered by collecting all the information from the same individual in one survey? What learning objectives can be better addressed across a suite of connected research solutions? Which questions are redundant and repetitive?
Of course, we raise the stakes when we remove the security blanket of asking each consumer about all the pieces of the business puzzle. This requires clarity of planning and purpose, but it is a challenge that we should embrace. The evidence shows that we are kidding ourselves if we think that analyses of long surveys with poor-quality data can provide accurate stories and actionable recommendations.
That is why I see this as a manifesto for change for the research community. We know what works—shorter surveys that respect consumers' limited availability in a time-pressed world, research tools that engage consumers in a dialogue using everyday language, and research solutions that encourage participation, not frustration.
We can no longer reasonably claim, "I don't want to rock the boat," as an excuse for not rethinking research designs that don't meet these criteria. The boat has already been rocked.
As an industry, we talk a lot about moving from backward-looking insights to research with foresight and forward-looking actionability. I endorse and hope to amplify those goals in this POV. With shorter surveys run as part of a larger research program that includes both big and small data assets, we will be well positioned to deliver on those goals. But if we don't actually speed up our implementation of shorter surveys and move away from bloated, historical survey designs, our talk will simply be hot air.