Monday, April 02, 2012

Why I believe big data needs a little help

In my previous post on this topic, I suggested that big data runs the risk of inducing “marketing myopia." My basic point was that the patterns yielded by analysis of big databases are just as subject to the biases and beliefs of the analyst and decision maker, as any other data set. But there is no doubt that many people find the promise of big data compelling. Why is that?


To my mind, the allure of big data is simple: big data promises certainty. If you are certain of a finding’s veracity, you can act without hesitation. Implicit in that certainty is a faith in the scientific method: that the analytic techniques applied to a problem are unbiased and impartial. Merriam-Webster defines the scientific method as:

...principles and procedures for the systematic pursuit of knowledge involving the recognition and formulation of a problem, the collection of data through observation and experiment, and the formulation and testing of hypotheses.

But therein lies the problem. All science is directed by beliefs; we just dress them up and call them hypotheses. Put your mind to it, and you can prove pretty much anything to your own satisfaction (if not to everyone else’s). Unless you start your analysis with a set of hypotheses based on good evidence of how the world works, you risk skewing the analysis and misinterpreting the results.
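
To make that risk concrete, here is a rough Python sketch (purely illustrative; the variable counts and threshold are my own assumptions, not real data) of how mining a large, entirely random dataset with no prior hypothesis will still surface “significant” correlations by chance alone:

    import numpy as np

    rng = np.random.default_rng(42)
    n_rows, n_vars = 1_000, 500            # 1,000 hypothetical customers, 500 unrelated metrics
    metrics = rng.normal(size=(n_rows, n_vars))
    outcome = rng.normal(size=n_rows)      # an outcome that nothing actually drives

    # Correlate every metric with the outcome and keep the "strong" ones.
    corrs = np.array([np.corrcoef(metrics[:, j], outcome)[0, 1] for j in range(n_vars)])
    threshold = 1.96 / np.sqrt(n_rows)     # approximate 95% significance cut-off for r
    spurious = int(np.sum(np.abs(corrs) > threshold))

    print(f"{spurious} of {n_vars} metrics look 'significantly' related to the outcome")
    # Roughly 5% (about 25 metrics) will pass by chance, even though none are related.

Start with hypotheses grounded in how the world works, and most of those chance “findings” never make it into the analysis in the first place.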

A recent article on big data in The New York Times also points to the potential for misinterpretation:

These models, like metaphors in literature, are explanatory simplifications. They are useful for understanding, but they have their limits. A model might spot a correlation and draw a statistical inference that is unfair or discriminatory, based on online searches, affecting the products, bank loans and health insurance a person is offered, privacy advocates warn.

Despite the caveats, there seems to be no turning back. Data is in the driver’s seat. It’s there, it’s useful and it’s valuable, even hip.

And I agree with that conclusion. Big data can be incredibly useful and valuable, but only if used wisely. And that is where I believe little data can be of help. Little data – qualitative, survey and neuro research techniques – can provide a useful frame of reference for the findings from big data. They can be thought of as providing independent insight into what motivates people to behave the way they do, counteracting the biases and beliefs that analysts and decision makers bring to big data. 

Of course, the catch-22 is that little data is also subject to bias. But when the findings from two completely different approaches converge, we can be relatively certain that the results are valid and meaningful, and that they provide a basis on which to act with confidence.
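
To put rough numbers on that intuition (the error rates below are assumptions chosen purely for illustration): if each method on its own had, say, a one-in-five chance of pointing at a spurious pattern, the chance of both independently pointing at the same spurious pattern is far smaller.

    # Back-of-the-envelope sketch; the error rates are assumed for illustration only.
    p_spurious_big = 0.20      # chance big-data mining surfaces a spurious pattern
    p_spurious_little = 0.20   # chance little data independently suggests the same spurious pattern

    # If the two methods' errors really are independent, the chance that BOTH
    # point at the same wrong conclusion is the product of the two.
    p_both_wrong = p_spurious_big * p_spurious_little
    print(f"Chance both methods converge on a spurious finding: {p_both_wrong:.0%}")   # 4%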

So what do you think? Does big data need the help of little data? Or do new techniques obviate the need for independent verification? Please share your thoughts. 






This entry was posted on Monday, April 02, 2012 and is filed under Research. You can leave a response.

4 Responses

  1. Monday, April 02, 2012

    Ed C

    Little data can be helpful, but I think the bigger need is for independent verification. Let's say we have all the data in the world (see my note below for who will have it). Would the iPad have been invented? Could we prove the existence of God? Would we even be able to predict the next flavor of Coke? Call it independent verification or just a brain, I don't think data alone helps much.

    In this article, titled "Just the Facts. Yes. All of Them.", they talk about someone who wants ALL the data. I'm not sure how useful that will be, alone...
    http://www.nytimes.com/2012/03/25/business/factuals-gil-elbaz-wants-to-gather-the-data-universe.html?pagewanted=all

  2. Monday, April 02, 2012

    Erik du Plessis

    When I was a systems analyst I attended a conference run by IBM where they described an experiment they ran with practicing MBA managers, who were given a problem and a whole battery of programmers available to provide them with the data they asked for. The managers worked independently.

    There were two types of managers: Type 1 asked for very little information and made a decision quite quickly;
    Type 2 kept asking for more information.

    When the decisions were analysed, the Type 1s generally made better decisions.

    When asked how confident they were that they had made the right decision, the Type 2s were much more confident.

    Maybe the role of big data is not to improve the decision, but to make the manager feel better (and even provide some CYA).

  3. Tuesday, April 03, 2012

    David

    I usually work the other way around: first qual to find hypotheses etc., and then confirm (or dismiss) the qual findings through quant/statistical analysis.

    I would say this reduces the risk of myopia when carrying out surveys. It also enables you to use the target group's associations/tonality etc. in the quant rather than those of the marketing department. When quant is carried out with little or no qual research beforehand, we also see that you run a great risk of assuming that consumers are rational, missing out on emotional associations and arguments.

    I do, however, see your point, especially since most quantitative surveys are, to be honest, not that well crafted. That is true even for large global companies. But isn't the main problem here that people working with data/surveys etc. often lack competence when it comes to branding?

    With unlimited budgets, I guess: desktop -> qual -> quant -> qual would be ideal for us.

    Thanks for the most inspirational blog out there.

  4. Thursday, April 05, 2012

    Nigel

    Thanks for the comments.
    David, I think you are correct. Any analysis conducted without a good understanding of the issue at hand risks being misleading. One of the concerns I have is that "machine learning" makes it possible to identify patterns in data that people then try to interpret, rather than the more classical approach that you propose of developing a hypothesis first and then seeking to test it. Just because a pattern exists does not mean we can easily understand the reasons behind that pattern. 
