Wednesday, December 19, 2012
Why show faces to understand emotion when you can watch them instead?
Qualitative researchers have long had the advantage of being able to watch people’s faces to assess their instinctive reaction to stimuli, and then see whether that reaction matches what they actually say.
But when Millward Brown partnered with Affectiva in January 2012 and integrated Affectiva’s Affdex facial coding software with Millward Brown’s Link™, this capability was extended to quantitative pre-testing. The innovation, known as Link™ + Facial Coding, was recognized last week with the MRS/ASC award for Technology Effectiveness.
A couple of times in recent years, I have been asked why we do not show respondents pictures of facial expressions to help them identify how they feel. In reality, there is little to be gained by showing a picture: whether you use pictures or words, the response you get is still a cognitive one. Respondents have to recognize the emotion as presented and then judge whether they feel the same way or not. And, of course, you get an ex-post evaluation, not an ongoing assessment. Particularly for diagnosing response to video advertising, it is far better to watch people’s faces and assess what emotion they are feeling over time.
The coding of facial expressions has long offered a potential solution to this need, but it has been a labor-intensive, manual task, which has limited its application in market research. With the advent of Link™ + Facial Coding, that situation has changed: we can now directly assess emotional (and cognitive) responses without relying solely on verbal report, and all that is required is a webcam and the respondent’s consent to be filmed.
This means that biometric measures can be gathered from the same people who complete surveys, at quantitative scale, with minimal impact on the survey experience (a distinct limitation of many neuromarketing techniques). It also yields measures that are intuitive for clients (e.g., ‘Did people smile when they should?’) and that can be readily combined and cross-analyzed with established survey metrics. For instance, we can use the data to understand the emotional triggers in a campaign, or the moments at which the audience switches off. It also highlights which creative devices work well in a given market, or which claims are more powerful than others. All of this informs ad edits, wear-in and wear-out, and campaign longevity, and ultimately allows clients to ensure repeatable success in their marketing.
The use of Affectiva’s Affdex technology was pioneered in 2011 and introduced to our clients as Link™ + Facial Coding earlier this year. Since then, over 350 facial coding tests have been conducted, the fastest uptake of a new method by Millward Brown clients in recent history. Response to the results has also been positive. Here is an endorsement from one of our clients in India:
“In one recent case, viewers’ survey responses to a new and challenging creative device were mixed. Analyzing the facial responses of viewers allowed us to see that some post-rationalization was occurring on the survey, and that a positive response to the advertising idea underlay respondents’ answers. That insight is helping us develop the creative further, with changes highlighted by both the survey and the facial metrics. By combining both methods, we generated more insightful and usable findings.”
Another advantage of the technique is that it is eminently scalable. Provided there is Internet access, a test can be run anywhere, a fact borne out by upcoming tests in India, China and Nigeria.
Congratulations to Graham Page and the Consumer Neuroscience Practice for bringing us a great innovation and a well-deserved award.