June 2016
GETTING DEEPER “IN THE MOMENT”
06/06/16 14:30 Filed in: Quantitative Research
Anne-Marie McCallion from the numbers lab talks about getting more from quantitative studies
Clients constantly challenge their agencies to go further and provide insights that genuinely get to what is on the minds of their customers. As one recent client brief so eloquently put it, “we don’t want you to spend time regurgitating our objectives back at us, and talking about sample structure, but want you to provide a methodology that is going to capture attention”. While tried-and-tested methodologies still form the core of what agencies do to get to the answer, new and exciting technologies that can enhance them are becoming more prevalent.
The best of these new approaches share a common theme – the ability to collect “in-the-moment” feedback. The idea is simple but, when executed well, it allows us to deliver deeper, real-world insights to our clients, with recommendations that steer them forward.
Some of the digital research technologies we have found most valuable include:
1. Adding a video element to our studies: Building a video element into online surveys elevates feedback beyond the typical open-text verbatim comments and increases engagement for respondents, improving the quality of what they tell us. Bringing the faces of consumers into the room at a client debrief brings the findings to life and does the same for stakeholders. This technology can be integrated into everyday tracking studies to gain rich brand insight, or into ad testing pieces to collect live, in-the-moment responses.
2. Using facial expressions to accurately predict success: Going one step further than video within surveys, facial coding allows us to read the emotions of survey respondents in the moment. While typical survey diagnostics collect feedback post-viewing, facial coding goes deeper, pinpointing the initial emotional connection respondents have with a piece of stimulus. This, in turn, allows us to understand reactions respondents will not, or cannot, vocalise (a brief illustrative sketch of how this can work follows this list).
3. Collecting passive data: We often expect too much of our respondents. Amid increasing market and advertising clutter, both spontaneous and prompted recall are difficult. Discreetly collecting passive data (with permission, of course) from respondents’ laptops and devices gives us the ability to measure actual behaviour and deliver more robust insight to our clients.
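To make point 2 a little more concrete, here is a minimal, purely illustrative sketch, assuming frame-level emotion scores from some webcam-based facial-coding classifier (the data and the aggregation are invented for the example, not a description of any specific tool). It rolls per-frame scores up into a second-by-second timeline so the initial reaction to a piece of stimulus can be pinpointed:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical frame-level output from a facial-coding classifier:
# one record per video frame with a timestamp (seconds) and emotion scores.
frames = [
    {"t": 0.2, "happiness": 0.10, "surprise": 0.05},
    {"t": 0.7, "happiness": 0.15, "surprise": 0.40},
    {"t": 1.3, "happiness": 0.55, "surprise": 0.30},
    {"t": 1.8, "happiness": 0.60, "surprise": 0.10},
]

def emotion_timeline(frames, emotion):
    """Average one emotion's score for each whole second of viewing."""
    buckets = defaultdict(list)
    for frame in frames:
        buckets[int(frame["t"])].append(frame[emotion])
    return {second: mean(scores) for second, scores in sorted(buckets.items())}

timeline = emotion_timeline(frames, "happiness")
peak_second = max(timeline, key=timeline.get)
print(timeline)                                    # {0: 0.125, 1: 0.575}
print(f"Strongest happiness response around second {peak_second} of the stimulus")
```

In practice the value usually comes from overlaying a timeline like this on the stimulus itself, so a debrief can show exactly which moment triggered the reaction.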
The potential benefits are clear, but such technology should be approached with care. Using additional methodologies for the sake of it doesn’t help anyone and only serves to mismanage expectations (and budget) in the minds of your clients. The challenge lies in curating a methodology that uses the best of the traditional methods alongside carefully selected, complementary techniques, moving beyond stated claims towards actual customer feedback.
Coupling this approach with statistical analysis tools such as conjoint, MaxDiff or Kano analysis means we can place the onus on our analysis of the data, lightening the cognitive load on survey respondents. By doing this, we allow them to answer honestly rather than having to work through complex trade-offs to reach the answer they think is right.
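As a minimal illustration of what placing the onus on the analysis can look like, here is a simple count-based MaxDiff scoring sketch (the features and responses are invented, and a real study would typically fit a logit or hierarchical Bayes model rather than relying on raw counts). Respondents only ever say which item matters most and least within each small set; the relative importance ranking falls out of the analysis rather than out of their heads:

```python
from collections import Counter

# Invented MaxDiff responses: each task shows a handful of features and the
# respondent picks the one that matters most ("best") and least ("worst").
responses = [
    {"shown": ["price", "speed", "design", "support"], "best": "price",   "worst": "support"},
    {"shown": ["price", "battery", "design"],          "best": "battery", "worst": "design"},
    {"shown": ["speed", "battery", "support"],         "best": "battery", "worst": "support"},
]

best  = Counter(r["best"] for r in responses)
worst = Counter(r["worst"] for r in responses)
shown = Counter(item for r in responses for item in r["shown"])

# Simple count-based score: (times picked best - times picked worst) / times shown.
scores = {item: (best[item] - worst[item]) / shown[item] for item in shown}
for item, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{item:8s} {score:+.2f}")
```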
After all, isn’t that why we are client partners in the first place?
Anne-Marie McCallion is Associate Director at the numbers lab. She loves a challenge and believes that research is not a one-size-fits all exercise. She is an expert at reading between the lines and has a keen eye for detail.
www.thenumberslab.co.uk
POLLS GOT IT WRONG (AGAIN) BUT DON’T LOSE FAITH IN QUANTITATIVE RESEARCH
06/06/16 14:16 Filed in: US Elections
Jim Mann from the numbers lab looks at the results from the US elections
Like many, I woke unusually early on Wednesday and reached nervously for my mobile phone. It was US election night and I was eager to see if, from my perspective, crisis had been averted or the world really had gone mad. Before I had a chance to tap my favourite news app I noticed a message from my brother: ‘Another resounding victory for the polls bruv!’ Detecting sarcasm (I’m smart like that) I knew this could only mean one thing. Sure enough, Trump was well on course to a victory that nobody, least of all the pollsters, was anticipating. For the third time in eighteen months (following the UK general election and EU referendum) the pollsters had got it wrong!
In the period since May 2015, I’ve had countless debates with polling sceptics like my brother. His fiercely articulated view is that polling is not simply inaccurate; it also has the potential to sabotage itself. He’s not alone in this belief. Behavioural economics shows that people generally wish to follow the herd, so a poll showing that the majority think in a particular way is likely to influence, albeit subtly, what they themselves believe. Furthermore, there are those who cite the possibility that polls could affect voter turnout. After all, why bother to turn out to vote if the polls have created a strong belief that your favoured candidate is either assured of victory or has no chance of winning?
Polling, when first popularised by George Gallup in the 1930s, was hailed for the positive contribution it made to the democratic process. Gallup himself was, understandably, steadfast in this belief. Elmo Roper, another pioneer of the public opinion poll, described it rather hyperbolically as “the greatest contribution to democracy since the introduction of the secret ballot”. But there have always been critics, and the anti-polling arguments inevitably gain traction when the pollsters get it wrong. Failure is not a modern phenomenon either. Immediately prior to the 1948 election, George Gallup predicted that Dewey would beat Truman and stated, unwisely as it turns out, “We have never claimed infallibility, but next Tuesday the whole world will be able to see down to the last percentage point how good we are”. Dewey lost. The anti-polling lobby had a field day.
So criticisms of polling aren’t new and, let’s be honest, they would remain niche concerns if the polls were accurately predicting results. But they’re not, and on the back of a series of high-profile failures it’s increasingly common to deride polling as a “devalued pseudo-science conducted by charlatans”. Yep, my brother again. I hate to give him the last word so, to provide a flavour of wider opinion, I’ll quote the Guardian’s post-election editorial instead: “The opinion polls and the vaunted probability calculus rarely trended in his (Trump’s) direction; both are discredited today.”
The purpose of this blogpost is not to defend political polling; I have my own concerns in that direction and it’s undeniable that the work of pollsters is becoming harder, due to a combination of methodological issues and a more fluid, less predictable, political landscape. However, for the sake of fairness I’d like to mention two things, neither of which is intended to exonerate the practice.
First, most polls reflect public sentiment within a nationally representative sample. In the main, but not exclusively, the polls conducted immediately prior to the election found that, by a relatively small margin, more Americans intended to vote for Clinton than for Trump. In this they were correct. At the time of writing, the figures show that 59,814,018 Americans voted for Clinton whilst around 200,000 fewer (59,611,678) voted for Trump. However, due to the distribution of votes and the vagaries of the US electoral system, this translated into 279 Electoral College votes for Trump and 228 for Clinton.
Second, most polls conducted by reputable polling organisations produced figures that placed the eventual result well within the margin of error. “What’s that?” I hear you ask. Well, tucked away at the end of most reports based on a public opinion poll will be a small note about the margin of error. This margin differs depending on the number of people interviewed but, for a standard sample size of 1,000, it is roughly +/- 3% at the conventional 95% confidence level. This essentially means that if a poll shows Clinton projected to win 47% of the vote, the reality is likely to be somewhere between 44% and 50%. Within this context, the result of the election was well within the margin of error of most polls. It wasn’t so much the polls that got it wrong as the reporting of the polls, which failed to stress sufficiently that the result really was too close to call. But people don’t like uncertainty, so these boring statistical caveats tend to get overlooked.
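For readers who like to see where such figures come from, here is a back-of-the-envelope sketch using the standard approximation for a proportion from a simple random sample at the 95% confidence level (illustrative only; real polls apply weighting and design effects that alter these numbers):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate margin of error for a proportion from a simple random sample
    at the 95% confidence level (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for a sample of 1,000 respondents:
print(f"{margin_of_error(0.5, 1_000):.1%}")   # ~3.1%
# Quadrupling the sample only halves the margin of error:
print(f"{margin_of_error(0.5, 4_000):.1%}")   # ~1.5%
```

Note that quadrupling the sample only halves the margin of error, which is one reason extra precision gets expensive quickly.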
OK, but I said this blog wasn’t designed to defend polling. So what is it about? Well, I don’t feel the need to defend polling because I’m not a pollster. However, I am a market researcher working with quantitative surveys, and what concerns me is that growing scepticism around polling will erode trust in all forms of numbers-based research into public attitudes. Maybe I’m just a worrier and people are perfectly able to distinguish between different forms of survey-based research. However, my own experience suggests that isn’t always the case.
In May 2015 I was working at the Guardian. The Guardian has invested significantly in data journalism over recent years, and coverage and analysis of polls were given a high degree of prominence in the run-up to the UK general election. At the editorial conference held the day after the election, the mood was subdued. When the conversation turned to the failure of the polls, some journalists questioned the prominence given to polling numbers, especially as those numbers didn’t chime with their instincts and the evidence of their own on-the-ground experience. The upshot was a policy decision, only recently reversed, that editorial coverage of polling should be suspended. The coverage of polls in the run-up to the US election was reported under the banner ‘Sceptical polling’, which gives a pretty good indication of the mood around the organisation.
As Head of Consumer Insight at the Guardian, a key element of my role was to advocate for the use of consumer research and promote evidence-based strategic decision-making. My internal clients ranged along a spectrum from research enthusiasts to rejecters. The latter group, a minority it should be said, believed there was little to gain from engaging with research. The great polling disaster of 2015 provided them with a tailor-made reason to disengage. After all, research had been shown, in the most public way imaginable, to be unreliable and wrong! Hadn’t it?
I’m sure the Guardian is like most organisations in having research stakeholders ranging from enthusiasts to sceptics. To the latter group I would make this plea: don’t conflate political polling with other forms of quantitative market research, and don’t deny yourself and your business an incredibly powerful, consistently proven aid to decision-making simply because political polling has been shown not to be a perfectly accurate crystal ball. As mentioned, polling isn’t quite as inaccurate as some would have you believe. Furthermore, the stakes are simply much higher for polling: a couple of percentage points either way (generally within the margin of error, remember) is the difference between two diametrically opposed outcomes and the profound repercussions that follow. In contrast, if a representative survey of consumers in a particular sector suggests that awareness of your brand stands at 34% whilst that of a competitor is 64%, does it really make a huge difference to the decisions your company will take if the reality is a couple of percentage points either side?
Of course, some decisions do require a higher degree of accuracy. In these instances, market researchers have two huge advantages over pollsters. We can increase the number of people interviewed in a study, thus reducing the margin of error at any given confidence level. We can also utilise robust sampling techniques such as random probability sampling. Generally speaking, neither of these options is available to pollsters because they are simply too time-consuming. Pollsters are required to provide an almost instantaneous reading of public sentiment, before new events have a chance to change it, and anything that slows that process is, by necessity, discarded. If pollsters were given the freedom to use these tools, it’s likely they would provide far more accurate predictions. How do we know? Following the 2015 general election, most polling companies conducted re-contact surveys with pre-election poll respondents to try to understand what went wrong. They discovered that, even when conducting post-event research, they were unable to accurately replicate the result. The inquiry commissioned by the British Polling Council concluded that the reason was the use of (attitudinally) unrepresentative samples drawn from panels, and that a random probability sampling approach (one that gives every member of a target population an equal chance of participating in the study) would counteract the problem. Tellingly, the survey that best replicated the election result was the British Social Attitudes (BSA) survey conducted by NatCen Social Research. Need I say that the BSA is based on a large sample (around 3,000) and utilises random probability sampling?
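To make the sampling point concrete, here is a purely illustrative simulation (the population, the 5% panel-membership rate and the attitudinal skew are all invented for the example, not figures from the inquiry). If panel membership is correlated with the attitude being measured, a panel-based estimate stays biased no matter how many interviews are conducted, whereas a random probability sample converges on the true figure:

```python
import random

random.seed(42)

# Invented population of 100,000 people: around 52% hold attitude A overall,
# but (by assumption) people who join online panels over-index on attitude A.
population = []
for _ in range(100_000):
    on_panel = random.random() < 0.05            # 5% are panel members
    p_attitude = 0.62 if on_panel else 0.515     # panellists skew towards attitude A
    population.append((on_panel, random.random() < p_attitude))

true_rate = sum(holds for _, holds in population) / len(population)

# Random probability sample: every member of the population has an equal chance.
prob_sample = random.sample(population, 1_000)
prob_estimate = sum(holds for _, holds in prob_sample) / len(prob_sample)

# Panel sample: drawn only from panel members, however many interviews you buy.
panel = [person for person in population if person[0]]
panel_sample = random.sample(panel, 1_000)
panel_estimate = sum(holds for _, holds in panel_sample) / len(panel_sample)

print(f"True rate:                 {true_rate:.3f}")
print(f"Random probability sample: {prob_estimate:.3f}")   # close to the true rate
print(f"Panel-only sample:         {panel_estimate:.3f}")  # biased upwards
```

Bigger panel samples simply give a more precise estimate of the wrong number; only changing the sampling mechanism removes the bias.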
I’ve rambled on too long and exceeded my word count limit by a distance, so I’ll finish by saying this: the great jazz musician Duke Ellington (or possibly Richard Strauss, it’s disputed) is quoted as saying “there are only two types of music: good and bad”. Market research is much the same. When done properly it is an incredibly powerful diagnostic and forecasting tool that can provide a highly accurate picture of consumer sentiment as it currently exists. Pollsters, through no fault of their own, are sometimes denied the conditions to do it properly.
Researchers, however, can and do. We hope you can see from this that the numbers lab falls into the former (good) group so, if you want our thoughts on anything, please just get in touch.