
Perspectives | Issue 6

Navigator’s folio of ideas, insights and new ways of thinking

Poll Vaulting: The future of political campaign research

January 1, 2017
Anne Kilpatrick | Managing Principal

In the aftermath of the U.S. election, pollsters are facing some tough scrutiny about their relevance. But not all polls are created equal, notes Anne Kilpatrick.

One of many unexpected outcomes of the 2016 U.S. presidential campaign was the failure of most research polls to accurately predict the election of Donald Trump.

This was all the more striking because, in an intensely emotional political race characterized by a deliberate disregard for truth, polls offered a familiar reference point. In the face of chaos, the enduring ability to measure, analyze and process data was one of the few ways to superimpose order.

Furthermore, in a frantic scramble to meet a rapacious demand for new information about the campaign, the media put forward any and all polls.

But not all polls are created equal.

We know that the voter population has become increasingly fragmented, and, in the aftermath of the U.S. election, critics point to pollsters’ failure to address this fragmentation as a driver of their seeming inability to get it right. The criticisms focus on non-representative sampling of potential voters: the voter population is more culturally diverse, and pollsters are under-representing or over-representing certain ethnic or racial groups in their polling; meanwhile, the wider set of channels by which pollsters must now reach voters (landline telephones, cellular phones and online) makes it harder to assemble a representative sample. These challenges apply to any election or referendum polling, and a lack of rigour in addressing them may indeed be part of why some pollsters just aren’t getting it right.
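The standard remedy for a non-representative sample is post-stratification weighting: each demographic group's responses are re-weighted so the sample's composition matches the known population. A minimal sketch, with entirely invented group names, shares and support figures chosen only to illustrate the arithmetic:

```python
# Hypothetical population vs. sample composition (all numbers invented).
population_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}
sample_share     = {"group_a": 0.72, "group_b": 0.18, "group_c": 0.10}

# Weight = how much each group's answers must be scaled up or down
# so the weighted sample mirrors the population.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Raw candidate support observed within each sampled group (invented).
raw_support = {"group_a": 0.44, "group_b": 0.58, "group_c": 0.61}

# Unweighted estimate simply mirrors the (skewed) sample composition...
unadjusted = sum(sample_share[g] * raw_support[g] for g in raw_support)

# ...while the weighted estimate reflects the population composition.
adjusted = sum(sample_share[g] * weights[g] * raw_support[g] for g in raw_support)
```

Here the over-sampled group with lower candidate support drags the unweighted estimate down; weighting corrects the distortion. Weighting only helps, of course, if the pollster can reach enough respondents in each group to measure its support reliably in the first place.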

Those pollsters who were able to project a Trump victory did not rely solely on traditional questions about voting intentions and previous voting behaviour, or on demographic characteristics, to project outcomes (e.g., likely voting intention if an election were held today; favourability ratings of a candidate; previous voting behaviour; gender, income, education, religion and race). While these questions and characteristics often point to a potential outcome, they do not capture the ultimate insight—voter sentiment. And sentiment is the key word here.

The polls that appear to have been closer to the mark delved into voter sentiment by examining engagement with candidates and how that affected voters’ likelihood to turn out at the ballot box. These pollsters actively sought to measure the impact of the “undecideds”—the almost 15 per cent of voters who likely determined the outcome of the election. These voters helped make voter turnout patterns unpredictable, including the greater than expected turnout of Trump supporters in swing states, and lower than expected turnout among groups that were expected to vote for Hillary Clinton.

Who were these “undecideds”?

We now know they included people who:
• rode an emotional roller coaster during the campaign and were affected by the many twists and turns;
• were ambivalent, did not see themselves reflected in either candidate and were more likely to stay at home than cast their ballot;
• were engaged in the election, but were having difficulty assessing how the candidates would address their more personal concerns; and
• were affected by social desirability bias, the tendency of survey respondents to answer questions in a manner that will be viewed favourably.

Successful pollsters are those who tap into the significant ambivalence or conflicted feelings that some voter segments experience, by introducing non-traditional survey questions and methodologies. One such poll, the USC Dornsife/LA Times Presidential Election “Daybreak” poll, engaged a longitudinal panel of voters and asked them a different set of questions than other surveys did. It asked voters to estimate, on a scale of 0 to 100, how likely they were to vote for each of the two major candidates or for some other candidate. Rather than forcing respondents into an either/or vote position, the pollsters were able to obtain a more nuanced view of the undecideds (and the “decideds”) by measuring the level of engagement each voter had with the candidates. This approach has served USC Dornsife well in the past two U.S. elections, accurately predicting both outcomes.
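One simple way to aggregate such 0-to-100 likelihood answers is to treat each respondent's scores as fractional votes: normalize them to sum to one, then average across the panel. This is a sketch of the general idea only, not the Daybreak poll's actual estimator, and the responses below are invented:

```python
# Hypothetical panel answers to "how likely are you to vote for X?"
# on a 0-100 scale (all numbers invented for illustration).
responses = [
    {"cand_a": 90, "cand_b": 5,  "other": 5},
    {"cand_a": 40, "cand_b": 45, "other": 15},  # a conflicted "undecided"
    {"cand_a": 10, "cand_b": 80, "other": 10},
    {"cand_a": 55, "cand_b": 50, "other": 0},
]

def expected_share(responses, candidate):
    """Average each respondent's normalized likelihood for `candidate`."""
    total = 0.0
    for r in responses:
        s = sum(r.values())
        # Each voter contributes fractionally rather than being
        # forced into a single either/or choice.
        total += r[candidate] / s if s else 0.0
    return total / len(responses)

share_a = expected_share(responses, "cand_a")
share_b = expected_share(responses, "cand_b")
```

The conflicted respondent in the second row contributes roughly 0.4 to one candidate and 0.45 to the other, instead of being dropped or arbitrarily assigned—exactly the nuance a forced-choice question discards.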

The lesson for pollsters is that their models and approaches must evolve to meet the new challenges of political polling in the 21st century. And they also need a crash course in managing expectations.


About the author:

Anne Kilpatrick | Managing Principal

Anne Kilpatrick has over 25 years’ experience harnessing traditional and innovative market research techniques among both general public and stakeholder groups to uncover insights that assist clients in developing policy, program and communication responses to their strategic challenges. Anne has extensive experience developing and executing qualitative and quantitative research programs for longstanding association clients, clients in the financial services sector, federal and provincial regulatory arenas, as well as the health care and not-for-profit sectors.

Anne is a highly seasoned qualitative researcher, with particular skills in the design and application of innovation techniques within group discussion formats and in focus group moderation. She often employs advanced projection techniques while moderating focus groups and innovation sessions. Having undertaken advanced training at the Creative Problem Solving Institute in Buffalo, New York and The Burke Institute in Minnesota, she is conversant with numerous projective techniques that allow research participants to more easily articulate the emotional and intangible considerations that underlie behaviour and attitudes.

Anne has extensive expertise in quantitative research including service and program evaluation, corporate image and image positioning, communications testing, market segmentation and employee engagement. She is adept at utilizing quantitative analytical techniques including factor analysis, cluster analysis, conjoint analysis, perceptual mapping, and multiple regression.

Anne works with regulatory bodies to explore stakeholder evaluations of new regulatory initiatives, perceptions of regulatory supervision, and emerging issues that may positively or adversely affect regulated entities in the marketplace.

She has worked with a cross-section of financial service companies to develop and implement strategic research in support of the development of new brand positions, communications campaigns, and product launches. Beyond these point-in-time studies, Anne works with financial services clients to interpret and act on longstanding studies that delve into changing trends in consumer behaviour and attitudes on issues as diverse as payment technologies, retirement savings and financial literacy.

Prior to joining Navigator, Anne was a partner at a national public opinion and market research firm with offices in Toronto and Ottawa.
