It’s now been just over two weeks since the 2016 presidential election concluded, and virtually all of the forecasts were wrong. Polls predicted a small but persistent lead for Hillary Clinton. Even respected forecasters like FiveThirtyEight were predicting, as late as the morning of the election, that she would win. And while she did win the popular vote, Donald Trump ultimately won the election by amassing well over the required 270 electoral votes.
How did the forecasters get it so wrong? And given the close relationship between the techniques that both polling companies and market researchers use, what lessons can market researchers take away from the election?
The Root of all Evil: Sampling Error
The wrong predictions are rooted in many causes, but from a market research perspective, they all boil down to sampling error. What is sampling error? Instead of trying to collect opinions or feedback from everyone, which is obviously not feasible, researchers collect feedback from a smaller subset, or sample, of the population, and extrapolate conclusions from that data. When the sample doesn’t accurately reflect the larger population, researchers are far more likely to draw the wrong conclusions.
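To see how a skewed sample distorts results, consider a toy simulation (every number here is hypothetical, chosen only to illustrate the mechanism). An unbiased random sample lands near the true support rate; a sample in which one candidate’s supporters are less likely to respond lands far from it:

```python
import random

random.seed(42)

# Hypothetical population of 1,000,000 voters; 52% truly support candidate A.
population = [1] * 520_000 + [0] * 480_000

# An unbiased random sample recovers the true rate within a margin of error.
unbiased_sample = random.sample(population, 1_000)
unbiased_estimate = sum(unbiased_sample) / len(unbiased_sample)

# Now suppose A's supporters are only half as likely to answer the phone.
respondents = [v for v in population if random.random() < (0.5 if v == 1 else 1.0)]
biased_sample = random.sample(respondents, 1_000)
biased_estimate = sum(biased_sample) / len(biased_sample)

print(f"True support:      0.520")
print(f"Unbiased estimate: {unbiased_estimate:.3f}")  # near 0.52
print(f"Biased estimate:   {biased_estimate:.3f}")    # well below 0.52
```

No amount of extra respondents fixes the second estimate: a bigger biased sample just converges, with more confidence, on the wrong number.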
How did sampling error play into the missed forecasts? One of the most controversial hypotheses before the election was that many Trump supporters, embarrassed by his positions and rhetoric, declined to identify themselves as supporters to the polling organizations that contacted them: the so-called “Shy Trump” effect. While specifics on the voting data are still coming in, and will be analyzed for decades to come, early results indicate that this hypothesis was correct. Many Trump supporters, especially women, have since told exit polling organizations that they were reluctant to share their support for Trump.
The Very Model of a Modern Major General (Election)
A closely related issue is the model of the voting electorate that the forecasters used. Simply assessing the sentiment of a sample of the general population is not sufficient. Forecasting the outcome of a vote means making assumptions about who will actually make it to the voting booth, and then making sure the sample reflects that makeup. Pollsters, like market researchers, slice the population into actionable segments they can contact, like “soccer moms” or “auto intenders.” With a groundswell of support from groups of voters that were generally underrepresented in most polling models (white males in Midwestern states without a college degree, for instance, but also certain sectors of the Hispanic electorate), it’s no shock that the models got the outcome wrong.
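The impact of a mis-specified turnout model can be sketched in a few lines of arithmetic. In the toy example below (all segment names and numbers are hypothetical, not actual 2016 figures), the survey responses themselves are identical; only the assumed share of the electorate for each segment changes:

```python
# Post-stratification sketch: the same survey responses yield different
# forecasts depending on the assumed turnout of each segment.
# All segment names and numbers are hypothetical.

# Candidate A's measured support rate within each segment of respondents.
support_by_segment = {
    "college_urban":     0.60,
    "non_college_rural": 0.40,
    "everyone_else":     0.50,
}

# The pollster's turnout model: each segment's assumed share of the electorate.
assumed_turnout = {"college_urban": 0.40, "non_college_rural": 0.25, "everyone_else": 0.35}

# What actually happens: the underrepresented segment turns out heavily.
actual_turnout = {"college_urban": 0.35, "non_college_rural": 0.35, "everyone_else": 0.30}

def weighted_support(turnout):
    """Overall support for A, weighting each segment by its turnout share."""
    return sum(support_by_segment[seg] * share for seg, share in turnout.items())

print(f"Forecast with assumed turnout: {weighted_support(assumed_turnout):.3f}")  # 0.515
print(f"Result with actual turnout:    {weighted_support(actual_turnout):.3f}")   # 0.500
```

A shift of a point and a half, driven entirely by turnout assumptions rather than by any change in opinion, is more than enough to flip a close race.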
Sampling error is further compounded by the difficulty of contacting individuals at all. The traditional polling technique was to telephone randomly selected individuals and have a person ask questions. In a time when virtually the entire population had landlines, and could reliably be counted on to answer them, this worked well. But a broad variety of technologies have made simply reaching panelists much more difficult. Landlines have been in decline in favor of cell phones, which marketers are legally prohibited from calling with automatic dialing systems. And call screening makes it that much easier for potential panelists to avoid being contacted.
Not All Bad
Market research is affected by all of these factors. And yet there is cause for optimism. Companies specializing in recruiting panelists from a broad variety of backgrounds have sprung up over the last few years, facilitated by the internet. Even though the panelists from these companies are generally compensated, which introduces its own set of biases, they’ve “raised their hand” and are available to answer questions. This virtually eliminates the “shy” phenomenon.
Similarly, the fact that these panelists have raised their hands greatly reduces the contactability issue. There may be timing to consider (it will always be difficult to get a large number of responses within an hour, for instance), but panel providers generally have contact details, and permission, for their panel members.
Technology makes modeling the desired population easier too. Marketers generally develop highly detailed models of their desired audience. Many panels available for commercial use have deep background data on individual panel members, collected when they sign up or over time, making it much easier to construct a representative panel matching a marketer’s needs.
Market researchers can also use data to refine the survey-taking experience itself. At Veritonic, for instance, we closely monitor the feedback and completion rates on our surveys. Our surveys largely consist of listening and responding to music and similar audio, and panelists tell us they are far more enjoyable than other market research studies they’ve participated in. We constantly think about how to make the experience even better.
The results of the 2016 election should give everyone reason to pause and reflect. But market researchers should not be overly concerned that the missed forecasts require tossing out all of the survey techniques that have been honed over the past hundred years.