After the 2020 Election, Polling Is Dead

Whatever the final outcome of the election, we know one thing for sure: the pollsters screwed up royally. And the heyday of celebrity pollsters seems to be coming to an end.

A poll worker holds a "secrecy folder" used to conceal a voted ballot from view in a polling place at Bloomfield United Methodist Church on November 3, 2020 in Des Moines, Iowa. (Mario Tama / Getty Images)

After the debacle of the 2016 election — when polls were off by an average of 5 points — polling institutions revised their methods. Some started dialing more cell phones. Others shifted focus toward swing states. Many began weighting responses by education. “Perhaps, after four years of hand-wringing, the polls will show they were all right after all,” FiveThirtyEight surmised after conducting its own survey of leading pollsters less than a month before the 2020 general election.

So far, the error rate looks the same. Bookmakers, looking at the polls, put their money on Democrats taking back the Senate. That now seems vanishingly unlikely. Polling outfits divined a blue wave that would expand the Democratic majority in the House. The Democrats lost seats.

If 2016 didn’t prove it, 2020 certainly did: the polling industry is in crisis.

While pollsters and pundits scramble to find the next fix — a new method for weighting demographics, an automated voice to coax “shy” Republicans — the real problem is that every year, fewer people pick up calls from unknown numbers. In 2019, Pew Research reported that the long slide in response rates had hit a new all-time low: six responses for every one hundred contacts. Commercial pollsters are thought to fare even worse, but they decline to publish their figures. The pool of people willing to answer has become so small that it fundamentally skews the data polls return. Surveyors know this well and are doing what they can to compensate, but the truth is that random-selection polling methods have been growing obsolete for decades.
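To see what that skew looks like in practice, here is a minimal simulation sketch in Python, with numbers invented for illustration rather than taken from any real poll: when only about six in a hundred contacts respond, and the people who respond differ even modestly from the people who don't, the estimate misses the mark no matter how many numbers get dialed.

```python
import random

# Minimal sketch with hypothetical numbers: simulate a phone poll where only
# ~6% of contacts respond, and willingness to respond correlates with
# candidate preference. The bias does not shrink as you dial more numbers.

random.seed(42)

N_CONTACTS = 100_000
TRUE_SUPPORT = 0.50        # assume the electorate is split 50/50

responses = []
for _ in range(N_CONTACTS):
    supports_a = random.random() < TRUE_SUPPORT
    # Assumed differential nonresponse: supporters of candidate A answer
    # pollsters at 7%, supporters of candidate B at only 5%.
    response_rate = 0.07 if supports_a else 0.05
    if random.random() < response_rate:
        responses.append(supports_a)

estimate = sum(responses) / len(responses)
print(f"respondents: {len(responses)} of {N_CONTACTS} contacts")
print(f"estimated support for A: {estimate:.1%} (true value: {TRUE_SUPPORT:.0%})")
# With these assumed rates the poll reads roughly 58% for candidate A despite
# a 50/50 electorate -- an error that extra dialing cannot fix.
```

The error here comes entirely from who answers, not from how many are asked, which is why larger samples can't rescue a broken sampling frame.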

Even with good data, predicting an election using only polling data is like predicting the weather using only balloons. You might be able to tell which way the wind is blowing and put together a decent barometric profile, but you’ll see little of the wider precipitation patterns, air fronts, and topographic features. Meteorologists today complement balloon soundings with Doppler radar, satellite imagery, automated surface observing systems, and a deep record of how similar atmospheric conditions have played out before.

Political analysts have access to myriad metrics that could offer a more complete sense of the atmosphere around candidates and policies. They could pair polling data with publicly available information on donors, volunteers, rally attendance, and more. Yet they hyper-fixate on polls, covering each sample as it’s published, rolling tickers at the bottom of the screen, and even using them as criteria for debate participation.

It’s true that by certain metrics, polls can be reliable. Pollsters have pointed out that 2016 predictions landed near the average error for the previous forty-four years (a favorable way, it should be said, of describing a data set that shows polling is about as accurate as it was in the 1990s). In 2012, analyst Nate Silver reminded critics that he called every state correctly in the general election.

But this election revealed that 2016 was not an anomaly. The heyday of celebrity poll seers is coming to an end. To be clear, that’s not because the stats analysts or the surveyors are bad. It’s because the data are.

The primary reason is that, while polling samples are randomly drawn, the people who actually end up in them aren’t random at all. If it’s a phone poll, it only includes people

  1. with a phone
  2. who picked up, and
  3. who took the time to take the survey.

(It’s also common for surveys to include a disproportionate share of landlines because they’re cheaper to call.)

If it’s a web poll (an increasingly popular alternative), it’s limited to people

  1. with internet access
  2. who responded to an invitation or ad, and
  3. who took the time to take the survey.

This narrowing sample pool is not only nonrandom, it’s nonrepresentative. It skews toward white, elderly, and English-speaking voters, all demographics that are becoming less electorally consequential. The lack of Spanish-language outreach, in particular, has dogged election polling since its inception, and Spanish-speaking voters now appear to be one of the largest unforeseen factors in 2020.

Pollsters have responded to this quandary with new statistical methods that weight results demographically. If a sample includes a smaller proportion of a given minority group than exists in the electorate, those participants’ responses are multiplied to reflect the share of the population they represent. If you’re a young black man who responds to a poll, you are, in effect, assumed to speak statistically on behalf of your whole race and generation. Weighting can increase the accuracy of polls, but it can also amplify anomalies, and it’s no substitute for truly representative sampling or robust multi-method data sets.
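As a rough sketch of how that weighting works (the groups, shares, and support figures below are invented for illustration, not drawn from any pollster's actual model), each group's responses are scaled by the ratio of its share of the electorate to its share of the sample:

```python
# Rough illustration of demographic weighting with invented numbers.
# Each group's responses are scaled so the sample's mix matches the
# electorate's; a handful of people in an underrepresented group end up
# carrying a large share of the final estimate.

# Hypothetical shares of the electorate vs. shares of the raw sample.
population_share = {"white_65_plus": 0.20, "young_black_men": 0.05, "everyone_else": 0.75}
sample_share     = {"white_65_plus": 0.40, "young_black_men": 0.01, "everyone_else": 0.59}

# Hypothetical candidate support observed within each group of respondents.
observed_support = {"white_65_plus": 0.40, "young_black_men": 0.85, "everyone_else": 0.55}

# Unweighted estimate: just average the raw sample as collected.
unweighted = sum(sample_share[g] * observed_support[g] for g in sample_share)

# Weighted estimate: multiply each group by population share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}
weighted = sum(sample_share[g] * weights[g] * observed_support[g] for g in sample_share)

print(f"unweighted estimate: {unweighted:.1%}")
print(f"weighted estimate:   {weighted:.1%}")
print(f"weight on each young black respondent: {weights['young_black_men']:.0f}x")
```

Note how the one percent of respondents in the underrepresented group end up counted five times over: a few atypical answers there can move the whole topline, which is exactly the amplification problem described above.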

Yet even as these shortcomings become apparent, polling coverage seems to become even more popular. Why?

One reason may be the profit motive in media. To sell political entertainment, elections need to be exciting. Network news stations use polls to keep the race dynamic. It’s like a game show or, as they say, a horse race: public opinion appears to fluctuate wildly, and anyone can come out on top. Covering polls also happens to be easy, and it creates consistent content.

Another reason may be that polls tell media executives what they want to hear and, more important, what they want you to hear. News networks like ABC, NBC, and CNN are owned by media monopolies: Disney, Comcast, and Time Warner, respectively. Polling data can be used not only to measure but to manufacture consent around establishment candidates, constructing concepts like electability. Polling appears to be losing its mystique in the public eye . . . at least, that’s what the polls say.

Polling is dying — not because of institutional bias or data misinterpretation, but because of social factors driving the gradual decay of its experimental design. “I’m concerned that the industry may be fighting the last war,” Dr. Lee Miringoff, director of the Marist College Institute for Public Opinion, confessed to FiveThirtyEight in October.

Despite these patterns, many pollsters remain convinced that this year was just another minor embarrassment and that all they need is to patch a few more statistical holes to get polling back on track. It might even work for a while. But it won’t forever. They’re still sending weather balloons into a sunny Nebraska sky, backs turned to the opaque, tumultuous supercell of uncertain electoral futures forming on the horizon.