Politics & Law

Election polls: Why are they always wrong?

09 November, 2022

They’re not! But the digital age is straining traditional polling methods

By Mike Jewell
 

“B****r the pollsters!” (New Zealand Prime Minister, Jim Bolger, 1993)

‘Tis the season to be voting and where there are elections, there are opinion polls. These account for just 7% of global spending on market research, but they are the public face of the entire insights industry. I’ll freely admit that when I started my career in market research, opinion polls were the only type of research I had ever heard of, but, throughout my 40 years in the business, I never touched a single one. And for that, I am extremely grateful!

Attempting to predict the voting intentions of fickle electorates is just a minefield. The search for a foolproof research method has been going on for a long time and many great minds have devoted themselves to solving the conundrum over the past century with varying degrees of success, but still the soul-searching and head-scratching goes on, especially when the actual results are at odds with the published predictions.

The first opinion poll is generally considered to have been held in 1824 when a local newspaper conducted a survey in Wilmington, Delaware, asking voters their opinions on the upcoming presidential election. The newspaper correctly predicted victory for Andrew Jackson.

This type of relatively unscientific polling continued throughout the 19th century and inevitably the accuracy of the results varied considerably. Gradually, slightly more rigour was introduced into the process, with The Columbus Dispatch the first to use trained interviewers to gather information, and to seek a balance of age and occupation among those surveyed.

In 1916, The Literary Digest correctly predicted Woodrow Wilson's election as president from a national mail out of millions of postcards and a simple count of the returns. The publication also correctly predicted the subsequent victories of Warren Harding, Calvin Coolidge, Herbert Hoover, and Franklin Roosevelt.

However, it got things spectacularly wrong in 1936, when its survey of 2.3 million voters suggested that Alf Landon would beat out the incumbent Franklin Roosevelt. Instead, FDR was returned by a landslide. Unlike The Literary Digest, George Gallup's smaller, but more scientifically based survey projected a win for Roosevelt and the reputation of the ‘Gallup Poll’ was set. The Gallup Organization continues to thrive today, while The Literary Digest went out of business soon after.

Gallup’s method involves careful sampling of a randomly selected, statistically average group of people, a principle which continues to serve as the template for most opinion polls today. However, the method is not infallible, as Gallup (and most other pollsters) found to his cost in 1948, when Harry S Truman’s presidential election win was completely counter to the opinion polls, leading to the famous photo (at the top of this article) of a triumphant Truman holding aloft a newspaper which pronounced Thomas Dewey the new president, having gone to press before all the votes had been counted. Oops!

Following the US’ lead, polling was adopted in many countries, with similar results – a mixed bag of successful predictions and calamitous failures. While I have been plying my trade in the relatively calm waters of commercial research, my pollster cousins in the UK have come unstuck a few times, most memorably with the national referendum over Brexit, where only 16 of the 168 opinion polls conducted correctly pointed to the eventual 52:48 vote in favour of the UK leaving the European Union. This error was bookended by polling that was also wide of the mark in the two general elections either side of the referendum, in 2015 and again in 2017.

Then there was the federal election in Australia in 2019 and the US presidential elections in 2016 and 2020, all of which turned out differently than the polls had predicted. The most recent presidential election in Brazil was a far, far closer race than the polls had suggested.

Each time the polls misfire, pollsters come in for a barrage of criticism and ridicule and they retire to lick their wounds, dissect their processes and seek to identify what went wrong, in the hope of being able to put things right ahead of the next round of elections.

In 2016 in the US, pollsters realised they had underestimated the Donald Trump vote because they did not pay sufficient attention to the education divide, so they were able to offer specific recommendations to fix the problem. However, following the least accurate polls for 40 years in 2020, the American Association for Public Opinion Research’s post-mortem was unable to determine with any certainty the reasons for the inaccuracies. Hmm, doesn’t exactly inspire confidence in opinion polls.

The case for the Defence
But we need to look further into the nature of opinion polls before we condemn them out of hand. The first point to make is that the vast majority of polls over the last 30 years have produced results that match the outcome of the actual election, based on analysis of some 30,000 individual polls undertaken by my former company, Kantar. Even in the closest races, 88% of surveys called the correct result.

Unfortunately, the other 12% are what the public, the media and aggrieved politicians focus on, fuelled by a wholly unreasonable belief that poll results are absolute, with no allowance for the fact that they come from sample surveys. The prevailing view is that opinion polls, like weather forecasts, are always wrong; the reality is far from it.

Anyone who has any familiarity with any form of market research knows that, even with the most stringent sampling and standards of interviewing, there is inevitably a margin of error in the results. This is the reason why reputable surveys usually carry a health warning, like this from YouGov in the UK, “there is a 9 in 10 chance the true value of a [political] party’s support lies within 4 points of the estimates provided by this poll”.
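As a back-of-the-envelope illustration (a sketch only, assuming simple random sampling and a worst-case proportion of 50% – real pollsters adjust for design effects and weighting, which YouGov's published figure will reflect), a ±4-point band at 90% confidence corresponds to an effective sample of roughly 420 respondents:

```python
# Effective sample size implied by a stated margin of error,
# assuming simple random sampling and worst-case p = 0.5.
def effective_sample_size(moe, p=0.5, z=1.645):
    # z = 1.645 is the normal critical value for 90% confidence
    return z**2 * p * (1 - p) / moe**2

n = effective_sample_size(0.04)  # a +/-4 percentage-point margin
print(round(n))  # roughly 423
```

Actual poll samples are typically larger than this, precisely because weighting and imperfect response inflate the uncertainty beyond the textbook formula.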

One other crucial point is that every survey is nothing more than a snapshot of opinion at a point in time, rather than a definitive prediction, especially when taken more than a few days before the election. It is a fact of life that voters can and do change their minds.

In theory, therefore, surveys conducted the day before polling day should be the most accurate, but there is a host of other factors that can still derail a poll. Political affiliations are very personal, and voters don’t always give straight answers. Some are embarrassed about admitting they intend to support potentially unpopular candidates. In France, for example, so-called “shy voting” has led to under-reporting of the far-right Front National share by as much as seven percentage points on occasion.

There is also the problem of voters genuinely not knowing which way they will jump. Exit polling after one recent election in Germany found that 30% made up their mind in the final week of the campaign, 20% decided the day before the election and 10% only made up their mind once in the actual booth. Then, typically 10-20% of the electorate may fully intend to vote but end up not bothering at all.

Accounting for all of these factors is a major headache in designing credible surveys, while the changing nature of the social environment adds another layer of complexity. From the 1970s onwards, researchers were able to rely on national telephone networks and the automated process of ‘random digit dialling’ to deliver cost-effective, representative samples as the basis for polls, using George Gallup’s tried-and-tested template.

However, the advent of the internet and smartphones, combined with endless streams of unsolicited sales calls and ubiquitous phone scams, has led to a dramatic drop-off in survey response rates. In the last 25 years, Pew Research Center, one of the most respected global survey institutes, has seen response rates tumble from 36% to just 6%. This seriously calls into question the reliability of any poll derived from telephone research, especially one which utilises landlines.

The industry is acutely aware of the issues it faces and recognises urgent innovation is needed to reflect public opinion more closely. It is only to be expected that many professionals will turn to cyberspace in their search for the holy grail of the perfect opinion polling methodology, but whether one can be found is extremely doubtful, given all the vagaries of electorates.

Much conventional online research relies on samples drawn from large access panels of people who have signed up to participate in surveys regularly in return for a variety of rewards. Online polling has been trialled using these types of panels, but there are fundamental questions about whether such panels can reflect the breadth of the electorate, especially in countries where there are lots of older voters who are less likely to be connected. These panels may also under-represent the ethnic diversity of the electorate.

Other researchers have focussed on capturing social media buzz as the path to predicting election results, but this approach has turned out to be as hit-and-miss as The Literary Digest’s postcard mailouts a century ago.

The polling industry worldwide is certainly in a state of flux, as it searches for a solution to its methodological conundrum and the leading poll providers are working to ensure a smooth transition from the old to the new. Current polls are an amalgam of new and continuing methods, but with methodologies forced to change rapidly from one election to the next, results are likely to be less dependable than they used to be. Long-term, though, online polls have to be the way forward.

One thing industry professionals agree on is the need for a solid theoretical basis for any survey design, for rigorous testing and full validation. And one of the negative influences professional pollsters have to deal with is poor quality polls conducted outside the industry. Often these are run by bodies who lack the know-how to apply the correct rigour and controls, but just as often they are simply bogus, deliberately skewed in one particular direction as a form of promotion. The problem for the audience is how to distinguish between the legitimate and the phoney.

Polling in Taiwan
Polling in Taiwan began alongside democratisation in the late 1980s and today we are bombarded daily with a barrage of information supposedly reporting public opinion on every single aspect of the political landscape. Unfortunately, the polling industry has a poor reputation: it is heavily criticised for its overtly commercial nature and for the manipulation of polls during elections, and the public has serious reservations about the quality and credibility of polls in Taiwan.

There are reputable, independent polling organisations in Taiwan, among which National Chengchi University’s Election Study Center is perhaps the most highly regarded. However, this type of institution is rare, and it is also significant that leading international research firms, such as Kantar and Ipsos, who have extensive opinion polling operations in other parts of the world, do not involve themselves with public pre-election surveys here.

The vast majority of published polls are conducted or sponsored by different media. In the increasingly desperate battle for precious ratings, newspapers and television stations use public opinion polls to attract larger audiences, with little regard for the niceties of scientific research theory. Attention-grabbing statistics are more important than survey accuracy.

The Taipei Times recently detailed the example of several media channels reporting the results of a “poll” based on 38 pedestrians approached in the street during a 90-minute live YouTube broadcast. This was taken to represent voting preferences in the Taoyuan mayoral election, despite the fact that the margin of error on such a woefully inadequate sample would be +/-16 percentage points! Put another way, if the survey gave one candidate 35% of the vote, the true figure could lie anywhere from 19% to 51%!
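The arithmetic behind that +/-16-point figure is the standard sampling margin of error at 95% confidence, taking the worst case of a 50:50 split (a textbook sketch, not the methodology of any particular pollster):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case sampling margin of error at ~95% confidence."""
    # z = 1.96 is the normal critical value for 95% confidence
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(38)
print(f"+/-{moe * 100:.0f} percentage points")  # +/-16 for n = 38
```

Because the margin shrinks only with the square root of the sample size, cutting it from 16 points to a respectable 3 points would require not a few dozen respondents but roughly a thousand.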

Meanwhile, political parties and campaign managers have harnessed the power of polls to manipulate the voting public. Elections over the past 20 years have been dogged by misinformation and heavily doctored surveys. Throughout the campaign for the 2006 Taipei mayoral election, polls consistently showed Frank Hsieh 20-24 points below his final tally, while the other two candidates were much closer to their actual result, raising suspicions of a co-ordinated propaganda campaign against Frank Hsieh. Even if there were no sinister plot, a discrepancy of 20+ points at least suggests some extremely amateurish and sub-standard research.

It does little for the image of polling that party-owned pollsters such as the DPP Survey Center regularly publish their own election-related surveys. According to political scientist Christian Schafferer, associate professor at the Overseas Chinese University in Taichung, “most opinion polls are either fake or unprofessionally conducted. The media are biased and often misinterpret poll results or use unreliable data.”

The chart below shows some of the poll results for the 2020 presidential election, which give further reason to have doubts about polling in Taiwan. The chart plots the claimed support for President Tsai Ing-wen, as reported by two different pollsters: the Survey Center from the president’s own party, the DPP, and the pro-opposition China Times. The polls were not exactly simultaneous and the DPP published six waves to the China Times’ five, but they do cover roughly the same period of 2019. Across the eight months, the polls consistently reflect a large gap in the claimed level of support for President Tsai, with the opposition-leaning China Times grossly under-representing her vote. (Come the day, President Tsai secured 57% of the vote, well above even the DPP’s most optimistic outcome.)

[Chart: reported support for President Tsai Ing-wen during 2019, DPP Survey Center vs China Times polls]

As the Taipei Times article shows, it seems that many opinion surveys are completed using questionable survey methodologies, with pollsters repeating many of the same basic errors made in the early days in the US, even if they are using the internet instead of postcards – e.g. one-question polls on political issues and voting intentions, conducted by popular websites that collect the voluntary daily responses of users with registered accounts, but without any form of scientific sampling. Such approaches only serve to reinforce negative views of election polls and further entrench the widespread belief that these surveys are completely untrustworthy.

Regaining public confidence in polls demands a major overhaul of the polling infrastructure. Quality will only improve through the growth of politically independent polling organisations that are not beholden to the government, to political parties or to the media, but are funded by academic institutions or endowments to protect their impartiality and facilitate investment in innovation.


Photo by Element5 Digital on Unsplash
