Can I Trust Polls?
Answers to your questions about opinion polls from the world's leading polling experts and tips to help you judge their quality. Can you trust them, and how do they actually work?
Explaining the world of opinion polls
Thousands of polls are published in newspapers, reports, and other publications every day. They help us measure the feelings of our neighbours and the wider public sentiment. In that sense, they are snapshots taken at a specific moment in time. The purpose of polls, and their ability to forecast the future, is often misunderstood, especially during election times. Not all polls are created equal, but how can you tell the difference between them?
We've asked our Professional Standards Committee expert on polls to answer the questions often posed to ESOMAR by those curious to learn more about the polls they see in their newspapers and elsewhere.
About Kathy Frankovic
Kathleen Frankovic
ESOMAR Committee Member, Consultant at YouGov
Kathy Frankovic is the former Director of Surveys for CBS News and is currently an election and polling consultant for YouGov and other research organisations. She is one of the world's leading experts in public opinion polling.
She speaks and writes internationally about public opinion research, journalism and elections, as an invited speaker in places as diverse as Italy, Jordan, Hong Kong, Manila, Mexico, Lisbon, Chile and India.
After more than 30 years at CBS News, she retired from full-time work there in 2009, and has since consulted for CBS News, Harvard University, the Pew Research Center, Open Society Foundations and YouGov. She sits on the ESOMAR Professional Standards Committee and is an active member of both the World Association for Public Opinion Research and the American Association for Public Opinion Research.
The Basics
What are polls?
Polls are the most popular and best mechanisms for measuring public opinion. Good polls have two characteristics:
Respondents are chosen by the research organisation according to explicit criteria for representativeness, rather than by respondents deciding on their own to select themselves and participate; and
Questions are worded in a balanced way so as not to lead respondents to answer in a particular way.
Trustworthy polls follow the guidelines of organisations like ESOMAR, as set out in the ICC/ESOMAR Code, when it comes to conducting and reporting public polls.
What are key questions I should ask when evaluating a poll?
For every poll you see, it is important to ask for details about:
Who conducted it?
Who paid for it?
What questions were asked?
When was it carried out, and, especially, how was it conducted?
How are polls conducted?
Most polls are done by telephone or online, though some are still conducted face-to-face or by post.
Polls conducted by telephone select their samples using information about the distribution of telephone numbers, in order to draw numbers that are representative of the whole telephone population. That can be achieved through random sampling.
Most online polls are drawn from panels of individuals who have agreed to be contacted by polling organisations. Companies use different procedures, but the goal is to create a database and poll samples that represent the country.
Are they done differently across the world?
Most polls in Western countries are done by telephone or online.
In most European countries, polls conducted by telephone provide excellent coverage of the population, when both landline and (where legal) mobile phones are included. This often exceeds the coverage provided by polls conducted online. Internet access, while higher in Europe than in many other parts of the world, is only about 90%. Unlike for phone polls, there is no complete set of possible internet addresses from which to sample. People must opt-in to an internet polling panel. This raises questions about whether some people are being systematically excluded.
In some parts of the world polls are still conducted face-to-face and by post. In many developing democracies, where there is limited landline phone availability and no easy access to sampling mobile phones, the only way to conduct opinion polls that represent the population is through in-person interviewing: choosing locations based on population and then selecting individuals in randomly-selected households in those locations. In these cases, there may be limits to what the results represent: if the interviews were conducted only in urban areas, for example, they may be a good reflection of opinion in the big cities, but rural citizens are excluded, and therefore not represented.
Don't polls get the results wrong a lot?
How accurate are election polls?
“The polls got it wrong!” This headline might lead people to think that opinion polls are not accurate, but is this true? In a paper summarising an analysis of a database compiled by Kantar of 31,310 polls from 473 elections and voting events across 40 countries from 1936 to 2017, Jon Puleston concluded that, when examined at a global level, polls are generally very accurate.
The average error of polls conducted within seven days before an election was shown to be +/-2.5%. (Are we getting worse at political polling, Jon Puleston, ESOMAR, September 2017).
The analysis also showed that the accuracy of polls steadily increases the closer you get to the election. Polls conducted earlier in a campaign have an average error roughly twice that of polls conducted in the last week before the election.
This is partly because many voters make up their minds over the course of an election campaign: opinions can change in response to campaign activities and external events (strikes, scandals, etc.), and results also depend on the number of undecided voters and even how many people actually turn out to vote.
A follow-up study in 2023 looking at 29 elections and 1,400 individual polls conducted between 2017 and 2023 shows very little change in the overall findings compared to what we learnt from the original analysis.
Most polls conducted within a week of an election are remarkably accurate: on average they vary +/-2.5% from the election result, 80% are accurate to within +/-3%, and 98% correctly predict the outcome. The further out from an election a poll is conducted, the bigger the variance and the more misread outcomes, in a roughly linear relationship.
So why do election polls get it wrong sometimes?
There is so much coverage of election polls that when one is wrong, those mistakes dominate the news and discussions about polling. There are several reasons, besides sampling error, why a poll can miss. They may include:
not adequately determining who will vote (in some places less than half of the age-eligible population votes);
not having the correct distribution of the voting population;
ending the interviewing too early, missing the impact of last-minute events;
asking questions about voting intentions in ways that produce answers that misrepresent people’s true voting intentions. This may include people claiming they are likely to vote when they will not, or people misrepresenting their voting intentions because of embarrassment about their choice.
Question order can also have this effect. For example, asking questions about the problems an incumbent government faces before asking about vote intention can depress the expressed vote for the ruling party.
In multi-party contests, voters may vote strategically, and may use the results of pre-election polls to help them do this. So the poll result may be wrong as soon as it is published; the poll is not able to take account of last-minute events that can change vote intention.
This is more of a problem in countries that forbid the publication or conduct of polls in the immediate pre-election period. In some countries, there is a publishing embargo of two weeks or more. In those places, one can never know whether final pre-election polls are correct. But even in places like the United States, last minute decision-making and changing preferences can have a significant impact – as was demonstrated in the 2016 presidential election polls and results.
And every once in a while, according to the laws of statistics, drawing a sample of the population will give results that do NOT represent the total population well. But that does not mean you can always tell which poll that is.
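To see how this plays out, here is a minimal simulation sketch (the 52% support figure and the poll counts are invented for illustration, not taken from the analyses above). Even a perfectly drawn random sample misses the true figure by more than the conventional 95% margin of error about one time in twenty:

```python
import random

# Illustrative only: the 52% support figure is invented. Even a
# perfectly drawn random sample of 1,000 people misses the true
# figure by more than the 95% margin of error about 1 time in 20.
random.seed(42)

TRUE_SUPPORT = 0.52   # hypothetical share of the population
N = 1000              # respondents per poll
MOE = 1.96 * (TRUE_SUPPORT * (1 - TRUE_SUPPORT) / N) ** 0.5

POLLS = 5_000
misses = 0
for _ in range(POLLS):
    estimate = sum(random.random() < TRUE_SUPPORT for _ in range(N)) / N
    if abs(estimate - TRUE_SUPPORT) > MOE:
        misses += 1

print(f"95% margin of error: +/-{MOE:.1%}")
print(f"Share of polls outside it: {misses / POLLS:.1%}")  # roughly 5%
```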
So, what went wrong in the UK 2015 general election?
The review of the 2015 United Kingdom polls suggested that the polls missed big groups of voters: those over the age of 70, and those who, for various reasons, were harder to find and interview. In addition, the polling overestimated the voting participation of younger adults. All of those things resulted in an underestimate of the Conservative vote and helped produce the polling error. The individuals who conducted the British Polling Council’s review had no part in the polls reported in 2015. Many are academics, and some have even criticised polling practices. In the past, serious reviews of major errors like this one have resulted in improvements in how polls are conducted and greater future accuracy.
What about the 2016 U.S. election mistakes?
Many analysts in the United States assumed before the election something that is almost always true: that the popular-vote winner will capture enough votes in the Electoral College to win. Almost always true, but not that year. Hillary Clinton amassed nearly three million more votes nationwide than Donald Trump, two percentage points more than Trump’s national vote share, but Trump received 78,000 more votes than Clinton across three states (Michigan, Wisconsin and Pennsylvania), giving him the Electoral College majority and victory.
The final national election poll average gave Clinton a three-point lead. That is an overestimate of Clinton’s national margin, but a very small one, and in line with U.S. polling errors in recent elections.
But there were larger errors when it came to critical states, especially places like Wisconsin and Michigan, highlighting the differences between national 2016 voting preferences and the structure of the US Electoral College system. The American Association for Public Opinion Research (AAPOR) investigation found several explanations:
There is clear evidence of a late surge to Trump: 13% of voters nationwide made up their minds in the last week, and they gave an edge to Donald Trump;
In the critical states of Michigan, Pennsylvania and Wisconsin, those who decided in the last week voted for Trump by 11 points, 17 points and 29 points respectively, according to exit polls;
Polls may also have missed some voters, especially white blue-collar workers in the Midwest, by not weighting for education. National polls almost always use education level as a weighting variable, but several of the critical state polls did not.
Are enough people being asked?
Don't the errors point to a sampling issue?
Polls typically include about 1,000 adults. This does not mean that every poll includes 1,000 voters; that depends on the share of the adult population that votes. Sampling error increases as the number of interviews included in the reported percentages shrinks. So, yes, there could be too few interviews with the relevant respondents, that is, likely voters.
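For readers who want the arithmetic, the sketch below uses the standard margin-of-error formula for a simple random sample (a simplifying assumption; real polls use more complex designs) to show how the error grows as the number of relevant interviews shrinks:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of sampling error for a proportion p estimated from
    a simple random sample of n respondents (p = 0.5 is the worst case)."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 adults versus the smaller subsets of likely voters
# it might contain:
for n in (1000, 600, 400, 200):
    print(f"n = {n:>4}: +/-{margin_of_error(n):.1%}")
# n = 1000: +/-3.1%
# n =  600: +/-4.0%
# n =  400: +/-4.9%
# n =  200: +/-6.9%
```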
Why have them at all?
Why do we conduct polls?
Your opinion actually matters to a lot of people.
Many organisations want to learn what the public thinks. Those organisations include the government itself, political parties, companies, non-profit and charitable organisations, and the news media. In a democracy, knowing public opinion helps parties to choose an agenda to campaign on, and gives government information on whether their policies are approved by the electorate.
Journalists, too, care about opinion, and reporting on opinion gives the news media good stories – that is something especially true before an election. Think tanks, non-profit and charitable organisations also may conduct surveys about social problems. Regardless of who commissions the poll, all polls need to be done well, otherwise they will give a distorted view of opinion (for example, if news organizations only interview people who visit their website, they are missing a large component of the public).
Finally, the public itself should care about national opinion. Since this information is being used by decision-makers, well-done opinion polls that are made public give people access to the same type of information that governments and politicians use when making decisions that affect the public.
Well-done and properly reported-on opinion polls democratise information.
What do we learn from polls that we wouldn’t know otherwise?
Most governments and institutional bodies, even many that are not democratic, claim that having an appreciation of public opinion is their primary motivation behind policy making and implementation. Thus, the “voice of the people” becomes an important element in the creation and maintenance of societal and political structures. Polls provide a way for government to learn the public’s positions between elections.
More visibly, an ever-growing need for news content has pushed the media towards more frequent use of opinion polls in their news coverage. Polls give a news organization data indicating whether or not the public supports the government’s policies or leaders. Through the media, the public also learns whether it supports or opposes government policies, what problems people are most concerned about, whether the nation is in favour of international engagement, and much, much more. For this reason, many efforts have been made to ban opinion polling – particularly before election periods.
Does having lots of polls make up for poll errors?
Probably not. Especially not if all or most of the polls err in the same direction, which can happen if they all systematically underrepresent some group, or share some other flaw. And, of course, if opinion changes after the publication of the aggregated polls, all the polls will share the same error.
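A toy simulation makes the point (the support figure and the three-point shared bias are invented for illustration): averaging many polls shrinks random error but leaves any shared systematic bias untouched.

```python
import random

# Illustrative only: true support and the shared 3-point bias are
# invented. Averaging many polls cancels random sampling error but
# NOT a systematic error that all the polls share.
random.seed(1)

TRUE_SUPPORT = 0.48
SHARED_BIAS = 0.03            # every poll over-reads support by 3 points
N, POLLS = 1000, 20

results = [sum(random.random() < TRUE_SUPPORT + SHARED_BIAS
               for _ in range(N)) / N
           for _ in range(POLLS)]
average = sum(results) / POLLS

print(f"True support: {TRUE_SUPPORT:.1%}")
print(f"Poll average: {average:.1%}")  # near 51%: precise, but still biased
```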
How do I make out the good from the bad?
How can I tell if a poll is any good?
Don't get duped by poor polls.
A poll result is like any other piece of information. Good journalists check their sources before publishing; poll results need to be checked as well.
Here are questions that people should ask about polls in order to evaluate them. Any good pollster should be comfortable answering them, and should provide those answers when a poll is released: transparency is one of the requirements of the ICC/ESOMAR Code, and of the standards set by other research associations. The answers should not be hard to find; if they are, this may signal a problem with how the poll was conducted! In fact, the ICC/ESOMAR Code requires that this information be made available and easily accessible.
We recommend you ask:
• Who conducted the poll and who sponsored the poll?
• Was it done by a known company?
• Was it paid for by anyone, like a political party, who had an interest in the outcome?
That may not make a poll wrong, but it is good to be a little sceptical, as there can be an ulterior motive in a poll release.
• Who were the respondents? Were they registered voters or all adults? Do they live in one region, etc.?
• How many people were interviewed?
• When were they interviewed?
• Has anything happened since the survey dates that could change opinion?
• How were the respondents interviewed – in person, by phone, online?
All of these ways of interviewing have advantages and disadvantages, but some are not feasible for certain kinds of polling or in certain places.
• What were the questions?
Are they balanced and neutral? That is especially important in key questions relating to elections and government policies. Question wording must not “lead” or influence the responses.
• Is the sample representative of the population it says it describes?
The sample must represent the group of interest – e.g., the entire population, voters, a certain subset of voters, etc.
See what the pollster is claiming for the sample. Whatever the claim, there are possible issues depending on how the poll was conducted.
What does representative sample mean?
A representative sample should be an unbiased reflection of what the population of interest is like: no group is excluded from the sample, and every important characteristic is reflected in the final result.
In-person and telephone interviewing can and should be based on some kind of probability sampling:
in the case of phone polls, all telephones associated with households and individuals should have a chance of being selected, including mobile phones in places where a significant part of the population is reachable only through them. Face-to-face interviewing can also cover the entire population, with probability selection of geographic areas, and then selection of households and individuals in those places. Results from these probability samples, if done properly, can be extended to the entire population – and a margin of sampling error can be calculated.
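As an illustration of the two-stage selection just described, here is a simplified sketch; the area names and population sizes are invented, and real designs are considerably more elaborate:

```python
import random

# Simplified two-stage probability sample: areas are drawn with
# probability proportional to population, then households are drawn
# at random within each chosen area. Area names and sizes are
# invented; real designs are far more elaborate.
random.seed(7)

areas = {"North": 50_000, "Centre": 120_000, "South": 30_000}

# Stage 1: choose areas in proportion to population (with replacement).
chosen = random.choices(list(areas), weights=list(areas.values()), k=3)

# Stage 2: within each chosen area, select households at random.
sample = []
for area in chosen:
    households = range(areas[area])  # stand-in for a real household list
    sample += [(area, h) for h in random.sample(households, k=5)]

print(sample[:5])
```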
Online polling raises different questions. Not only are there people without internet access who are left out in online polls, but most online polls are not probability samples. They are conducted among people who have voluntarily joined panels (“opted in”).
Those who conduct these polls try to create “representative samples” reflecting the overall population, which can involve a very complex weighting process to take account of internet penetration. (There are some internet panels that have been selected using a probability design, with internet access given to those who do not have it).
Think of the question of representing the public the same way a spoonful of soup represents the entire pot.
Understanding sampling is like tasting soup.
Do you drink the whole pot of soup to determine whether it’s hot? Not usually.
Generally, you take a spoonful and “taste a sample” of the soup. From that “sample” – the one spoonful you taste – you determine whether the whole pot is at the right temperature to serve. But you want to make sure that you do a good job choosing a random spoonful. You generally stir the pot before dipping in the spoon, so that any given portion of the soup has an equal chance of ending up in your spoon. If you just skim a spoonful off the top, or from the very bottom, without stirring the soup, you might come to a very false impression of the temperature of the pot. You would serve cold or extremely hot soup to your guests simply because you “skewed” the sample from which you tasted.
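The same idea can be put in code. This toy simulation (all temperatures invented) compares a spoonful skimmed off the top with one taken after stirring, i.e. a random sample:

```python
import random

# Toy version of the soup analogy (all temperatures invented): the
# pot is cooler at the top than at the bottom. Skimming off the top
# gives a biased "taste"; stirring first, i.e. sampling at random,
# does not.
random.seed(3)

# 10,000 "drops" of soup whose temperature rises with depth.
pot = [60 + 40 * depth / 10_000 + random.gauss(0, 2)
       for depth in range(10_000)]

true_mean = sum(pot) / len(pot)
skimmed = sum(pot[:100]) / 100                # spoonful off the top
stirred = sum(random.sample(pot, 100)) / 100  # spoonful after stirring

print(f"Whole pot:      {true_mean:.1f} degrees")
print(f"Skimmed sample: {skimmed:.1f} degrees (biased cool)")
print(f"Stirred sample: {stirred:.1f} degrees (close to the pot)")
```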
What's better? Random or representative?
Making sense of different types of sampling techniques.
The important quality of random sampling is that everyone has some chance of being selected at the start. True random sampling is becoming more difficult to achieve in practice for many reasons, but pollsters can overcome this and make the achieved sample of interviews resemble the population demographically and behaviourally. Done well, a random sample is a representative sample.
A representative sample, however, may be achieved in other ways. One is the use of quotas. Quota sampling involves setting controls and matching the sample to those characteristics. Some polls use very complex quotas with many variables, including demographic characteristics like age, gender and geographic location, and sometimes political variables as well.
Some online polls employ quota sampling as a way to select representative samples from a database of people who have already provided information about themselves. Quota sampling may also be used, in otherwise randomly sampled telephone surveys, to select the person to be interviewed within the household, in order to speed up the fieldwork process. Quota sampling is also common in face-to-face surveys.
Quota samples can be representative, in that they reflect the entire population, but they are not random, which requires that every member of a population has a chance of being included.
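To make the mechanics concrete, here is a simplified quota-sampling sketch; the quota cells and the stream of volunteers are invented, and real quota schemes use many more variables:

```python
import random

# Simplified quota sampling (the quota cells and the stream of
# volunteers are invented; real schemes use many more variables):
# opt-in volunteers are accepted only while their demographic cell
# still has room, so the final sample matches the quota targets.
random.seed(11)

quotas = {("18-34", "F"): 15, ("18-34", "M"): 15,
          ("35-54", "F"): 18, ("35-54", "M"): 17,
          ("55+",   "F"): 18, ("55+",   "M"): 17}  # targets sum to 100

filled = {cell: 0 for cell in quotas}
sample = []

while len(sample) < 100:
    # A volunteer "arrives" from the opt-in panel (simulated here).
    cell = (random.choice(["18-34", "35-54", "55+"]),
            random.choice(["F", "M"]))
    if filled[cell] < quotas[cell]:  # room left in this quota cell?
        filled[cell] += 1
        sample.append(cell)
    # Otherwise the volunteer is screened out.

print(filled)  # matches the quota targets exactly
```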
• Was the sample weighted and how was it weighted?
Some kind of weighting is almost always necessary, even in polls based on probability sampling methods, because of difficulties in reaching everyone sampled. Pollsters should indicate how a sample was weighted, and which demographic or political characteristics were used to create the weights.
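As a concrete, if simplified, illustration of weighting (all shares and support figures below are invented), each respondent's group can be weighted by its population share divided by its sample share, echoing the education-weighting issue discussed above:

```python
# Minimal post-stratification sketch (all shares and support figures
# invented): each group's weight is its population share divided by
# its sample share, so weighted totals match known population figures.

population_share = {"degree": 0.35, "no_degree": 0.65}  # e.g. from a census
sample_share     = {"degree": 0.50, "no_degree": 0.50}  # too many graduates

weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)  # {'degree': 0.7, 'no_degree': 1.3}

# Hypothetical candidate support by group, unweighted vs. weighted:
support = {"degree": 0.58, "no_degree": 0.44}
unweighted = sum(sample_share[g] * support[g] for g in support)
weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)
print(f"Unweighted: {unweighted:.1%}  Weighted: {weighted:.1%}")
# Unweighted: 51.0%  Weighted: 48.9%
```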
Should I care about response rates?
Is declining participation an issue for polls?
In many telephone polls, especially in Western Europe and the United States, fewer than 10% of the people behind the telephone numbers in an original sample complete an interview. That percentage has been dropping for decades, and raises questions about the representativeness of polls.
However, a low response rate does not make a poll automatically wrong. National general election polls in places like the United States have improved over time. And even in 2016, the national polls were off by only a point or two in reflecting the popular vote.
Pollsters have learned how to control for the low response rate by ensuring that the final sample is a good demographic representation of the country. So you may want to know the response rate, but it is by no means the sole predictor of polling accuracy.
Should there be more checks and controls?
Shouldn’t we ban pre-election polling?
Don't polls inevitably bias the results?
There is NO credible evidence that pre-election polls create a bandwagon or underdog effect, at least not in two-party or two-candidate contests. However, polls can be used strategically by voters in multi-candidate competitions, as voters weigh the possibilities of choosing among many candidates and determine how best to use their vote. Sometimes voters might choose to vote for their second choice in order to prevent their least-favoured candidate from winning, but it is always the choice of the voter to act upon the information they have, whether that includes polls, the opinions of their peers, or other sources.
Banning the publication or the conduct of pre-election polls would not stop polls. Candidates and political parties would continue to conduct them, and would very likely release information selectively for their own benefit. Or they might even release “fake polls.” The public would lose the protection of well-conducted and well-reported polls from independent organisations abiding by internationally recognised codes of conduct like the ICC/ESOMAR International Code, and there would be little time to correct fake polls.
How independent are polling companies anyway?
Surely the reason the 2015 U.K. polls were all wrong is that they colluded with each other – or reported results that would match everyone else's.
There is little reason for pollsters to “collude,” or “herd,” as making your poll match everyone else's is universally frowned upon. Pollsters are in competition with each other, and each hopes to have the most accurate pre-election polls. In fact, many companies lose money on pre-election polling but still conduct it as a democratic service and to show the value of their methods. George Gallup did this in 1936, and pollsters continue to do it. Regardless, every responsible pollster must deal with the same problems.
Some companies work with political parties, providing parties with strategic information about public desires. But any work they produce for public consumption must be judged by the same standards as the work of non-partisan pollsters. Organisations like ESOMAR exist to provide guidelines for the conduct and the publication of public polls. ESOMAR has done so since 1948.
Should all polls be vetted and verified by the government?
This would NOT be a useful “solution.” Governments themselves are not always independent of partisanship and bias, particularly if the polls are asking the public to evaluate that government.
But there are national and international organizations that set standards for how polls should be conducted. ESOMAR, for example, provides a mechanism for public complaint when pollsters violate its guidelines.
The World Association for Public Opinion Research (WAPOR) is also concerned about polling standards. There are also national associations working with pollsters throughout the world that can be found on ESOMAR’s national associations listing.
After the polling “errors” in 2015 and 2016, shouldn’t we just give up on polls?
No. Poll “misses” can provide learning experiences for pollsters – and also reminders for the press and public about the limitations of polls. For example, after the 2015 British election, a panel of experts noted the sampling weaknesses in the polling: too few voters over the age of 75, samples of younger adults that did not reflect who would actually turn out to vote, and the overall problem of getting the balance correct. Pollsters will attempt to correct for these problems in the future.
If anything, recent issues with polling point to a continuing need for training and knowledge about polls – not just for those who conduct polls, but also for those who report on them. ESOMAR, WAPOR and AAPOR, together with the Poynter Institute, created a free online international course for journalists about opinion polls, which you may find helpful. It can be accessed at www.poynter.org/newsu/
Grab your copy of the questions and answers
We have created a handy document for you featuring the questions and answers from this world-renowned expert on polling. Use it to better evaluate the polls you encounter.