4.6 Evaluating Public Opinion Data

Leading up to the 2016 presidential election, just about every poll showed Hillary Clinton ahead of Donald Trump. But as the polls closed and counties began reporting their ballot counts, it became clear that the polls were wrong. The polling conducted that year was scientific and followed the principles of the last section, but it was wrong nonetheless. This section looks at the types of bias that can make their way into polling and asks the question: How do we determine the quality and credibility of claims people make that are based on public opinion data (polls)?

What you need to learn

How do we determine the quality and credibility of claims people make that are based on public opinion data (polls)?

Polls' influence on policy & elections

the bandwagon effect

the Bradley effect

Bias in polling

social desirability bias

non-response bias

2016 General Election

What did the polls project, and why were they wrong?

Evaluating Public Opinion Data

What the public thinks and how that thinking is conveyed to government officials are factors in shaping public policies. Professionals try to measure public opinion for a variety of reasons, using methods that make the results as accurate as possible. Analysts and citizens alike should consider the legitimacy of a poll as much as its general findings, because if its method is faulty, its findings will be as well.

Claims, Credibility, and Public Opinion Data

As participants in democracy either at or approaching voting age, you will be surrounded by public opinion polls and claims based on them. Knowing how to evaluate the quality and credibility of those claims will help you make informed decisions.

Public Opinion and Political Influence

Polls lend themselves to “horse race” news coverage in which elections are reported as if the most important aspect is which candidate is in the lead. Critics of “horse race” journalism argue voters need more substance, such as how a candidate views major issues that affect social policy or government spending. This kind of media coverage can translate into significant political influence as well. National polling influences whose voice will be heard at a televised debate and whose will be silenced. For example, early in the Republican primary season in 2016, the first debate among the party’s candidates was being planned with 17 candidates vying for the nomination. How could a reasonable debate be carried out with so many people on stage? The host of the debate, Fox News, decided to limit the number of participants to 10, choosing from the 17 candidates those who registered in the top spots in an average of five national polls as the debate grew near. If anyone in the top ten failed to earn at least a 5 percent ranking in the polls, that person would be eliminated from the debate. In such debates, candidates with higher poll numbers are stationed toward the middle of the stage, allowing them to appear on the screen more frequently and say more.
National polling also exerts influence on elections through the bandwagon effect: a shift of support to a candidate or position holding the lead in public opinion polls and therefore believed to be endorsed by many people. The more popular a candidate or position, the more likely increasing numbers of people will “hop on the bandwagon” and add their support. People like to back a winning candidate. For this reason, most media outlets do not report the findings from their statewide Election Day exit polls until polls have closed in that state. If people who have not yet voted learn that Candidate A is way ahead in votes, they may not bother going to the polls, because they support either Candidate A (that candidate will win anyway) or a rival who is behind (that candidate has no chance of winning).

The bandwagon effect is also partly responsible for the direct link between a candidate's rank in national polls and the ability to raise campaign funds. The higher the national ratings, the more campaign contributions a candidate can elicit. The larger a candidate's war chest (the funds used to pay for a campaign), the more ads a candidate can buy and the larger the staff a candidate can maintain. Both greatly influence the outcome of an election.

Influence on Policy Debate

Scientific polling also exerts an influence on government policy and decision-making, although its effect on policy is less clear than its effect on elections. The three branches of the government tend to respond to public opinion polling in somewhat different ways, if at all.

The legislative branch is sometimes responsive to public opinion polls, especially the House of Representatives where lawmakers face reelection every two years. Many try to represent their constituencies and to keep them satisfied with their performance to encourage fundraising and subsequent votes, so knowing constituent views pays off. Senators, with longer terms, do not seem
as sensitive to pressure from public opinion.

The executive branch has sometimes been influenced by public opinion and at other times has tried to use the power of the “bully pulpit” to shift public opinion. (See Topic 2.7.) A president usually enjoys high approval ratings in the first year of office and tries to use that popularity as a “mandate” to advance his or her agenda as quickly as possible.

The judicial branch may be influenced by the general mood of the nation. Different studies have drawn varying conclusions about how and why, but many have concluded that when the nation's opinions shift in a liberal direction, the Court hands down more liberal rulings. This was apparent in the growing liberal attitudes of the 1960s and the often-liberal decisions handed down by the Warren Court. (See Topic 2.10.) Conversely, when the nation moved toward conservative ideology near the end of the 20th century, the Rehnquist Court's rulings often mirrored those beliefs. However, federal judges are appointed for life and are not at the mercy of the ballot box, keeping the judicial branch somewhat removed from the sway of public opinion.


Reliability and Veracity of Public Opinion Data

One way to gauge the accuracy of a pre-election poll is to measure "candidate error," the difference in percentage points between a poll's estimate of a candidate's share of the vote and the share that candidate actually receives in the election. Candidate error has gradually declined as polling techniques have become more sophisticated. In recent years, however, a science and practice that had been steadily improving, apart from the occasional setback, has produced some less-than-accurate predictions.
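To make that calculation concrete, here is a minimal sketch, using hypothetical numbers rather than figures from any actual survey, of how candidate error could be computed:

```python
# Hypothetical example of computing "candidate error" for one candidate.
# Candidate error = the difference, in percentage points, between a poll's
# estimate of a candidate's vote share and the share actually won.

poll_estimate = 48.0   # hypothetical pre-election poll: 48 percent support
actual_share = 51.0    # hypothetical election result: 51 percent of the vote

candidate_error = abs(actual_share - poll_estimate)
print(f"Candidate error: {candidate_error:.1f} percentage points")  # prints 3.0
```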

For example, Gallup predicted Mitt Romney as the winner of the 2012 presidential election with 50 percent of the vote and President Obama at 49 percent. In reality, Obama won nationally by nearly four points. This failure led to Gallup’s eventual decision to no longer predict presidential election outcomes through the so-called horse-race polls, but to stick instead to its vast polling of issues and views in other areas of public policy. Gallup wasn’t the only firm that had an erroneous prediction outside the margin of error in 2012.

In the waning days of the 2016 presidential election, national polls projected that Hillary Clinton would defeat Donald Trump. Election forecasters, those who aggregate polls and other data to make bold predictions, put Clinton's chances of winning at 70 to 99 percent. The final round of polling by most major firms had Clinton winning the national vote by anywhere from 1 to 7 percentage points. However, the election was ultimately decided by 51 separate contests (counting the electoral votes from Washington, DC). On the day before the election, 26 states had polling results with Trump ahead. His strongest support was in Oklahoma and West Virginia, where 60 percent of respondents claimed a vote for Trump. Twenty-three states had Clinton ahead; Maryland and Hawaii showed her strongest support, with 63 percent and 58 percent respectively. Once the vote was counted, Clinton won the popular vote by 2 percentage points but lost the Electoral College.

Several factors may explain why polls can be inaccurate and unreliable. One relates to the psychology of the respondents. Another relates to undecided voters and when they finally make up their minds.
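As a rough illustration of why the national numbers alone did not look badly wrong, the sketch below averages a set of hypothetical final poll margins (chosen to fall within the 1-to-7-point range described above) and compares that average with the roughly 2-point national margin Clinton actually won. The remaining state-by-state errors, not the national average, are what decided the Electoral College.

```python
# Hypothetical final national poll margins (Clinton minus Trump, in points),
# chosen to fall within the 1-to-7-point range described in the text.
final_poll_margins = [1.0, 3.0, 4.0, 5.0, 7.0]

poll_average = sum(final_poll_margins) / len(final_poll_margins)
actual_margin = 2.0  # Clinton won the national popular vote by about 2 points

national_error = poll_average - actual_margin
print(f"Average of final polls: Clinton +{poll_average:.1f}")
print(f"Actual national result: Clinton +{actual_margin:.1f}")
print(f"Overstatement of Clinton's margin: {national_error:.1f} points")
```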

Social-Desirability Bias

The psychology behind the errors in recent polls is at least in part explained by social desirability bias, the tendency for respondents and declared voters to tell pollsters what they think the pollsters want to hear. Social desirability bias affects predictions of voter turnout. Respondents may give the interviewer the impression that they will indeed vote, because they do not want to be seen as shirking a responsibility, but on Election Day many of them do not. In one recent estimate, when asked their likelihood of voting on a scale of 1 to 9, U.S. citizens tended to say 8 or 9, yet only about 60 percent of eligible voters cast ballots.

Social desirability bias can fool pollsters on matters beyond inflated turnout. Voters do not want to be perceived negatively, so they may give interviewers a socially acceptable response, or what they perceive as the acceptable response, and yet act or vote in a different way. This phenomenon was noticeable in the 1982 California governor's race. The election included a popular candidate, Los Angeles Mayor Tom Bradley, who would have been the state's first African American governor. Bradley led by a clear margin in the polls throughout the campaign but lost on Election Day. Most experts attributed the discrepancy to interviewees falsely claiming they supported Bradley, only to vote for a white candidate. These poll participants did not want to appear bigoted or against the African American candidate. In what has become known as the Bradley effect, more recent African American candidates have also underperformed against consistently inflated poll predictions.

Pundits encouraged similar speculation in 2017 as public opinion polls shifted in the special U.S. Senate election in Alabama. In some polls, Republican candidate Roy Moore, the favorite for weeks, was suddenly losing to Democrat Doug Jones after Moore was alleged to have sexually assaulted or harassed several women when they were teenagers. Skeptics of the new polls pointed out that voters might not willingly admit on the phone that they were going to vote for the accused candidate. In fact, the well-known election analyst Nate Silver pointed out that in polls using robocalls, or automated pre-recorded polls, Moore was ahead, while in polls using live interviews, Jones was ahead. Jones won in a close contest.

Undecideds Breaking Late

According to exit polling and research after the election, a likely explanation for Trump's surprise win was that a larger than usual share of undecided voters "broke," or made their final decision late, for Trump. Nate Cohn of The New York Times found that likely voters who said they were voting for a third-party candidate mostly did so, but 26 percent of those voters turned to Trump and only 11 percent switched to Clinton. Pollsters also theorize that a disproportionate number of so-called "shy Trump voters" simply declined to participate in polls at all. Perhaps the same anti-establishment, anti-media attitude that drew these voters to the outsider candidate also turned them away from pollsters, a phenomenon known as non-response bias.

Opinions in Social Media

The willingness of people to take part in polls is declining. About 37 percent of randomly called citizens would participate in a telephone poll in 1997. Today, pollsters get about a 10 percent response rate with live callers and about 1 percent participation with robocalls. However, as Kristen Soltis Anderson, author of The Selfie Vote, points out, “The good news is, at the same time people are less likely to pick up the phone and tell you what they think, we are more able to capture the opinions and behaviors that people give off passively.”

Pollsters can take the public's pulse from platforms widely used by a large swath of the general public. Examining what is said on social media and typed into Google searches can tell us a lot about public opinion. Though blogs and the Twitter-verse constitute a massive sample, the people active on social media may have very different views from those who are not, so the sample is not representative. A 2015 study found that people who discuss politics on Twitter tend to be overwhelmingly male, urban, and extreme in their ideological views. Another problem that makes this endeavor less than reliable is that researchers use computer programs to gauge the Internet's dialogue, and these programs cannot easily discern sarcasm and idiosyncratic language. And overly vocal people can post repeatedly and be tabulated multiple times, dominating the conversation disproportionately.

Biased Pollsters and Data vs. Fact

Reputable pollsters seek ways to avoid bias in sampling techniques and in the wording of their questions. However, many polls are funded by political parties and special interest groups that want the poll results to tip a certain way. Interest groups will use those results to move their agendas forward, claiming that the data generated by their polls represent fact. “The numbers don’t lie,” they might say. Parties may use such information to convince the public that their candidate is popular and doing well among all voters or various blocs of voters. Unless you know about the organization doing the polling, the methods it used, the wording of the questions, and the context of the poll, you will not be able to evaluate a poll’s veracity, or truthfulness.

You have already read about how push polls (see Topic 4.5) slant their questions to produce certain outcomes. Political Action Committees (PACs), special interest groups, and partisan organizations all have a vested interest in getting a response from a poll that supports their cause. To help journalists evaluate the reliability and veracity of polls, the National Council on Public Polls (NCPP) provides 20 questions journalists should ask and answer before reporting on a poll. You can find that list on the NCPP website. The checklist below provides some of the key questions to ask about any poll.