4.5 Measuring Public Opinion

What you need to learn

Understand the ways we measure the political opinions and preferences of the public.

Scientific Polling

sampling techniques

sampling error

question wording

mass survey

focus group

Types of Polls

opinion poll

benchmark poll

tracking poll

entrance poll

exit poll

Measuring Public Opinion

Mining the views of Americans has become a keen interest of political scientists and a major industry in this age of data. Candidates running for office want to know their chances of winning and which groups support them. Once elected, members of Congress want to know how their constituents regard proposed bills and how they view different types of government spending. These elected officials and their staff members monitor public opinion by reading constituents’ letters and emails, holding town hall meetings, and reviewing surveys that are conducted in their states or districts. News organizations rely on, and even sponsor, polls to see where the public stands on important issues and political candidates.

Measures

Done scientifically, polling is the most reliable way to assess public opinion. It entails posing well-developed, objective questions to a small, random group of people to find out what a larger group thinks. Public opinion polling now follows a sophisticated methodology.

Scientific polling began in the mid-1930s, and since the Gallup Poll correctly predicted the re-election of President Franklin Delano Roosevelt in 1936, measuring Americans’ views has become increasingly sophisticated and popular. Many universities have established polling centers, and major television networks and large newspapers have created their own polling departments.

Types of Polls

Pollsters use different kinds of public opinion polls to gauge a cross-section of the population’s attitudes on issues or support for candidates in an election. Benchmark polls are often the first type a political campaign uses, frequently before a potential candidate has even declared his or her intentions. They measure support for a candidate and gather information about the issues people care about. Tracking polls measure how prospective voters feel about an issue and how they may vote on election day. Conducted repeatedly over the course of an election, they allow a campaign to “track” issues and how a candidate is faring with voters. This kind of information helps candidates make decisions to shape the campaign, particularly in the final weeks and days before the election.

Entrance polls question voters at polling places on election day as they arrive; exit polls question them as they leave. Polling services and the news media use these polls in national and statewide elections to help predict the outcome after the polls close, to gain insight into the thoughts and behaviors of voters, and to identify and analyze how different demographic groups actually voted.

Presidential Approval

Polls regularly ask about presidential approval. Approval ratings are gauged by asking whether the respondent approves, yes or no, of the president’s job performance. Presidents usually begin their term with fairly high approval as the people, Congress, and the news media get to know them during the so-called “honeymoon period.” Presidential “honeymoons” generally last a few months and are characterized by some degree of bipartisanship in Congress and generally positive coverage by the news media. Approval ratings usually reflect this time of good feeling and, predictably, begin to decline as a president starts staking out positions on issues.

According to Gallup, presidents after Harry Truman average 45 to 49 percent approval over their term of office. Some of the highest presidential approval ratings have come when the nation prospered economically or when the country found itself in an international crisis and rallied around the president. The two highest recorded presidential approval ratings came after al-Qaeda attacked the United States in September 2001, when President George W. Bush scored 90 percent approval, and when his father, President George H. W. Bush, received 89 percent approval after leading a military coalition to oust Iraqi dictator Saddam Hussein from Kuwait in 1991.

However, approval ratings change, sometimes dramatically. President George H. W. Bush’s approval dipped to 29 percent amid high unemployment and racial discord just one year after his record high. The approval of his son, George W. Bush, also dropped below 30 percent by the end of his second term, following what many believed was an ineffective response to Hurricane Katrina and skyrocketing gas prices.

Of the 12 chief executives following President Truman, six averaged an approval rating of about 47 percent and six averaged about 60 percent. According to RealClearPolitics, President Donald Trump’s job approval average was 39 percent at the end of his first full year, rising to his highest rating of 49 percent in January 2020.

Poll respondents are also often asked: “Is the nation on the right track or wrong track?” That question is commonly asked to determine Americans’ satisfaction with the perceived success of the nation. A high “right-track” response generally means incumbent presidents will fare well in their re-election campaigns, while a high “wrong-track” response makes incumbents uncomfortable at election time. The generic party ballot simply asks respondents whether they will vote for Republicans or Democrats in an upcoming election, without mentioning candidates’ names. Analyzing responses to these questions together serves as a relative measure of citizen support for each party.

Focus Groups

A focus group is a small group of citizens, usually 10 to 40 people, gathered to hold conversations about issues or candidates. Though less scientific than many types of polls, focus groups allow deeper insight into a topic. Pollsters can ask follow-up questions and examine body language and intensity that would be missed in a simple automated questionnaire over the phone. For example, Republican presidential candidate Mitt Romney began wearing jeans more often when campaigning in the 2012 election after focus groups responded more positively to him in jeans than in formal clothes.

Methodology

Reliable pollsters take great pains to ensure their measurements are legitimate. They do so by constructing questionnaires with properly worded and appropriately ordered questions and by selecting a representative sample from which to gather data, analyze it, and draw appropriate conclusions.

Questions

Pollsters phrase survey questions to avoid skewing the results. The wording should be objective and not emotionally charged. Poll results on such emotional issues as abortion, same-sex marriage, and affirmative action can be distorted depending on the wording. On foreign aid, imagine how the following two questions would yield noticeably different results: “Should the United States provide foreign aid to other nations?” and “Should the U.S. give foreign aid to other nations if such a decision would lead to higher taxes in this country?” Question order can also affect the results. In a 2002 poll on President George W. Bush’s performance, for example, researchers asked the same questions but in a different sequence to two different groups. When people were asked first about the performance of the president and then about the direction of the country, the president fared better. When respondents were asked about the state of the country first, which many said was bad, the president’s approval dropped 6 percent.
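As a rough illustration of that split-sample design, the hypothetical Python sketch below randomly assigns respondents to one of the two question orders, so that any difference between the groups can be attributed to order rather than to who was asked. The question labels and group names are illustrative, not from the 2002 study.

```python
import random

# Two orderings of the same questions, as in the 2002 experiment:
# one group hears the presidential-approval question first, the
# other hears the direction-of-the-country question first.
ORDER_A = ["presidential approval", "direction of the country"]
ORDER_B = ["direction of the country", "presidential approval"]

def split_sample(respondents, seed=None):
    """Randomly assign respondents to one of two question orders."""
    rng = random.Random(seed)
    pool = list(respondents)
    rng.shuffle(pool)          # random assignment removes selection bias
    half = len(pool) // 2
    return {"order_a": pool[:half], "order_b": pool[half:]}

groups = split_sample(range(1000), seed=42)
print(len(groups["order_a"]), len(groups["order_b"]))  # 500 500
```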

How a question is framed also affects responses. Framing a question means posing it in a way that emphasizes a certain perspective. For example, researchers found that respondents had widely varying views on whether abortion should be legal depending on how the question was framed.

Only 28 percent of Americans said abortion should be legal under all circumstances, while many more supported it when the question was framed with a certain condition emphasized.

Sampling Techniques

Which people are polled is just as important as the nature and wording of the questions. Proper sampling techniques ensure an accurate poll with a random and fair representation of the population. The pollster takes a representative sample, a group of people meant to represent the larger group in question, known as the universe. A nationally representative sample often has about 1,500 respondents, while a sample to determine public opinion within a single state would be much smaller. Pollsters must obtain a random sample; that is, every single member of the universe must have an equal chance of selection into the sample.
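As a minimal sketch of that equal-chance requirement, assuming a hypothetical list standing in for the universe, Python’s built-in random.sample draws a simple random sample:

```python
import random

# Hypothetical universe: every member of the larger group in question.
universe = [f"voter_{i}" for i in range(1_000_000)]

# random.sample gives every member of the universe an equal chance
# of selection, the defining property of a random sample.
sample = random.sample(universe, k=1500)
print(len(sample))  # 1500, the typical size of a national sample
```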

A reporter or marketer standing on a street corner asking questions of passersby may gather some indication of public opinion, but this method is not random, because the person collecting the data may bias the sample by approaching only those people who look “safe” or who otherwise seem more willing to participate in the study. Since the 1980s, pollsters have used telephones as the primary contact for surveys, though the method has drawbacks. For example, roughly 30 percent of the populace has an unlisted number, either by choice or because of mobility. To make telephone polling more reliable and efficient, pollsters use random-digit dialing: a computer randomly calls possible numbers in a given area until enough people respond to establish a representative sample.
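The sketch below is a simplified, hypothetical model of random-digit dialing: the answer rate is invented, and real pollsters also screen out invalid exchanges and business numbers. It shows the core idea that randomly generated digits reach listed and unlisted numbers alike.

```python
import random

def random_digit_dial(area_code, needed, answer_rate=0.06):
    """Call randomly generated numbers until `needed` people respond.

    The answer rate is an illustrative guess, not a real figure.
    Because the digits are random, unlisted numbers are reached too.
    """
    respondents = []
    while len(respondents) < needed:
        number = f"({area_code}) {random.randint(0, 9_999_999):07d}"
        if random.random() < answer_rate:   # simulate someone answering
            respondents.append(number)
    return respondents

sample = random_digit_dial("202", needed=1500)
print(len(sample))  # 1500 completed contacts
```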

Though technology has advanced, reaching voters has become more challenging. Landline use is dropping. About 95 percent of American adults own a cell phone, and a majority of homes have wireless-only phone service. More than 70 percent of all adults aged 25 to 34 use cell phones only and do not have landlines. Pollsters are trying to combat this phenomenon in a few ways. One is mixing their broadly dialed, automated random phone surveys with more live human interviewers, since federal law prohibits pre-recorded interactive surveys to cell phones. The Pew Research Center requires that 75 percent of its samples be cell phone participants.

Once pollsters have enough respondents, they check whether the demographics of the sample reflect those of the universe. If disproportionately more women than men answer the phone and take the poll, the pollster will remove some female respondents from the sample to make it proportional. If a congressional district contains roughly 25 percent African Americans, the sample needs to mirror that. Manipulating the sample to compensate in this way is known as weighting, or stratification: making sure demographic groups are properly represented in a sample.
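As a minimal sketch of the weighting idea, using assumed numbers rather than real district data, each group’s weight is its population share divided by its sample share. (The text describes removing surplus respondents; weighting instead keeps everyone but adjusts each group’s influence.)

```python
# Assumed shares for one demographic trait in a hypothetical district.
population_share = {"African American": 0.25, "Other": 0.75}
sample_share     = {"African American": 0.15, "Other": 0.85}

# Weight = population share / sample share: underrepresented groups
# count more, overrepresented groups count less.
weights = {group: population_share[group] / sample_share[group]
           for group in population_share}
print(weights)  # African American ~1.67, Other ~0.88
```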

Sampling Error

Even the most cautious survey with appropriate sampling techniques cannot guarantee absolute precision. The only way to know what everyone thinks is to ask everyone and assure they are entirely honest, both of which are impossible. Every poll has a sampling error, also called a margin of error: the amount by which poll results can be expected to differ from the true views of the universe. The sample size and the margin of error have an inverse relationship; that is, as the sample gets larger, the margin of error decreases. One way to estimate this sampling error is to compare the results of two or more polls. For example, when the same basic poll was given to two similar samples, 55 percent of the first sample opposed a particular congressional bill, while 58 percent of the second sample opposed it, a sampling error of 3 percent. A margin of error of plus-or-minus 4 percent or less is usually considered satisfactory.
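The passage estimates sampling error by comparing two polls; statisticians also compute a margin of error directly from the sample size using the standard formula for a polled proportion, which is not given in the text. A minimal sketch:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a polled proportion at 95 percent confidence.

    n: sample size; p: observed proportion (0.5 is the worst case);
    z: critical value (1.96 corresponds to 95 percent confidence).
    """
    return z * math.sqrt(p * (1 - p) / n)

# A national sample of about 1,500 respondents (the size mentioned
# above) yields roughly plus-or-minus 2.5 points, comfortably under
# the 4 percent usually considered satisfactory.
print(f"{margin_of_error(1500):.1%}")  # ~2.5%
```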

Non-Attitudes

The simplest yet most perplexing problem in public opinion polling is the presence of non-attitudes. Many people do not have strong opinions on the issues of the day, or they are uninformed or simply concerned about their privacy and do not want to share their views. Just over half of eligible voters actually cast votes in presidential elections. Matters of extreme importance to journalists and policymakers may be unimportant to average citizens, so while poll results measure the views of average citizens on these matters, they don’t show the relative importance of the matters to citizens. In a similar way, matters important to citizens may not be of interest to journalists, so polls may not reflect what is really on the minds of voters.

Another phenomenon affecting poll results is the high frequency of uninformed citizens responding. Political scientist Herb Asher describes a poll asking about the repeal of the Public Affairs Act. In reality, no such act or repeal effort existed, but fully 43 percent of those questioned offered an opinion on the nonexistent law. Pollsters often ask screening questions to establish a respondent’s knowledge or to ensure they are registered voters, such as “Do you plan to vote in the November election?” Such a question, however, does not eliminate the problem entirely. In fact, more than 90 percent of people answering phone surveys claim they will vote, while far fewer do. Discerning pollsters may even ask if the respondent knows the day of the upcoming election as a way to increase the chances that the respondent is a bona fide voter.
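As a hypothetical sketch of that screening logic, with invented field names, respondents who fail the screen are dropped before their answers are counted:

```python
# Hypothetical respondent records; the field names are illustrative.
respondents = [
    {"id": 1, "plans_to_vote": True,  "knows_election_day": True},
    {"id": 2, "plans_to_vote": True,  "knows_election_day": False},
    {"id": 3, "plans_to_vote": False, "knows_election_day": False},
]

# Keep only likely voters: those who say they will vote AND can name
# the election day, the stricter screen the text describes.
likely_voters = [r for r in respondents
                 if r["plans_to_vote"] and r["knows_election_day"]]
print([r["id"] for r in likely_voters])  # [1]
```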

Human Bias

How the interviewer contacts and interacts with the respondent, and how the respondent perceives the interviewer, can also affect a poll. The difference between mailed questionnaires and telephone interviews is stark: people are more honest with the anonymity of a paper questionnaire than in a live telephone call. Some studies show that women and men answer differently to male and female callers. A woman’s right to choose an abortion was supported by 84 percent of women when interviewed by female callers, while only 64 percent gave a pro-choice response to a male caller. Race, or perceived race, can matter as well. Asher reports that African Americans are more critical of the political and criminal justice systems when speaking to black interviewers and more supportive when speaking to white interviewers. White respondents are less likely to reveal attitudes of racial hostility when interviewed by African Americans than by whites.

Still other problems exist because not everyone conducting a poll is an objective journalist or academic. Fundraising under the guise of polling has cheapened polling’s reputation. Political parties and candidates use phone and mail surveys to assess where their followers stand and then ask for a donation.

Also, push polling is a controversial and deceptive way to influence potential voters. Push polls are conducted by telephone, but rather than asking a series of neutral questions meant to determine public opinion on a candidate, the caller, or more commonly a tape-recorded voice, offers positive points about the candidate or negative points about the opponent. These attempts to “push” certain views onto people aren’t actually polls but a form of political campaigning.

Internet polling can be problematic because there is no way to ensure a random sample. Only those who are strongly motivated will participate when directed toward an Internet poll, and some online polls allow respondents to complete the questions as many times as they like. Internet users also tend to be younger, better educated, more affluent, whiter, and more suburban than those who are offline, so they do not represent a genuine cross section of society.