Monday, June 04, 2007

An inside look at the push-pull of polling

The earliest-starting presidential election cycle on record is also becoming the most heavily polled, with new national numbers and key state surveys coming out weekly or even more often. But getting to those numbers is more complex than it may seem from the outside, and polling often goes deeper than the simple horse race.

University of Iowa political science professor David Redlawsk researches how citizens process political information in order to make a voting decision and has conducted many polls, including a study of Iowa caucus goers conducted in late March and released in early April.

The Iowa caucuses are notoriously resistant to polling because screening for likely caucus goers is difficult. Redlawsk said in a recent interview that screening methods are a closely guarded trade secret: “Commercial polling firms do not want to give away their techniques for trying to figure out the likely caucus goers. Usually it involves a couple of screening questions to determine past behavior -- attendance at caucus -- and to provide a scale on which respondents can indicate their likely future behavior -- those who place themselves at the top of that scale are generally considered the 'likely' attendees. But it is a guess at best. Since some 40 percent of caucus goers may well be new to the process each time, it's pretty hard to figure out ahead of time who they are.”
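
Purely to make the mechanics concrete, here is a minimal sketch of how such a two-part screen might be scored. The cutoffs and question design are hypothetical, since the real screens are trade secrets:

```python
# Hypothetical caucus screen following Redlawsk's description: one
# question on past behavior, one self-placement scale for future
# behavior. The cutoffs below are invented for illustration only.

def is_likely_caucus_goer(attended_before: bool, likelihood: int) -> bool:
    """Return True if a respondent counts as a 'likely' attendee.

    attended_before: did the respondent attend a previous caucus?
    likelihood:      self-rating on a 1-10 'how likely are you to
                     attend' scale; only the top of the scale counts.
    """
    # Prior attendees qualify at a slightly lower self-rating;
    # first-timers must place themselves at the very top.
    return likelihood >= 9 or (attended_before and likelihood >= 8)
```

Even a screen like this is, as Redlawsk puts it, a guess at best: it has no way of spotting the roughly 40 percent of attendees who are new to the process each cycle.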

Once the caucuses are done, polling firms don't usually follow up to see who actually attended. While lists of people who voted in an election are public record, caucus attendance lists are in the hands of the parties, who charge candidates hefty fees for them. “Routine polls can't really follow up -- it's extremely costly to do it and it doesn't really help all that much,” said Redlawsk in the interview. “The American National Election Study (and some others from time to time) has done voter validation studies where they have gone back to see if people who claimed to have voted in post-election surveys actually had voted. They find voting turnout overstated pretty significantly -- something like 10 percent or more.”

Another issue is the "white lie" phenomenon. Historically, African-American and other minority candidates have done better in advance polls than in actual election results. The classic case is the 1989 Virginia governor's race, when Democrat Doug Wilder was well ahead in advance polls. Yet he won the election by less than 1 percent, becoming the nation's first elected African-American governor. Redlawsk says addressing these issues is tricky:
"In our polling here at UI we asked a couple questions. One set was whether it was important to you that a candidate be of your same race or gender. The vast majority of people said it didn't matter, of course, as you would expect they would. We also asked Democratic caucus goers if 'the fact that Clinton is a women will be a problem for her' and 'the fact that Obama is black will be a problem for him.' People were much more likely to suggest these things were problems when put this way."

In contrast, the April UI poll showed likely Republican caucus goers viewed John McCain's moderate positions and Rudy Giuliani's pro-choice stance as greater obstacles than Mitt Romney's Mormon faith.

Changing technology has presented pollsters with new problems. One major trend is the move away from land lines and toward cell phones, which tend to be unlisted. The cell-phone-only trend skews heavily toward young people.
"Cell only is a problem," says Redlawsk, "and refusal rates can be a problem. The answer is usually that pollsters 'weight' their samples to match known population parameters, like education, age, gender, etc., so that members of the sample may represent more or less than one actual person when it comes to statistical analysis. It's complex, and weighting schemes are often closely guarded, but it does work reasonably well statistically."

One type of poll that's not really a research poll is a "push poll." These are actually persuasion calls, usually negative ones ("Would you be less likely to support Howard Dean if you knew he ate kittens for breakfast?"). Redlawsk says the average voter might have a hard time distinguishing a push poll from a legitimate poll. He notes, “Push polls will ask questions that knock down one or more candidates -- but legitimate message testing may do that as well.” Citing the UI poll's questions about candidates' religion, race, gender and personal lives, he said, “That sounds like a push poll question, but our legitimate intent was to see whether perceived candidate flaws impacted voters' perceptions of the candidates.”

Push polls generally collect little actual data, said Redlawsk, and they go out to a larger group of people -- tens of thousands rather than a random sample of 1,000 or so. "But the recipient can't really tell the difference in those terms."

Redlawsk says push polling and telemarketing in general have dragged down response rates for "legitimate" polls a lot. "It's one of the big frustrations of telephone polling." He says pollsters deal with the lower response rates by doing statistical adjustments to the sample actually contacted to make it reflect known demographics of the population.

One approach to addressing higher refusal rates and unlisted cell numbers is Internet polling, in which respondents have actively chosen to participate. Zogby has been trying this approach. "Internet polling is still pretty uncertain," says Redlawsk. "Zogby seems to get good results, but they must be doing some kind of weighting process. The challenge with net polls is simply that it is not a random sample and all our statistical analyses assume random samples in order to be able to generalize to the public as a whole. I don't know what Zogby is doing to overcome this, but their numbers don't seem all that out of line."
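
The random-sample assumption is what gives the familiar margin of error its meaning. A quick sketch of the standard calculation shows why sheer size cannot rescue an opt-in panel:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p from a simple random
    sample of size n. The formula is meaningless for self-selected
    samples, however large, because it only measures sampling error.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A random sample of about 1,000 gives the familiar +/- 3 points.
print(margin_of_error(0.50, 1000))    # ~0.031
# A 100,000-person opt-in panel looks 10x tighter on paper, but the
# formula ignores who chose to join -- the error that actually matters.
print(margin_of_error(0.50, 100000))  # ~0.0031
```

Weighting of the kind described earlier is the usual attempt to close that gap, which squares with Redlawsk's guess that Zogby "must be doing some kind of weighting process."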

Redlawsk recommends this PBS article from its Savvy Voter 2004 election site as a good citizen's guide to evaluating polls. He adds, “I don't take any one poll all that seriously. What I look for is a confluence of polling: Do multiple polls all point in the same direction? In the end, mistrust every individual poll, but trust patterns.”
