The political polling industry is a mess. Fewer and fewer people are willing to respond to telephone surveys, particularly automated ones, and the costs of live interviews are climbing ever higher. Meanwhile, polls have gained prominence in the political media (FiveThirtyEight itself is a part of this trend), and the Internet seems willing to give a home to almost any survey.
Demand is up, and quality supply is down. The result: pollsters that use nontraditional methodologies such as online and automated surveys are getting more press than ever, and they get included in the models of the main polling aggregators, including FiveThirtyEight, HuffPost Pollster and RealClearPolitics.
The problem is that many of these nontraditional polls may be cheating, adjusting their results to resemble higher-quality polls. We can see this by looking at polling from the final three weeks of Senate campaigns since 2006: in races without traditional, live-interview surveys (what we’ll call gold-standard polling), nontraditional polls have had significantly higher errors than they’ve had in races with at least one gold-standard poll. Gold-standard surveys appear to be the LeBron Jameses of the polling world: They make everyone around them better.
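The comparison described above — grouping nontraditional polls by whether their race had at least one gold-standard poll, then comparing average errors across the two groups — can be sketched roughly as follows. All of the data below is invented for illustration; FiveThirtyEight’s actual dataset and error metric are not reproduced here.

```python
# Hypothetical sketch: compare nontraditional poll errors in races
# with vs. without at least one gold-standard (live-interview) poll.
# Every number below is made up for illustration only.

polls = [
    # (race, pollster_type, absolute_error_in_points)
    ("Race A", "gold", 2.1),
    ("Race A", "nontraditional", 2.5),
    ("Race B", "nontraditional", 6.8),
    ("Race B", "nontraditional", 5.9),
    ("Race C", "gold", 1.8),
    ("Race C", "nontraditional", 3.0),
]

# Races that had at least one gold-standard poll
gold_races = {race for race, kind, _ in polls if kind == "gold"}

with_gold = [err for race, kind, err in polls
             if kind == "nontraditional" and race in gold_races]
without_gold = [err for race, kind, err in polls
                if kind == "nontraditional" and race not in gold_races]

def avg(xs):
    return sum(xs) / len(xs)

print(f"Avg nontraditional error, gold-standard present: {avg(with_gold):.2f}")
print(f"Avg nontraditional error, no gold-standard poll: {avg(without_gold):.2f}")
```

If nontraditional pollsters really are anchoring their numbers to nearby gold-standard results, the first average will be noticeably smaller than the second — which is the pattern the article reports in Senate races since 2006.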
That’s how it’s supposed to work in basketball but not in polling, and this is a major problem for anyone watching 2014’s races. There hasn’t been a gold-standard poll released to the public at all in Alaska’s Senate race; in three months in Arkansas’s and Kentucky’s Senate races; ever in Louisiana’s likely Senate runoff; and in nearly four months in North Carolina’s Senate race. The only polls we can consider in these races were conducted by pollsters who, as a group, have historically fared considerably worse when the gold-standard pollsters weren’t around.
Read more here: Are Bad Pollsters Copying Good Pollsters?
The post FiveThirtyEight: Are Bad Pollsters Copying Good Pollsters? appeared on Latest Political Polls.