Political reporters often pass along poll results about candidates or issues without examining whether the polling sample makes any sense. Because a poll's sample can almost single-handedly determine its results, checking that sample would seem like an elementary requirement for any competent journalist writing about a poll.[pullquote]You shouldn’t believe everything you read (or hear) in the news.[/pullquote]
Often, the result is that journalists fail to give the public useful information from polls and instead portray the skewed reality that a shallow reading of a poll usually produces. I have often wondered about this glaring hole in the world of journalism – especially since filling it doesn’t really require any statistical knowledge, just a few minutes of Internet research combined with a little common sense.
For instance, a new presidential election poll from the Washington Post found President Obama leading Mitt Romney by 7 points in Virginia (51-44), which many believe will be a crucial swing state in this year’s presidential election. This poll result came on the heels of a similar Virginia polling result from Public Policy Polling (PPP), which found President Obama with an 8-point edge over Mitt Romney (51-43). But what did the samples look like that produced this comfortable lead for the president?
It turns out that the polling samples contained a 6-point partisan edge for Democrats in the Post poll (Democrats were 31 percent of the sample, Republicans 25 percent, and independents 36 percent) and a 7-point Democratic edge in the PPP poll (39 percent Democrat, 32 percent Republican, 29 percent independent). Given the importance of the sample’s makeup in influencing poll results, this raises the question of whether a 6- or 7-point Democratic sampling edge makes any sense as a reflection of the upcoming election.
In 2008 – a very good election year for the Democratic Party – exit polls from Virginia showed a 6-point Democratic advantage in turnout (39 percent Democrat, 33 percent Republican, and 27 percent independent), resulting in a 6-point win (53-47) for President Obama in the state – margins similar to those in both polls’ samples and results. In other words, the primary piece of information in these polls is that if the 2012 election ends up like the 2008 election, President Obama will win Virginia in a similar manner – not exactly an earth-shattering result.
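To see concretely why the sample's partisan mix matters so much, here is a minimal Python sketch of the arithmetic. The support-by-party rates below are hypothetical illustrations chosen for the example – they are not figures reported by the Post or PPP – but the party-ID mix for the first scenario matches the Post sample described above (31 percent Democrat, 25 percent Republican, 36 percent independent):

```python
# Sketch: how a poll's party-ID mix drives its topline result.
# Support-by-party rates are HYPOTHETICAL, not from the actual polls.

def topline(mix, support):
    """Weighted average of candidate support across party-ID groups."""
    total = sum(mix.values())
    return sum(mix[group] * support[group] for group in mix) / total

# Hypothetical share of each party-ID group supporting the incumbent.
support = {"dem": 0.90, "rep": 0.07, "ind": 0.48}

# Party-ID mix from the Post sample (a D+6 sample): 31% D, 25% R, 36% I.
post_mix = {"dem": 31, "rep": 25, "ind": 36}

# An alternative even-partisanship mix (D+0) for comparison.
even_mix = {"dem": 31, "rep": 31, "ind": 30}

print(f"D+6 sample:  {topline(post_mix, support):.1%}")  # ~51.0%
print(f"Even sample: {topline(even_mix, support):.1%}")  # ~48.3%
```

With identical (hypothetical) support rates inside each group, merely shifting the sample from D+6 to an even partisan split moves the topline by roughly three points – which is exactly why the sample's composition deserves scrutiny before the headline number does.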
But is it really reasonable to believe that this year’s presidential election is going to end up similar to one of the best Democratic election years in recent memory? With a tepid economic recovery dragging down public approval of the president and an unpopular national health care law energizing his political opponents, this is an extremely unlikely result, barring some dramatic, game-changing political event between now and November. But in the reporting on these polls, this common-sense reasoning has simply failed to register with political reporters.
What’s more, a sample skewed toward Democrats has been a regular occurrence in Washington Post polls – for an illustration, see this article. In other words, in the case of polls from the Post, the problem is not that reporters simply missed the boat this time, but that they have repeatedly failed to address an issue that is continually staring them in the face.
As it relates to Utah, all of this goes to show that media coverage (or non-coverage) of numbers and statistics is not a great gauge of the numbers’ validity or lack thereof. Reporters are imperfect human beings just like the rest of us, with their own personal biases and lazy moments that make them just as prone to getting things wrong as any other reasonable, well-meaning person.
In other words, you shouldn’t believe everything you read (or hear) in the news.