A few weeks ago, the Trump campaign yelled “fake news” about a CNN poll that showed Biden ahead by 14 points nationwide. Although it may be tempting to dismiss this as the usual election-year carping, there’s a question worth exploring here: How is it that polls can show massively different results, even when they’re taken at the same time and supposedly assess the same population?
In our latest post on GitHub, we explain that the issue comes down to weighting. As a simple example, if I ask 1,000 Americans whom they support for President, those people may not be representative of the likely electorate. So I need to do some math to adjust for the fact that, by chance, my respondents might have been, say, 90% Republicans or 90% Democrats. That was the essence of Trump’s complaint:
The Trump campaign hired a pollster to “analyze” the CNN poll and then demanded CNN retract the poll, which CNN promptly refused to do. The substantive objection made by Trump’s campaign was that CNN should have weighted their poll differently, such that the fraction of Republicans, Democrats and Independents was the same as among voters in 2016.
In the post, we explain why Trump’s suggestion doesn’t make much sense. But he inadvertently raised an important and interesting issue: There are many possible and credible ways to weight a poll, and no single one is “right”. We illustrate this point by looking back at the 2016 election and applying several reasonable weighting schemes, which yield wildly different results.
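To make that point concrete, here is a minimal sketch of party-mix weighting. All numbers below are hypothetical and chosen purely for illustration (they are not from the CNN poll or our post): the same raw responses produce different toplines depending on which target party mix the pollster weights to.

```python
def weighted_support(support_by_party, target_shares):
    """Topline support for a candidate after weighting the sample
    so its party mix matches a chosen target party mix."""
    return sum(
        support_by_party[party] * target_shares[party]
        for party in support_by_party
    )

# Hypothetical support rates for one candidate within each party group.
support = {"Dem": 0.95, "Rep": 0.05, "Ind": 0.50}

# Two plausible-looking target party mixes (both sum to 1):
mix_even = {"Dem": 0.33, "Rep": 0.33, "Ind": 0.34}  # an even three-way split
mix_alt  = {"Dem": 0.36, "Rep": 0.33, "Ind": 0.31}  # a slightly Dem-leaning mix

print(weighted_support(support, mix_even))  # 0.50
print(weighted_support(support, mix_alt))   # 0.5135
```

Even a three-point shift in the assumed party mix moves the topline by more than a point here, and the gap grows as within-party support becomes more polarized. Neither target mix is objectively “right”, which is the crux of the issue.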
There’s more in the full post — please check it out! We hope it will make you a better-educated consumer of polling data as the election season continues.
Image: Craig Clark via needpix.com.