Now that we have a little more data about how voters are processing the new Kamala Harris-versus-Donald Trump presidential matchup, the poll unskewing — efforts to prove that certain unfavorable survey results are missing the mark — has begun.
Right out of the gate comes Tim Saler, a data consultant for Mr. Trump’s campaign, who takes issue with the latest CBS News/YouGov poll showing Ms. Harris ahead of Mr. Trump by one point nationally and running very close to Mr. Trump in the key battleground states. In an internal memo that the Trump-Vance campaign made public, Mr. Saler writes:
The latest CBS/YouGov poll of registered voters nationwide showing margin-of-error shifts in the national head-to-head ballot between President Trump and Kamala Harris is entirely the result of a methodological decision allowing ideology to change significantly, while maintaining weights on age, partisanship and race to make the survey appear not to have been manipulated. Without this manipulation, President Trump would be maintaining a 51-49 lead in their Aug. 4 survey.
There’s a lot of interesting and potentially controversial stuff going on with this poll and how it estimates the results in the battleground states, but for today, let’s unpack Mr. Saler’s particular complaint. His contention is that the only reason the poll shows Ms. Harris doing so well is a “methodological decision” about what factors to hold steady and what factors to allow to shift from poll to poll, with the implication that those choices were made intentionally to “manipulate” the results. (A Trump campaign senior adviser, Brian Hughes, went further, calling it a “national gaslighting campaign.”)
Whenever we pollsters conduct a survey, we know that the sample of people we talk to may not exactly match up demographically with the broader population. As a result, we adjust our data to align with known benchmarks, a process known as weighting. The New York Times/Siena College poll, for instance, weights its results along a dozen dimensions, and that’s just for understanding registered voters, to say nothing of the additional modeling that goes into the poll’s results among the likely electorate.
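To make the mechanics concrete, here is a minimal sketch of weighting on a single dimension; the age groups, sample and population targets are hypothetical, and a real poll like Times/Siena adjusts across many dimensions at once rather than one at a time.

```python
# A minimal, hypothetical sketch of survey weighting (not any pollster's actual method).
# Each respondent gets a weight so that the weighted sample matches a known
# population benchmark; here, a single made-up age-group distribution.

from collections import Counter

# Hypothetical raw sample: one age group per respondent
sample = ["18-29", "30-44", "30-44", "45-64", "45-64",
          "45-64", "65+", "65+", "65+", "65+"]

# Hypothetical population benchmarks (shares sum to 1.0)
population_targets = {"18-29": 0.20, "30-44": 0.25, "45-64": 0.33, "65+": 0.22}

# Observed share of each group in the sample
counts = Counter(sample)
n = len(sample)
sample_shares = {g: counts[g] / n for g in population_targets}

# Weight for each group = target share / observed share,
# so over-represented groups are weighted down and under-represented groups up
weights = {g: population_targets[g] / sample_shares[g] for g in population_targets}

for g in sorted(weights):
    print(f"{g}: sample {sample_shares[g]:.0%}, "
          f"target {population_targets[g]:.0%}, weight {weights[g]:.2f}")
```

When a poll weights on a dozen dimensions simultaneously, the same idea is typically applied iteratively (a procedure often called raking), cycling through the benchmarks until the weighted sample lines up with all of them at once.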
Weighting a survey by factors like age and race is quite standard and is a basic matter of good research practice. Weighting a political survey by party identification is also fairly common these days, though not without controversy and complication; while my age will always be a fact rooted in the year of my birth, I may wake up tomorrow and decide I am no longer a member of the political party with which I identified yesterday. Pollsters have robust and friendly debates over the best ways to keep a poll from over- or under-sampling people of a certain party. (And yes, when it comes to sampling, pollsters sometimes get it wrong.)