
One change to our poll: we’re keeping respondents who drop off during the call


On Saturday, we’ll release the results of the latest New York Times/Siena College national poll, including what voters think about the candidates, the election and the state of the country.

This time we’re making a modest methodological change that we wanted to tell you about up front: we’re retaining respondents who started our survey but then “dropped out” before the end of the interview.

It’s a bit wonky (a 6 out of 10, I’d say), but hopefully useful to those who follow our polls closely. It does change our results, though only by a percentage point.

Here’s the basic problem: the interviews for our national surveys are conducted over the phone (usually cell phones) and take about 15 minutes to complete. About 15 percent of respondents who tell us how they will vote in the upcoming election decide – politely or not – to stop taking the survey before answering all our questions.

We call these respondents “drop-offs.”

Astute readers of this newsletter know that we have been interested in “drop-off” respondents since our experiment in Wisconsin in 2022. The drop-offs vote less often, are less likely to have a college degree, and are younger and more diverse.

These are exactly the kinds of respondents whom pollsters already struggle to reach, which makes it all the more frustrating to lose a disproportionate share of them while a survey is underway.

Even if there is no effect on the outcome, losing these respondents reduces our response rate, drives up costs, and increases the need for “weighting” – a statistical technique to give more weight to respondents from groups that would otherwise be underrepresented. At worst, the drop-offs may have different political views than the demographically similar respondents who complete the interviews, biasing our survey toward the most interested respondents.
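To make the weighting idea concrete, here is a minimal sketch in Python with made-up numbers. Real polls weight on many variables at once, often by raking, but the core move is the same: respondents from underrepresented groups count for more.

```python
# A minimal sketch of survey weighting on a single variable, with made-up
# numbers. Each respondent's weight is the ratio of their group's share of
# the population to its share of the sample.

# Hypothetical population shares (e.g., from census figures) and the shares
# that actually showed up in the sample.
population_share = {"college": 0.35, "no_college": 0.65}
sample_share = {"college": 0.55, "no_college": 0.45}  # graduates over-respond

weights = {
    group: population_share[group] / sample_share[group]
    for group in population_share
}

print(weights)
# {'college': 0.636..., 'no_college': 1.444...}
# Each non-college respondent counts about 1.44 times as much; each
# college respondent counts about 0.64 times as much.
```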

Over the past eight Times/Siena polls, we’ve evaluated the effect of losing these voters and experimented with ways to keep them. The only visible sign of this experiment is that we asked about age and education early in our surveys: questions that let us assess, behind the scenes, how the drop-off respondents differ from those who stay on the line.

Despite their demographic characteristics, the drop-off respondents are more likely to support Donald J. Trump than those who complete the survey. Over the last eight Times/Siena polls, Mr. Trump held a nine-point lead over President Biden among the drop-off respondents, compared with a three-point lead among those who finished the survey. Strikingly, this Trump advantage survives or even grows after accounting for the demographic characteristics we use for weighting, such as race and education. As a result, retaining the drop-offs would have shifted the average Times/Siena result among registered voters over the past eight surveys from Trump +3 to Trump +4.

This one-point shift is not consistent from poll to poll. But it does show up in our last Times/Siena poll in December, which found Mr. Trump with a two-point lead among registered voters and would have shown a three-point lead if we had kept the drop-offs.

The same is true of the Times/Siena poll we will release Saturday morning, which would be one point better for Mr. Biden without the drop-off respondents.

It is not common for pollsters to keep the drop-offs. I think almost everyone would agree that it’s worthwhile to include these respondents in a survey, but there are serious practical challenges to doing so.

The difficulty centers on how to handle all the questions later in the survey that a large share of respondents never answered.

This poses two specific problems.

One is weighting: a respondent who drops out doesn’t answer the demographic questions we use to ensure a representative sample. The solution here is relatively simple: ask the key demographic questions at the beginning of the survey, and count anyone who gets past them as a “completed” interview.
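As a rough illustration, here is what that rule might look like in code. The field names and the notion of a respondent as a dictionary are invented for the example; the actual survey software surely differs.

```python
# A sketch of the "completed interview" rule described above, with
# hypothetical field names. A respondent counts as complete for weighting
# purposes once they have answered the key demographic questions, even if
# they hang up later in the interview.

DEMOGRAPHIC_KEYS = ["age", "education", "race", "gender"]

def counts_as_complete(responses: dict) -> bool:
    """True if the respondent answered every question used for weighting."""
    return all(responses.get(key) is not None for key in DEMOGRAPHIC_KEYS)

respondents = [
    {"age": 34, "education": "college", "race": "white", "gender": "f",
     "ideology": None},   # dropped off later: still kept
    {"age": 61, "education": None, "race": None, "gender": None,
     "ideology": None},   # dropped off too early: excluded
]

kept = [r for r in respondents if counts_as_complete(r)]
print(len(kept))  # 1
```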

Second, and more challenging, is reporting the results of the later questions in a survey.

Imagine that the last question on a poll asks whether respondents are liberal, moderate or conservative, and that those who answer are 25 percent liberal, 35 percent conservative and 40 percent moderate. Imagine, too, that 15 percent of the initial respondents have dropped off by this point in the survey.

If we keep the drop-offs and do nothing else, the industry standard would be to report a result like 21-30-34, with 15 percent unknown, instead of 25-35-40. That would be unwieldy for many questions. It might even lead readers to complain that we found too few liberals or conservatives, if they don’t work out how many we might have had without the drop-offs.
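The arithmetic behind those numbers is simple: the 25-35-40 split applies only to the 85 percent who answered, so keeping the drop-offs in the denominator scales every category by 0.85. A quick sketch:

```python
# The 25-35-40 split is among the 85 percent who answered, so putting the
# dropouts back in the denominator rescales every category by 0.85 and
# leaves 15 percent unknown.

answered_share = 0.85
among_answerers = {"liberal": 25, "conservative": 35, "moderate": 40}

reported = {k: round(v * answered_share) for k, v in among_answerers.items()}
reported["unknown (dropped off)"] = 15

print(reported)
# {'liberal': 21, 'conservative': 30, 'moderate': 34,
#  'unknown (dropped off)': 15}
```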

Worse still, the respondents who answer the questions toward the end of the survey will not be representative of the whole population. The drop-offs, after all, are disproportionately nonwhite, young and less educated. That means the 85 percent of respondents who make it to the end will be disproportionately white, older and more highly educated.

Counterintuitively, then, retaining the drop-off respondents will usually produce biased results for the questions toward the end of the survey, in return for reduced bias on the questions toward the beginning. Here is how we will handle it:

For the first half of the survey, we will report results from the full group of 980 respondents who answered the questions used for weighting, including the 157 respondents who dropped off later in the survey. They will be weighted the same way as in a regular Times/Siena poll.

For the questions asked after the demographic questions used for weighting, we will report results for the 823 respondents who completed the full questionnaire. This is the group that would have made up the entire Times/Siena poll in the past. They will be weighted separately, in the same way as a regular Times/Siena poll, with one twist: they will also be weighted to match the general-election results of the full sample, including the drop-offs.
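Here is a stylized sketch of that extra step, with invented vote shares. The real weighting rakes across many variables at once; this shows only a one-variable version of the idea.

```python
# A stylized sketch of the extra weighting step, with invented numbers.
# The completed interviews get an additional adjustment so that their
# weighted vote-choice shares match the full sample, drop-offs included.

# Hypothetical vote shares among the 823 completes vs. the full 980.
completes_share = {"trump": 0.47, "biden": 0.44, "other": 0.09}
full_sample_share = {"trump": 0.48, "biden": 0.43, "other": 0.09}

# Multiply each completed respondent's existing weight by this factor,
# chosen according to their stated vote choice.
adjustment = {
    choice: full_sample_share[choice] / completes_share[choice]
    for choice in completes_share
}

print(adjustment)
# {'trump': 1.021..., 'biden': 0.977..., 'other': 1.0}
```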

You may notice the most obvious change right away: there are 157 fewer respondents in the second half of the survey than in the first half. But there’s more to it: the demographic makeup of the 823 respondents will differ slightly from the full sample, because even weighting doesn’t force a perfect match between a poll’s characteristics and its target population. Hopefully readers will find this acceptable; if not, there may be other options we can try in the future. After all, this is our first time doing this, and I expect we’ll gradually get better at presenting these results, especially as we see what readers notice.

So if you’re unhappy when you look at our survey results tomorrow, let us know!
