A new set of Times/Siena polls, including one with The Philadelphia Inquirer, reveal an erosion of support for the president among young and nonwhite voters upset about the economy and Gaza.
My personal opinion is that polling methodology may have overcorrected for 2020, and we’re now getting a picture that’s skewed right, where before it was skewed left.
I won’t say you’re wrong about what the pollsters are doing – but this strikes me as very obviously the wrong way to do it.
Say you find out your polls were wrong. Instead of digging into exactly what went wrong and fixing the methodology going forward – using non-phone polls, doing a more accurate calculation so you’re weighting the people who are actually going to vote and not the people who aren’t, things like that – you just make up a fudge factor for how wrong the polls were last time, and assume that adding that fudge factor in means you don’t have to fix the more fundamental problems. That seems guaranteed to keep being wrong for as long as you’re doing it.
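To make the distinction concrete, here’s a toy sketch of the two approaches. All the numbers (respondents, turnout probabilities, last cycle’s error) are invented for illustration – this isn’t how any real pollster’s model works, just the shape of the difference between a flat correction and likely-voter weighting.

```python
# Toy comparison: flat "fudge factor" vs. weighting by likelihood of voting.
# Every number here is invented for the sketch.

respondents = [
    # (supports_candidate_a, probability_of_voting)
    (True, 0.9),
    (True, 0.4),
    (False, 0.8),
    (False, 0.7),
    (True, 0.2),
]

# Naive topline: treat every respondent equally.
naive = sum(a for a, _ in respondents) / len(respondents)

# "Fudge factor": shift the naive number by last cycle's observed miss.
last_cycle_error = -0.03  # invented
fudged = naive + last_cycle_error

# Likely-voter weighting: weight each response by turnout probability,
# so respondents unlikely to vote count for less.
weighted = sum(a * p for a, p in respondents) / sum(p for _, p in respondents)

print(f"naive={naive:.3f} fudged={fudged:.3f} weighted={weighted:.3f}")
```

The fudged number only encodes how wrong you were last time; the weighted number actually models *why* the raw sample misleads, which is the kind of fix that can generalize to the next election.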
Again, I won’t say you’re wrong about how they’re going about it. (And I’m not saying it’s necessarily easy to fix, either.) But I think you’ve accurately captured the flaw in just adding a fudge factor and then assuming you’ll be able to learn anything from the now-corrected-for-sure-until-next-time-when-we-add-in-how-wrong-we-were-this-time numbers.
That’s the thing: we don’t know how they’re correcting for it, or whether it is just a fudge factor. The issue is that there are more confounding factors than anyone could list that could be the culprit here.
A fudge factor is easy, but it’s the wrong solution here – and the right solution is incredibly complex and difficult to even identify. In my field we can sometimes get away with using a timer instead of a precise calculation. That really isn’t an option for polls. I don’t envy the people trying to fix the models.