A new set of Times/Siena polls, including one with The Philadelphia Inquirer, reveal an erosion of support for the president among young and nonwhite voters upset about the economy and Gaza.
If you’ve been following the polling there is nothing different or unique about this one. It’s consistent with pretty much all polling over the past 400 days. Biden is losing. Polling is definitely still broken, but it’s consistent. There is no fuckery.
Biden needs to be up by 4-12 in those states if he wants to win.
See my posts in !data_vizualisations@lemmy.world . I made a map of the polling offset Biden needs in order to win a given state, based on the fact that polls consistently overestimate how well Biden will do and underestimate how well Trump will do.
When you see these poll numbers, you should subtract 4 for Biden and add 8 for Trump. Those were the offsets we observed in the 2020 election.
So, keeping in mind the data you already have about Trump, Biden, polling, and its departure from real election results, it’s not even a question. Mortgage your house and put all your money on Trump to win. You have a differential polling error of 12 points in a Biden-Trump head-to-head. Biden needs to be in the mid to high fifties across the board to have a chance.
He’s in the low forties.
If you don’t end up clicking the link:
Relative polling error for Biden V Trump, 2020.
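The arithmetic being described can be sketched in a few lines; the 46/44 poll numbers below are made up for illustration, not taken from any real survey:

```python
# Offsets the comment derives from 2020: polls overstated Biden by
# about 4 points and understated Trump by about 8.
BIDEN_OFFSET = -4
TRUMP_OFFSET = +8

def adjust(biden_poll: float, trump_poll: float) -> tuple[float, float]:
    """Apply the 2020-derived offsets to raw head-to-head toplines."""
    return biden_poll + BIDEN_OFFSET, trump_poll + TRUMP_OFFSET

# Hypothetical poll showing Biden up 46-44:
adj_biden, adj_trump = adjust(46, 44)
print(adj_biden, adj_trump)  # prints 42 52
```

Under that assumed correction, a nominal 2-point Biden lead becomes a 10-point deficit, which is the 12-point differential error the comment refers to.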
If you’ve been following the polling there is nothing different or unique about this one.
They posted their methodology and to me, as an unqualified lay person, it’s clearly shit, and there’s no reason to think it’ll yield anything even resembling an accurate picture of how people are going to vote in the election. It’s not surprising to me that recent polls in general tend to be as inaccurate as you’re saying they are.
I would be interested to go back and look at some of the polling that led up to recent special elections where Democrats won, and see how the poll results compared with the election results – if you follow polling in detail (which again, I don’t), do you happen to know where I could look to find that?
They posted their methodology and to me, as an unqualified lay person (…)
So like, if you know the above statement to be true, that’s exactly where you should stop in your reasoning. This is something I find Americans to be guilty of constantly: having the humility to understand that they shouldn’t have an opinion, and then proceeding to arrogantly hold the opinion they just acknowledged they shouldn’t have. I think it’s a deeply human thing; we evolved having to deal with missing information, so our brain fills in gaps and gives us convincing narratives. However, you have to resist the tendency when you know you really don’t know, and even more so when your beliefs go against what the data shows.
If you can find me some sources of data on special elections, I’ll happily analyze it for you. I think it would be interesting if nothing else to see the offset. I’m not on my desktop machine, but I’ll give you some sources for data since you asked.
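The comparison being asked about is simple to compute once the numbers are in hand. Here is a sketch with made-up (final poll margin, actual result margin) pairs standing in for real special-election data:

```python
# Hypothetical final-poll and actual-result margins (Dem minus Rep,
# in points) for a handful of special elections. Real data would be
# substituted here.
races = [
    (-3.0, 2.0),
    (1.0, 6.5),
    (-5.0, -1.0),
]

# Offset = result minus poll; a positive mean means Democrats
# ran ahead of their polls.
offsets = [result - poll for poll, result in races]
mean_offset = sum(offsets) / len(offsets)
print(round(mean_offset, 2))  # prints 4.83 for these illustrative pairs
```

With real race data, the sign and size of that mean offset would tell you whether special-election polling has shown the same skew as the 2020 presidential polls.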
Surely, as a qualified non-lay person, you’ll be able to do a detailed takedown of all the criticism of the poll’s methodology I arrived at from about two minutes of looking, instead of just making a broad assertion that if the polling was wrong by a certain amount in a previous year, we should add that amount to this year’s polling to arrive at reality, that that’s all that’s needed, and that this year’s corrected poll will then always be accurate.
Because to me, that sounds initially plausible, but when you look at it for a little bit longer you say: oh wait, hang on, if that was all that was needed, the professional pollsters could just do that, and their answers would always be right. And you wouldn’t need to look closely at the methodology at all, just trust that “it’s a poll” means it’s automatically equal to every other poll (once you apply the magic correction factor).
To me that sounds, on close scientific examination, like a bunch of crap once you think about it for a little bit. But what do I know. I’m unqualified. I’ll wait for you to educate me.
I think the right answer is to do what you described, in the aggregate. Don’t do it on a pollster-to-pollster basis; do it at the state level, across all polls. You don’t do this as a pollster because that isn’t really what you are trying to model with a poll, and polls being wrong or uncertain is just part of the game.
So it’s important to not conflate polling with the meta-analysis of polling.
I’m not so much interested in polls or polling as in being able to use them as a source of data to model outcomes that they may not individually be able to predict. Ultimately a poll needs to be based on the data it samples from to be valid. If there is something fundamentally flawed in the assumptions that form the basis of this, there isn’t that much you can do to fix it with updates to methods.
The -4/+8 spread is the prior I’m walking into this election year with. That, in spite of pollsters’ best efforts to come up with an unbiased sample, they can’t predict the election outcome is fine. We can deal with that in the aggregate. This is very similar to Nate Silver’s approach.
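A minimal sketch of applying that prior at the state level rather than poll by poll; the state margins below are illustrative placeholders, not real polling:

```python
# Hypothetical polled margins (Biden minus Trump, in points) per state.
state_polls = {
    "PA": [1.0, 2.0, -1.0],
    "MI": [3.0, 2.0],
    "WI": [0.0, -1.0, 1.0],
}

# The -4/+8 spread collapses to a 12-point differential error on the margin.
DIFFERENTIAL_ERROR = 12.0

def adjusted_margin(polls):
    """Average a state's polled margins, then shift by the assumed error."""
    return sum(polls) / len(polls) - DIFFERENTIAL_ERROR

for state, polls in state_polls.items():
    print(state, round(adjusted_margin(polls), 1))
```

Averaging first and correcting once per state, instead of correcting each poll, is what keeps this a meta-analysis rather than a change to any individual pollster’s method.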
If there is something fundamentally flawed in the assumptions that form the basis of this, there isn’t that much you can do to fix it with updates to methods.
My only critique is that I don’t think the 2020 skew is valid anymore. After Dobbs, the landscape seems to have significantly changed. 2022 was predicted to favor Republicans by a strong margin, but it ended up being pretty much a tie. And a lot of special elections have had surprising results too.
My personal opinion is that polling methodology may have overcorrected for 2020, and we’re getting a picture now that’s skewed right, versus left from beforehand.
It’s really hard to say, though. There weren’t a lot of great polls to start with in 2022, and special elections don’t have significant polling either. It’s a weird position where the only good data set we have is from 2020, but there have been so many changes in the national environment that we have reason to doubt the skews from 2020 are still valid. But at the same time, what else do we have? Vibes and feelings and anecdotes. And the engineer in me dislikes dismissing data in favor of vibes. I still think it’s important to consider, because none of this is infallible. But I honestly couldn’t tell you what the “right” outlook to have is. Maybe I’m onto something, but maybe I’m just letting optimism bleed into my better judgement.
My personal opinion is that polling methodology may have overcorrected for 2020, and we’re getting a picture now that’s skewed right, versus left from beforehand.
I won’t say that you’re wrong about what the pollsters are doing, but this strikes me as very obviously the wrong way to do it.
If you find out your polls were wrong, and then, instead of digging into exactly what went wrong and fixing the methodology going forward (using non-phone polls, doing a more accurate calculation to make sure you’re weighting the people who are going to vote and not the people who aren’t, things like that), you just make up a fudge factor for how wrong the polls were last time and assume that adding it in means you don’t have to fix what went wrong at a more fundamental level, that seems guaranteed to keep being wrong for as long as you’re doing it.
Again I won’t say you’re wrong about how they’re going about it. (And, I’m not saying it’s necessarily easy to do or anything.) But I think you’ve accurately captured the flaw in just adding a fudge factor and then assuming you’ll be able to learn anything from the now-corrected-for-sure-until-next-time-when-we-add-in-how-wrong-we-were-this-time answers.
That’s the thing: we don’t know how they’re correcting for it, or whether it is just a fudge factor. The issue is that there are more confounding factors than anyone could list which could be the culprit here.
A fudge factor is easy, but it’s the wrong solution here. The right solution is incredibly complex and difficult to even identify. In my field we can sometimes get away with using a timer instead of a precise calculation. That really isn’t an option for polls. I don’t envy the people trying to fix the models.
I mean, if we’re stepping off the data into editorializing: Trump outperformed all other Republicans in 2020, like he also did in 2016. Meanwhile, Trump-endorsed candidates struggled in 2018, in 2022, and in special elections. My read of this evidence, and I’ve seen it suggested elsewhere, is that whatever property causes Trump to consistently overperform isn’t transitive. So evaluating how well Trump will perform based on how well Republicans generally are performing is misguided. You should evaluate candidates individually, and that would be consistent with their actual performance.
Also, this is one poll, but the aggregate of polling agrees with it. The methodological changes they make from year to year are in fact extremely minor, and they are doing the appropriate statistical accounting, afaict. There is nothing weird or wonky about these polls: Biden is just performing very, very poorly. I’ve been saying this for months to an onslaught of downvotes from people who simply don’t want to believe this to be the case.
Finally, I’ll argue that the ‘right’ outlook is always the one that aligns most closely with the data. We should believe the stories we tell about data less than the data itself. There is nothing to suggest that this election will really be any different from the three previous ones, and in terms of landscape, the best proxy for the contested states appears to be 2016. You should believe the data that is telling you that Joe Biden is losing this election. Biden has been setting up to lose the upper Midwest since December. These are the same states Hillary lost.
maybe I’m just letting optimism bleed into my better judgement
I agree. It’s also what the political pundit class did when they completely whiffed on 2016, and it’s what they’re doing right now. 90% of Lemmy also agrees with your sentiment, and in both Lemmy’s and the punditry’s refusal to be critical of Biden, to drag him towards more popular policies, they’re setting Trump up for victory.
I don’t really disagree with anything you’ve said, it’s a very valid take – and you’re spot on about underestimating Trump but overestimating Republican cohorts in polls. My only qualifier there is that we don’t know if 2022 models were overtuned for only Republicans, or also Trump support.
I don’t know if we can take 2016 as representative of our current dynamic. It’s certainly more representative than 2020, but shifting populations and world/domestic events have had massive impacts.
In short? I don’t know which outlook is more accurate. What I can reasonably assert though is that the reality will be somewhere between the less optimistic and the more optimistic outlooks. Taking these poll results at face value is probably the better strategic option anyway to create pressure to go vote and campaign.
I agree though, we shouldn’t be totally dismissive of these polls. It’s fine to scrutinize and question them like I’ve said, but it shouldn’t take away from the very real possibility that these are correct. Oddities don’t create impossibilities.
On this, we 100% agree.
This is quite interesting, thanks for sharing!
All I know is that I don’t know.