I did fake Bayesian math with some plausible numbers, and found that if I started out believing there was a 20% per decade chance of a lab leak pandemic, then if COVID was proven to be a lab leak, I should update to 27.5%, and if COVID was proven not to be a lab leak, I should stay around 19-20%.

This is so confusing: why bother doing “fake” math? How does he justify these numbers? Let’s look at the footnote:

Assume that before COVID, you were considering two theories:

  1. Lab Leaks Common: There is a 33% chance of a lab-leak-caused pandemic per decade.
  2. Lab Leaks Rare: There is a 10% chance of a lab-leak-caused pandemic per decade.

And suppose before COVID you were 50-50 about which of these were true. If your first decade of observations includes a lab-leak-caused pandemic, you should update your probability over theories to 76-24, which changes your overall probability of pandemic per decade from 21% to 27.5%.
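For what it's worth, the made-up numbers are at least internally consistent. A quick sketch (in Python, using Scott's invented priors, not any real data) reproduces the 21% → 27.5% figure, and the ~19-20% figure from the no-leak case:

```python
# Scott's two hypothetical theories (his made-up numbers, not data)
rate_common, rate_rare = 0.33, 0.10   # per-decade lab-leak pandemic rates
p_common, p_rare = 0.5, 0.5           # 50-50 prior over the two theories

# Prior probability of a lab-leak pandemic in a given decade
prior_rate = p_common * rate_common + p_rare * rate_rare   # 0.215, i.e. ~21%

# Case 1: observe a decade containing a lab-leak pandemic
post_common = p_common * rate_common / prior_rate          # ~0.767, his "76-24"
post_rate = post_common * rate_common + (1 - post_common) * rate_rare  # ~0.277

# Case 2: observe a decade with no lab-leak pandemic
p_no_leak = p_common * (1 - rate_common) + p_rare * (1 - rate_rare)
post_common_no = p_common * (1 - rate_common) / p_no_leak  # ~0.427
post_rate_no = (post_common_no * rate_common
                + (1 - post_common_no) * rate_rare)        # ~0.198, his "19-20%"
```

So the arithmetic checks out; the complaint below is about where the inputs came from, not the multiplication.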

Oh, he doesn’t, he just made the numbers up! “I don’t have actual evidence to support my claims, so I’ll just make up data and call myself a ‘good Bayesian’ to look smart.” Seriously, how could a reasonable person have been expected to be concerned about lab leaks before COVID? It simply wasn’t something in the public consciousness. This looks like some serious hindsight bias to me.

I don’t entirely accept this argument - I think whether or not it was a lab leak matters in order to convince stupid people, who don’t know how to use probabilities and don’t believe anything can go wrong until it’s gone wrong before. But in a world without stupid people, no, it wouldn’t matter.

Ah, no need to make the numbers make sense, because stupid people wouldn’t understand the argument anyway. Quite literally: “To be fair, you have to have a really high IQ to understand my shitty blog posts. The Bayesian math is extremely subtle…” And, convince stupid people of what, exactly? He doesn’t say, so what was the point of all the fake probabilities? What a prick.

  • swlabr@awful.systems · 11 months ago

    Scott is saying essentially that “one data point doesn’t influence the data as a whole that much” (usually true)… “so therefore you don’t need to change your opinions when something happens” which is just so profoundly stupid. Just so wrong on so many levels. It’s not even correct Bayesianism!

    (if it happens twice in a row, yeah, that’s weird, I would update some stuff)

    ??? Motherfucker have you heard of the paradox of the heap? What about all that other shit you just said?

    What is this really about, Scott???

    Do I sound defensive about this? I’m not. This next one is defensive.

    I’m part of the effective altruist movement.

    OH ok. I see now. I mean I’ve always seen, really, that you and your friends work really hard to come up with ad hoc mental models to excuse every bit of wrongdoing that pops up in any of the communities you’re in.

    You definitely don’t get this virtue by updating maximally hard in response to a single case of things going wrong. […] The solution is not to update much on single events, even if those events are really big deals.

    Again, this isn’t correct Bayesian updating. The formula is the formula. Biasing against recency is not in it. And that’s just within Bayesian reasoning!
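The commenter’s point, that the update rule itself contains no recency discount, is easy to demonstrate: for independent observations, the posterior depends only on the product of the likelihoods, not on the order they arrived in. A minimal sketch, reusing the hypothetical per-decade rates from the footnote:

```python
from functools import reduce

def update(prior, likelihoods):
    """One Bayes step: multiply prior over hypotheses by each likelihood, renormalize."""
    post = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(post)
    return [p / z for p in post]

# Two hypotheses about a per-decade event rate (hypothetical numbers)
rates = [0.33, 0.10]

def decade(event):
    # Likelihood of one decade's observation under each hypothesis
    return [r if event else 1 - r for r in rates]

prior = [0.5, 0.5]
# The same three observations, fed in two different orders
a = reduce(update, [decade(True), decade(False), decade(False)], prior)
b = reduce(update, [decade(False), decade(False), decade(True)], prior)
# The posteriors agree: the dramatic event counts the same whether it
# happened first or last. "Don't update much on recent big events" is a
# policy choice layered on top of the math, not part of the math.
```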

    In a perfect world, people would predict distributions beforehand, update a few percent on a dramatic event, but otherwise continue pursuing the policy they had agreed upon long before.

    YEAH BECAUSE IT’S A PERFECT WORLD YOU DINGUS.

    • Tar_Alcaran@sh.itjust.works · 11 months ago

      Complete sidenote, but I hate how effective altruism has gone from “charities should spend more money on their charity and not on executive bonuses, here are the ones that don’t actually help anyone” to “I believe I will save infinity humans by colonizing Mars, so you can just starve to death today”.

      • swlabr@awful.systems · 11 months ago

        I suspect a large portion of people in EA leadership were already on the latter train and just posturing as the former. The former is actually kinda problematic in its own way! If a problem were solvable purely by throwing money at it, then what would be the need for a charity at all?