Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • scruiser@awful.systems · 8 points · 16 hours ago

    Rationalist Infighting!

    tl;dr: one of the MIRI-aligned rationalists (Rob Bensinger) complained that EA actually increased AI risk in the long run by promoting OpenAI and then Anthropic. Scott Alexander responded aggressively, basically saying they are entirely wrong and also bad at public communication! Various lesswrongers weigh in, seemingly blind to the irony and hypocrisy!

    Some highlights from the quotes of the original tweets and the lesswronger comments on them:

    • Scott Alexander tries blaming Eliezer for hyping up AI and thus contributing to OpenAI in the first place. Just a reminder: Scott is one of the AI 2027 authors, so he really doesn’t have room to complain about rationalists creating crit-hype.

    • Scott Alexander tries claiming SBF was a unique one-off in the rationalist/EA community! (Anthropic’s leadership has been called out on the EA forums and lesswrong for a similar pattern of repeated lying.)

    • Rob Bensinger is indirectly trying to claim that Eliezer/MIRI have been serious, forthright, honest commentators on AI theory and policy, as opposed to Open Phil/EA/Anthropic, which have been “strategic” with their public communication, to the point of dishonesty.

    • habryka is apparently on the verge of crashing out? I can’t tell if they are planning on just quitting twitter or quitting their attempts at leadership within the rationalist community. Quitting twitter is probably a good call no matter what.

    • Loads of tediously long posts, mired in that long-winded rationalist way of talking, full of rationalist in-group jargon for conversations and conflict resolution.

    • Disagreement on whether Ilya Sutskever’s $50 billion startup is going to contribute to AI safety or just continue the race to AGI.

    • Arguments over who is with the EAs vs. Open Philanthropy vs. MIRI!

    • Argument over the definition of gaslighting!

    To be clear, I agree with the complaints about EA and Anthropic, I just also think MIRI has its own similar set of problems. So they are both right: all of the rationalists are terrible at pursuing their nominal goal of stopping AI Doom.

    I did sympathize with one lesswronger’s comment:

    More than any other group I’ve been a part of, rationalists love to develop extremely long and complicated social grievances with each other, taking pages and pages of text to articulate. Maybe I’m just too stupid to understand the high level strategic nuances of what’s going on – what are these people even arguing about? The exact flavor of comms presented over the last ten years?

    • CinnasVerses@awful.systems · 8 points · 12 hours ago

      Old Twitter was terrible for people’s souls. I can only imagine what it is like now that the well-meaning professionals are gone and catturd and Wall Street Apes are the leading accounts.

      • scruiser@awful.systems · 3 points · 2 hours ago

        Old Twitter was terrible for people’s souls.

        It almost makes me feel sorry for the way the rationalists are still so attached to it. But they literally have two different forums (lesswrong and the EA forum), so staying on twitter is entirely their choice; they have alternatives.

        Fun fact! Over the past few years, Eliezer has deliberately cut back his lesswrong posting in favor of posting on twitter, apparently (he’s made a few comments about this choice) because lesswrong doesn’t uncritically accept his ideas and nitpicks them more than twitter does. (How bad do you have to be to not even listen to critique on a website that basically loves you and takes your controversial foundational premises seriously?)

      • istewart@awful.systems · 4 points · 9 hours ago

        I’m willing to go out on a limb and say that short-form social media in general (Twitter and imitators, Instagram, TikTok) is essentially a failed set of media. But I’ll concede that’s like cramming a Zyn pouch in my mouth while making fun of a guy chain-smoking Marlboros.

        • scruiser@awful.systems · 1 point · 36 minutes ago

          I’ve read speculation that in 30-50 years people will have an attitude towards social media that we have towards cigarettes now.

          That would be really nice, but that scenario feels pretty optimistic to me on a few points. For one, the scientists doing research on cigarettes were able to overcome the lobbying influence and paid think tanks of the cigarette companies, and I am worried science as a public institution isn’t in good enough shape to do that nowadays. Likewise, the pushback against cigarettes included a variety of mandatory labeling requirements and sin taxes, and it would take some pretty major shifts for that kind of action to be politically viable. Well, maybe these things are viable in the EU; the US is pretty screwed.

    • CinnasVerses@awful.systems · 7 points · edited · 12 hours ago

      Bonus race pseudoscience quoted by No77e!

      There is a phenomenon in which rationalists sometimes make predictions about the future, and they seem to completely forget their other belief that we’re heading toward a singularity (good or bad) relatively soon. It’s ubiquitous, and it kind of drives me insane. Consider these two tweets:

      Richard Ngo @RichardMCNgo: Hypothesis: We’ll look back on mass migration as being worse for Europe than WW2 was. … high-trust and homogeneous … ethno-religious fractures

      Liv Boeree: Would not be surprised if it turns out that everyone outsourcing their writing to LLMs will have a similar or worse effect on IQ as lead piping in the long run

      (he shares these tweets as photos; I ain’t working harder to transcribe them, and I’m not using a chatbot)