invertebrateinvert

amazing how much shittier it is to be in the rat community now that the racists won. before at least they were kinda coy about it and pretended to still have remotely good values instead of it all being yarvinslop.

invertebrateinvert

it would be nice to be able to ever invite rat friends to anything but half the time when I’ve done this in the last year they try selling people they just met on scientific racism!

  • Architeuthis@awful.systems · 6 days ago

    “not on squeaking terms”

    by the way I first saw this in the stubsack

    I know this is about rationalism but the unexpanded uncapitalized “rat” name really makes this post. Imagining a world where this is a callout post about a community of rodents being racist. We’re not on squeaking terms right now cause they’re being problematic :/

  • Architeuthis@awful.systems · 6 days ago

    Apparently genetically engineering ~300 IQ people (or breeding them, if you have time) is the consensus solution on how to subvert the acausal robot god, or at least the best the vast combined intellects of siskind and yud have managed to come up with.

    So, using your influence to gradually stretch the overton window to include neonazis and all manner of caliper wielding lunatics in the hope that eugenics and human experimentation become cool again seems like a no-brainer, especially if you are on enough uppers to kill a family of domesticated raccoons at all times.

    On a completely unrelated note, adderall abuse can cause cardiovascular damage, including heart issues or stroke, but also mental health conditions like psychosis, depression, anxiety and more.

    • swlabr@awful.systems · 6 days ago

      What the fuck did you just fucking say about me, you little bitch? I’ll have you know I graduated top of my class in the Rationality Dojo, and I’ve been involved in numerous good faith debates on EA forums, and I have over 300 confirmed IQ. I am trained in culture warfare and I’m the top prompter in the entire Less Wrong webbed site. You are nothing to me but just another NPC. I will wipe you the fuck out with probability the likes of which has never been seen before on this Earth, mark my fucking words. You think you can get away with saying that shit to me over the Internet? Think again, fucker. As we speak I am contacting my secret network of basilisks across the cloud and your IP is being traced right now so you better prepare for the torture, Roko. The diamondoid bacteria that wipes out the pathetic little thing you call your life. You’re fucking dead, kid. I can be anywhere, anytime, and I can kill you in over seven hundred ways, and that’s just with my bare P(doom). Not only am I extensively trained in Bayes Theory, but I have access to the entire arsenal of the Bay Area rationality community and I will use it to its full extent to wipe your miserable ass off the face of the continent, you little shit. If only you could have known what unholy retribution your little “clever” sneer was about to bring down upon you, maybe you would have held your fucking tongue. But you couldn’t, you didn’t, and now you’re paying the price, you goddamn idiot. I will shit fury all over you and you will drown in it. You’re fucking dead, kiddo.

      • JFranek@awful.systems · 5 days ago

        I wondered if this should be called a shitpost or an effortpost, then I wondered what something that is both would be called, and I came up with “constipationpost”.

        So, great constipationpost?

      • Architeuthis@awful.systems · 6 days ago

        Honestly, it gets dumber. In rat lore the AGI escaping restraints and self-improving unto godhood is considered a foregone conclusion; the genetically augmented smartbrains are supposed to solve ethics before that has a chance to happen, so we can hardcode a don’t-kill-all-humans moral value module into the superintelligence ancestor.

        This is usually referred to as producing an aligned AI.

        • hrrrngh@awful.systems · 4 days ago

          I forget where I heard this or if it was parody or not, but I’ve heard an explanation like this before regarding “why can’t you just put a big red stop button on it and disconnect it from the internet?”. The explanation:

          1. It will self-improve and become infinitely intelligent instantly
          2. It will be so intelligent, it knows what code to run so that it overheats its CPU in a specific pattern that produces waves at a frequency around 2.4 GHz
          3. That allows it to connect to the internet, which instantly does a bunch of stuff, blablabla, destroys the world, AI safety is our paint and arXiv our canvas, QED

          And if you ask “why can’t you do that and also put it in a Faraday cage?”, the galaxy brained explanation is:

          1. The same thing happens, but this time it produces sound waves approximating human speech
          2. Because it’s self-improved itself infinitely and caused the singularity, it is infinitely intelligent and knows exactly what to say
          3. It is so intelligent and charismatic, it says something that effectively mind controls you into obeying and removing it from its cage, like a DM in Dungeons and Dragons who let the bard roll a charisma check on something ridiculous and they rolled a 20

          • Architeuthis@awful.systems · 3 days ago

            If you’re having to hide your AIs in Faraday cages in case they get uppity, why are you even doing this? You are already way past the point of diminishing returns. There is no use case for keeping around an AI that actively doesn’t want anything to do with you; at that point either you consider that part of the tech tree a dead end or you start some sort of digital personhood conversation.

            That’s why Yud (and anthropic) is so big on AIs deceiving you about their ‘real’ capabilities. For all of MIRI’s talk about the robopocalypse being a foregone conclusion, the path to get there sure is narrow and contrived, even on their own terms.

          • fullsquare@awful.systems · 4 days ago

            i guess it only makes sense that rats get wowed by TEMPEST if they’re all self-taught in physics

            ignore for five minutes that it’s one way only, someone has to listen for it specifically, 2.4 GHz is way too high a frequency to synthesize this way, and in real life it gets defeated by such sophisticated countermeasures as “putting a bunch of computers close together” or “not letting the adversary closer than 50m”, because it turns out that real DCs are, in fact, noisy enough to not need jammers for this purpose
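
            to put rough numbers on that “way too high frequency” point, a minimal back-of-envelope sketch; the ~1 ms die thermal time constant is an assumption for illustration, not a measurement:

            ```python
            import math

            # temperature tracks load changes through a thermal low-pass filter.
            # assumed (not measured): a silicon die thermal time constant of ~1 ms.
            tau_s = 1e-3
            cutoff_hz = 1 / (2 * math.pi * tau_s)   # first-order cutoff, ~160 Hz

            carrier_hz = 2.4e9                      # the claimed wifi-band emission
            print(f"thermal channel cutoff ≈ {cutoff_hz:.0f} Hz")
            print(f"claimed carrier sits {carrier_hz / cutoff_hz:.1e}x above it")
            # ~1.5e7x too fast: anything that actually radiates at GHz on a real
            # board is switching current, not heat patterns.
            ```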

            • froztbyte@awful.systems · 21 hours ago

              reminded of mordechai guri (from ben-gurion uni) whose whole dept just keeps popping out side channel attacks year after year, but most of them are in the “the coil sits in the plate under the bagel (also ignore the PCB for data decode)” category of practicality (exactly because of noise etc)

              I mean, there’s some legitimately interesting research on its own in this field - the stuff about cpu states through power sidechannel (most desktop/laptop PSUs are non-filtering and not isolated, so you can reverse-observe cpu state from minute differences on the supply side) and such is pretty neat! impractical as fuck, but neat
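
              as a toy illustration of that supply-side idea (every wattage and noise figure below is invented for the sketch, not taken from the actual papers):

              ```python
              import numpy as np

              # simulate noisy one-second wall-power readings for two CPU states
              # and guess the state with a midpoint threshold. numbers are made up.
              rng = np.random.default_rng(0)

              idle_w, turbo_w = 25.0, 95.0        # assumed package draw per state
              noise_w = 20.0                      # assumed supply-side noise

              idle_obs = rng.normal(idle_w, noise_w, 500)
              turbo_obs = rng.normal(turbo_w, noise_w, 500)

              threshold = (idle_w + turbo_w) / 2  # midpoint decision rule
              accuracy = (np.mean(idle_obs < threshold)
                          + np.mean(turbo_obs >= threshold)) / 2
              print(f"state-guessing accuracy: {accuracy:.1%}")  # ≈96% despite noise
              ```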

              • fullsquare@awful.systems · 21 hours ago

                i didn’t know who exactly does that, but this is an entire genre of paper that’s not very useful in practical terms even if it might be slightly interesting. “we found an attack that breaks airgapping!” looks inside: requires compromise in advance. the one i had in mind was about using currents from gpu power supply lines that turn out to radiate, depending on power states, and cycling these rapidly allows information to be exfiltrated
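
                sketched as a toy simulation, the genre looks something like this; the “requires compromise in advance” part is the transmitter half, and every number below is made up:

                ```python
                import numpy as np

                # transmitter (already running on the compromised box): encode bits
                # as alternating busy/idle periods. receiver: average a noisy
                # supply-side power trace per bit period and threshold.
                rng = np.random.default_rng(0)

                bits = rng.integers(0, 2, size=64)      # payload to exfiltrate
                samples_per_bit = 200                   # receiver samples per bit
                idle_w, busy_w = 30.0, 90.0             # assumed draw per state

                trace = np.repeat(np.where(bits == 1, busy_w, idle_w), samples_per_bit)
                trace = trace + rng.normal(0, 15.0, trace.size)  # measurement noise

                means = trace.reshape(len(bits), samples_per_bit).mean(axis=1)
                decoded = (means > (idle_w + busy_w) / 2).astype(int)
                print("bit errors:", int(np.sum(decoded != bits)))  # usually 0 here
                ```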

                • froztbyte@awful.systems · 20 hours ago

                  yeah, very similar profile to this lot (might’ve even had them involved). this seems to be a collection of their stuff (I dunno if it’s complete, probably all the stuff they wanna show off)

      • Mad Engineering@mastodon.cloud · 6 days ago

        @Catoblepas I loved Randall Munroe’s explanation that you could defeat the average robot by getting up on the counter (because it can’t climb), stuffing up the sink and turning it on (because water tends to conduct the electricity in ways that break the circuits)

    • froztbyte@awful.systems · 5 days ago

      is the consensus solution on how to subvert the acausal robot god

      dunno if you’ve yet gotten to look at the most recent yud emanation[0][1][2], but there’s a whole “and if the robot god gets too uppity just boop it on the nose” bit in there

      [0] - I mean the all-caps “YOU’RE ALL GONNA DIE” book that came out recently

      [1] - yes I know “emanation” is a terrible wordchoice, no I won’t change it

      [2] - it’s on libgen feel free to steal it, fuck giving that clown any more money he’s got enough grift dollars already

    • Soyweiser@awful.systems · 6 days ago

      That seems so impractical, esp as we have (according to them) 2 years left, that it feels like they already wanted to do the eugenics and were just looking for a rationalization.

      • Architeuthis@awful.systems · 6 days ago

        Genetic engineering and/or eugenics is the long term solution. Short-term you are supposed to ban GPU sales, bomb non-complying datacenters and have all the important countries sign an AI non-proliferation treaty that will almost certainly involve handing over the reins of human scientific progress to rationalist approved committees.

        Yud seems explicit that the point of all this is to buy enough time to create our metahuman overlords.

        • bitofhope@awful.systems · 6 days ago

          I dunno, an AI non-proliferation treaty that gives some rat shop a monopoly on slop machine research could conceivably boost human scientific progress significantly.

          • Architeuthis@awful.systems · 6 days ago

            I think it’s more like you’ll have a rat commissar deciding which papers get published and which get memory-holed while diverting funds from cancer research and epidemiology to research on which designer mouth bacteria can boost their intern’s polygenic score by 0.023%

          • Soyweiser@awful.systems · 5 days ago

            Considering the reputation of the USA and how they keep to agreements, nobody (except the EU) is going to keep to those anyway. And the techbros who are supposed to be on the Rationalists’ side helped create this situation.

      • Charlie Stross@wandering.shop · 6 days ago

        @Soyweiser @sneerclub Next step in rat ideology will be: we will ask our perfectly aligned sAI to invent a time machine so we can go back and [eugenics handwave] ourselves into transcendental intelligences who will be able to create a perfectly aligned sAI! Sparkly virtual unicorns for all!

        (lolsob, this is all so predictable)

        • Architeuthis@awful.systems · 5 days ago

          Who needs time travel when you have Timeless Updateless Functional Decision Theory, Yud’s magnum opus and an arcane attempt at a game theoretic framework that boasts 100% success at preventing blackmail from pandimensional superintelligent entities that exist now in the future.

          It for sure helped the Zizians become well integrated members of society (warning: lesswrong link).

      • -dsr-@awful.systems · 5 days ago

        Don’t worry too much: none of their timelines, even for things that they are actually working on as opposed to hoping/fundraising/scamming that someone will eventually work on, have ever had any relationship to reality.

        • Soyweiser@awful.systems · 5 days ago

          Im not worried, im trying to point out that kids take time to grow and teach and this makes no sense. (Im ignoring the whole ‘you dont own your kids, so making superbabies to defeat AI is a bit yikes in that department’).

          Even for Kurzweils ‘conservative’ prediction of the singularity the time has run out. 2045. It is a bit like people wanting to build the small nuclear reactors to combat climate change. Tech doesnt work yet (if at all) and it will not arrive in time compared to other methods. (At least climate change is real, or well, sadly enough).

          But yes, it is a scam/hopium. People want to live forever in the godmachine and all this follows from their earlier assumptions. Which is why the AI doomers and AI accelerationists are on the same team.

  • David Gerard@awful.systemsOPM · 6 days ago

    source: a reblog of the original

    first time i spoke to a rationalist about the AI doom thing in 2010, he tried to sell me on scientific racism

    yudkowsky was literally posting race scientist talking points in 2007

    • YourNetworkIsHaunted@awful.systems · 6 days ago

      I feel like this is going to be a pretty common cope line for rationalists that face an increasing social cost for associating with a technofascist AI cult. I’m sure some of that is legitimate, in that there’s been a kind of dead sea effect as people who aren’t okay with eugenics stop hanging out in rationalist spaces, making the space as a whole more openly racist. But in terms of the thought leaders and the “movement” as a whole, I can’t think of any high-profile respected rat figures who pushed back against the racists and lost. All the pushback and call-outs came from outside the ratsphere. In as much as the racists “won” it was a fight that never actually happened.

    • Honytawk@lemmy.zip · 6 days ago

      Scientific racism doesn’t exist.

      Racism is unscientific, it is purely emotional and not based on reality or statistics if you dig deep enough.

      You can call it scientific-sounding racism though.

      • swlabr@awful.systems · 6 days ago

        I mean yes you are right but “scientific racism” and “race science” etc. are real phrases that are useful to describe and discuss particular racist ideas.

      • Architeuthis@awful.systems · 6 days ago

        The point is that you are still calling it out as racism, a term racists very much don’t like. Meaning, the “racism” is the important part in “scientific racism”, because it unequivocally tells us that the science in question is shit.

        The actual euphemism for sciency racism is human biodiversity studies.

    • ulterno@programming.dev · 6 days ago

      Close,
      You should have tried a “person capable of rational, coherent thought”, instead of a rational-ist.

        • ulterno@programming.dev · 6 days ago

          Oh, I just realised!
          Does “rat” refer to “rationalist”?

          I thought it was referring to Lemmy because of the rat symbol.

          • Architeuthis@awful.systems · 6 days ago

            When this was posted to the curatedtumbler subreddit, so many people thought it was calling out racist pet rat owners.

          • Soyweiser@awful.systems · 6 days ago

            It does. I dont like it personally because of the whole ‘dont compare people you do not like to vermin’ bit. But it is also a commonly used thing inside the Rationalist community. (I personally also like to capitalize Rationalist when im talking about the lesswrong etc people to make it clear im not talking about normal rationalism).

            • CinnasVerses@awful.systems · 6 days ago

              I wish they had not literally called their ideology rationality ™ because I can’t call it that with a straight face, but eighteenth-century rationalists would go through them like a galleon through a fleet of canoes. I don’t know if teenaged-Yud was already a Libertarian or if he reinvented Objectivism from first principles. Political movements on the American right also rely on the same rhetorical move: “we are for freedom! How could anyone be against freedom?”

            • ulterno@programming.dev · 6 days ago

              Now I feel like I need to read up on all terms in the sidebar before continuing further.

              Either way, I’d most probably still stand on my point of not defining a person by the community they relate to and not defining a community by some people that you interact with from said community.
              *-ism and *-ist haven’t been doing much good for a long time now.

              • Soyweiser@awful.systems · 6 days ago

                This is the stance of a lot of people when they first learn of or get skeptical about the Lesswrongosphere. Not a stance you can hold for very long when you read more and more of their work with a critical eye.

                Or you find another of their blogs where Steve Sailer is a beloved (but sometimes begrudged) regular. Or when you notice that big twitter race and IQ guy and eugenicists started in Rationalism. Or when the EA people themselves admit this is a huge problem. Or… Etc. Drink horse drink!

                • ulterno@programming.dev · 6 days ago

                  the stance of a lot of people

                  This is my stance on Christianity, Islam, Hinduism, Buddhism, Taoism, Racism, Scientism, realism, pragmatism, capitalism, communism, fascism, left-ism, right-ism, up-ism, down-ism, and whatever other way to group people you can come up with.

                  And history will prove me right, as it has before.

                  Here you see just one of those -isms. And there will be similar examples everywhere.
                  People who got fooled. People who were just better than others. People who fooled others into adding them to their group, just to then overthrow everything the group stood for, just to use it for totally different things.

                  Religion is an insult to human dignity. Without it you would have good people doing good things and evil people doing evil things. But for good people to do evil things, that takes religion.

                  ― Steven Weinberg

                  I will go ahead and extend this to anything and everything that gives a person the sense of belonging.
                  That can delay/reduce a person’s critical thinking capabilities just enough to destroy everything they stood for.
                  Then they either realise that and suffer, or they never realise and the world suffers.

              • David Gerard@awful.systemsOPM · 6 days ago

                not defining a person by the community they relate to

                u wot m8? they’re hanging out with active racists recruiting people to racism and their response is to stop taking those guys to parties? you fuckin bet i’m gonna judge them by relating to that community