Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • gerikson@awful.systems · 2 days ago

    Some Rat content got shared on HN, and the rats there are surprised and outraged not everyone shares their deathly fear of the AI god:

    https://news.ycombinator.com/item?id=45451971

    “Stop bringing up Roko’s Basilisk!!!” they sputter https://news.ycombinator.com/item?id=45452426

    “The usual suspects are very very worried!!!” - https://news.ycombinator.com/item?id=45452348 (username ‘reducesuffering’ checks out!)

    “Think for at least 5 seconds before typing.” - on the subject of pulling the plug on a hostile AI - https://news.ycombinator.com/item?id=45452743

    • corbin@awful.systems · 1 day ago

      The original article is a great example of what happens when one only reads Bostrom and Yarvin. Their thesis:

      If you claim that there is no AI-risk, then which of the following bullets do you want to bite?

      1. If a race of aliens with an IQ of 300 came to Earth, that would definitely be fine.
      2. There’s no way that AI with an IQ of 300 will arrive within the next few decades.
      3. We know some special property that AI will definitely have that will definitely prevent all possible bad outcomes that aliens might cause.

      Ignoring that IQ doesn’t really exist beyond about 160-180 depending on population choice, this is clearly an example of rectal philosophy that doesn’t stand up to scrutiny. (1) is easy, given that the people verified to be high-IQ are often wrong, daydreaming, and otherwise erroring like humans; Vos Savant and Sidis are good examples, and arguably the most impactful high-IQ person, Newton, could not be steelmanned beyond Sherlock Holmes: detached and aloof, mostly reading in solitude or being hedonistic, occasionally helping answer open questions but usually not even preventing or causing crimes. (2) is ignorant of previous work, as computer programs which deterministically solve standard IQ tests like RPM and SAT have been around since the 1980s yet are not considered dangerous or intelligent. (3) is easy; linear algebra is confined in the security sense, while humans are not, and confinement definitely prevents all possible bad outcomes.

      Frankly I wish that they’d understand that the capabilities matter more than the theory of mind. Fnargl is one alien at 100 IQ, but he has a Death Note and goldlust, so containing him will almost certainly result in deaths. Containing a chatbot is mostly about remembering how systemctl works.

      • blakestacey@awful.systems (OP) · 1 day ago

        If a race of aliens with an IQ of 300 came to Earth

        Oh noes, the aliens scored a meaningless number on the eugenicist bullshit scale, whatever shall we do

        Next you’ll be telling me that the aliens can file their TPS reports in under 12 parsecs

    • BlueMonday1984@awful.systems · 1 day ago

      “Think for at least 5 seconds before typing.” - on the subject of pulling the plug on a hostile AI - https://news.ycombinator.com/item?id=45452743

      Read that last one against my better judgment, and found a particularly sneerable line:

      And in this case we’re talking about a system that’s smarter than you.

      Now, I’m not particularly smart, but I am capable of a lot of things AI will never achieve. Like knowing something is true, or working out a problem, or making something which isn’t slop.

      Between this rat and Saltman spewing similar shit on Politico, I have seen two people try to claim text extruders are smarter than living, thinking human beings. Saltman I can understand (he is a monorail salesman who lies constantly), but seeing someone who genuinely believes this shit is just baffling. Probably a consequence of chatbots destroying their critical thinking and mental acuity.

      • Soyweiser@awful.systems · 1 day ago

        There have been a lot of cases in history of smart people being bested by the dumbest people around, who just had more guns/a gun/copious amounts of meth/a stupid idea but got lucky once, etc.

        I mean, if they are so smart, why are they stuck in a locker?

        • blakestacey@awful.systems (OP) · 1 day ago

          It’s practically a proverb that you don’t ask a scientist to explain how a “psychic” is pulling off their con, because scientists are accustomed to fair play; you call a magician.