A straightforward dismantling of AI fearmongering videos uploaded by Kyle “Science Thor” Hill, Sci “The Fault in our Research” Show, and Kurz “We’re Sorry for Summarizing a Pop-Sci Book” Gesagt over the past few months. The author is a computing professional, but their take is fully in line with what we normally post here.

I don’t have any choice sneers. The author is too busy hunting for whoever is paying SciShow and Kurzgesagt for these videos. I do appreciate that they repeatedly point out that there is allegedly a lot of evidence of people harming themselves or others because of chatbots. Allegedly.

  • V0ldek@awful.systems · 11 hours ago

    This is actually a very good channel, holy shit. Most of the content seems to boil down to “Clean Code sucks, so-called ‘10x developers’ are prima donnas who make the world worse, testing is important, no, more important than that”. Old dude in the trenches since the 80s spitting the same straight facts I’ve been trying to teach everyone who’d listen for the measly 7 years of my career.

  • V0ldek@awful.systems · 1 day ago

    Wasn’t there a big deal about Kurzgesagt being associated with shady rationalist-like nonsense a long time ago? I remember my normie friends being like “what a shame, I thought it was such a good channel”…

    Haven’t heard about the other two but always happy to discover more popular wrong people to sneer at

    • Architeuthis@awful.systems · edited · 21 hours ago

      They made a pro-longtermist video in association with Open Philanthropy a few years back, The Last Human or something like that; the summary was pretty open about the connection.

      I don’t think the shadiness is specific to rationalism, see also that bizarre Kurzgesagt video claiming it’s scientifically impossible to lose weight by exercising, which coincided with the height of Ozempic’s hype.

      edit: The Last Human came out in 2022, the same year the MacAskill book arguing for longtermism was published, what a coinkidink.

  • corbin@awful.systems (OP) · 4 days ago

    The author also proposes a framework for analyzing claims about generative AI. I don’t know if I endorse it fully, but I agree that each of the four talking points represents a massive failure of understanding. Their LIES model is:

    • Lethality: the bots will kill us all
    • Inevitability: the bots are unstoppable and will definitely be created in the future
    • Exceptionalism: the bots are wholly unlike any past technology and we are unprepared to understand them
    • Superintelligence: the bots are better than people at thinking

    I would add to this Plausibility, or Personhood, or Personality: the incorrect claim that the bots are people. Maybe call it PILES.

      • swlabr@awful.systems · 4 days ago

        Hey while we’re here, I propose two more letters:

        S, standing for “stochastic parrot ignorance,”

        C, standing for “Chinese room does not constitute thought,”

        Now we can have ASS LICE