Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. What a year, huh?)

  • mirrorwitch@awful.systems · 23 minutes ago (edited)

    Copy-pasting my tentative doomerist theory of generalised “AI” psychosis here:

    I’m getting convinced that in addition to the irreversible pollution of humanity’s knowledge commons, and in addition to the massive environmental damage, and the plagiarism/labour issues/concentration of wealth, and other well-discussed problems, there’s one insidious damage from LLMs that is still underestimated.

    I will make without argument the following claims:

    Claim 1: Every regular LLM user is undergoing “AI psychosis”. Every single one of them, no exceptions.

The Cloudflare person who blog-posted self-congratulations about their “Matrix implementation” that turned out to be mere placeholder comments is on the same continuum as the people whom the chatbot convinced they’re Machine Jesus. The difference is one of degree, not kind.

    Claim 2: That happens because LLMs have tapped by accident into some poorly understood weakness of human psychology, related to the social and iterative construction of reality.

    Claim 3: This LLM exploit is an algorithmic implementation of the feedback loop between a cult leader and their followers, with the chatbot performing the “follower” role.

    Claim 4: Postindustrial capitalist societies are hyper-individualistic, which makes human beings miserable. LLM chatbots exploit this deliberately by artificially replacing having friends. It is not enough for them to generate code; they make the bots feel like someone you’re talking to—they pretend a chatbot is a person. This is a predatory business practice that reinforces rather than solves the loneliness epidemic.

    n.b. while the reality-formation exploit is accidental, the imaginary-friend exploit is by design.

    Corollary #1: Every “legitimate” use of an LLM would be better done by another human being you talk to (for example, a human coding tutor or trainee dev rather than Claude Code). By “better” I mean: it would create more quality, more reliably, with prosocial rather than antisocial costs, while making everybody happier. What LLMs offer instead is speed, quantity, and convenience, at the price of atrophying empathy.

    Corollary #2: Capitalism had already created artificial scarcity of friends, so that working communally was artificially hard. LLMs made it much worse, in the same way that an abundance of cheap fast food makes it harder for impoverished folk to reach nutritional self-sufficiency.

    Corollary #3: The combination of claim 4 (we live in individualist loneliness hell) and claim 3 (LLMs are something like a pocket cult follower) will have absolutely devastating sociological effects.

  • mirrorwitch@awful.systems · 22 minutes ago (edited)

    OT: the respiratory illness I’ve had for five days just tested positive for Covid for the first time.

    My symptoms are fairly mild, probably because I got a vaccine booster three months ago. But I’m trying to learn more about these recent “swallowing razors” variants and dang! the online situation is bad. Finding reliable medical information on the post-slop, post-Trump Internet is a nightmare.

    • nightsky@awful.systems · 11 minutes ago

      Have a quick recovery! It sucks that society has collectively given up on trying to mitigate its spread.

    • Soyweiser@awful.systems · 34 minutes ago

      Get well soon. Yeah, I follow some people on bsky who have different opinions on Covid, and even there it is bad. (Doesn’t help that the one real expert I follow who says ‘Covid is over’ (to massively oversimplify his points) treats any concern that it isn’t over as if people are antivaxxers and anti-science. I don’t want to get involved, but several times I was close to replying ‘you are quite strawmanning their position and reading things into what they are saying which they are not’, but that would just get me blocked/called a covid truther or something, so why bother.)

  • Soyweiser@awful.systems · 1 hour ago (edited)

    Finally read Greg Egan’s Permutation City, and looked at the LW discussion of the book, and I feel like they missed a lot (and why are they talking about the ‘dust theory’ as if it were real?). Oof, the wikipedia page on the themes and setting has a similar problem, however. It is like they all ignored the second half of the book and the themes contained within it; not one mention of the city being called Elysium, for example.

    And of course Zack_M_Davis thinks the plotline of the guy who killed the drug dealer (not a sex worker/prostitute, lw people, please, the text makes this pretty clear) he was in a lowkey romantic relationship with should have been cut (also not mentioned on the wikipedia page, iirc).

    Also, nobody seems to talk about the broken sewage pipe.

  • rook@awful.systems · 1 hour ago

    Just seen a clip of aronofsky’s genai revolutionary war thing and it is incredibly bad. Just… every detail is shit. Ways in which I hadn’t previously imagined the uncanny valley would intrude. Even if it weren’t for the simulated flesh golems, one of whom seems to be wearing anthony hopkins’ skin as a clumsy disguise, the framing and pacing just feel like the model was trained on endless adverts and corporate talking-head videos, and either it was impossible to edit, or none of the crew have any idea what even mediocre films look like.

    I also hadn’t appreciated before that genai lip sync/dubbing was just embarrassing. I think I’ve only seen a couple of very short genai video clips before, and the most recent at least 6 months ago, but this just seems straight up broken. Have the people funding this stuff ever looked at what is being generated?

    https://bsky.app/profile/ethangach.bsky.social/post/3mdljt2wdcs2v

  • BlueMonday1984@awful.systems (OP) · 2 hours ago

    New blogpost from Drew DeVault, titled “The cults of TDD and GenAI”. As the title suggests, it’s drawing comparisons between how people go all-in on TDD (test-driven development) and how people go all-in on slop machines.

    It’s another post in the genre of “why did tech fall for AI so hard” that I’ve seen cropping up, in the same vein as mhoye’s Mastodon thread and Iris Meredith’s “The problem is culture”.

  • nfultz@awful.systems · 4 hours ago

    Signaling in the Age of AI: Evidence from Cover Letters

    Abstract: We study the impact of generative AI on labor market signaling using the introduction of an AI-powered cover letter writing tool on a large online labor platform. Our data track both access to the tool and usage at the application level. Difference-in-differences estimates show that access to the tool increased textual alignment between cover letters and job posts and raised callback rates. Time spent editing AI-generated cover letter drafts is positively correlated with hiring success. After the tool’s introduction, the correlation between cover letters’ textual alignment and callbacks fell by 51%, consistent with what theory predicts if the AI technology reduces the signal content of cover letters. In response, employers shifted toward alternative signals, including workers’ prior work histories.
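    For anyone unfamiliar with the method the abstract leans on: difference-in-differences compares the change in an outcome for a treated group against the change for a control group over the same period, so that shared trends cancel out. A minimal sketch with made-up numbers (none of these figures come from the paper):

```python
# Difference-in-differences in its simplest two-group, two-period form:
# (treated_after - treated_before) - (control_after - control_before).

def did_estimate(treated_before, treated_after, control_before, control_after):
    """Return the difference-in-differences estimate of a treatment effect."""
    treated_change = treated_after - treated_before
    control_change = control_after - control_before
    return treated_change - control_change

# Hypothetical callback rates (%): applicants with access to the AI cover
# letter tool vs. those without, before and after the tool's introduction.
effect = did_estimate(treated_before=10.0, treated_after=14.0,
                      control_before=10.5, control_after=11.5)
print(effect)  # (14.0 - 10.0) - (11.5 - 10.5) = 3.0 percentage points
```

    The actual paper would estimate this as a regression with controls rather than raw group means, but the subtraction above is the core identification idea.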

  • gerikson@awful.systems · 8 hours ago

    LWer: Heritage Foundation has some good ideas but they’re not enough into eugenics for my taste

    This is completely opposed to the Nietzschean worldview, which looks toward the next stage in human evolution, the Overman. The conservative demands the freezing of evolution and progress, the sacralization of the peasant in his state of nature, pregnancy, nursing, throwing up. “Perfection” the conservative puts in scare quotes, he wants the whole concept to disappear, replaced by a universal equality that won’t deem anyone inferior. Perhaps it’s because he fears a society looking toward the future will leave him behind. Or perhaps it’s because he had been taught his Christian morality requires him to identify with the weak, for, as Jesus said, “blessed are the meek for they shall inherit the earth.” In his glorification of the “natural ecology of the family,” the conservative fails even by his own logic, as in the state of nature, parents allow sick offspring to die to save resources for the healthy. This was the case in the animal kingdom and among our peasant ancestors.

    Some young, BASED Rightists like eugenics, and think the only reason conservatives don’t is that liberals brainwashed them that it’s evil. As more and more taboos erode, yet the one against eugenics remains, it becomes clear that dysgenics is not incidental to conservatism, but driven by the ideology itself, its neuroticism about the human body and hatred of the superior.

    • rook@awful.systems · 8 hours ago

      the conservative… wants… a universal equality that won’t deem anyone inferior.

      perhaps it’s because he had been taught his Christian morality requires him to identify with the weak

      Which conservatives are these. This is just a libertarian fantasy, isn’t it.

      • V0ldek@awful.systems · 3 hours ago

        I had to do a triple take on that “won’t deem anyone inferior” like what the fuck are you talking about. The core of conservatism is the belief in rigid hierarchies! Hierarchies have superiors and inferiors by definition!

        • fiat_lux@lemmy.world · 1 hour ago (edited)

          That depends on if you consider the “inferior” to be human, if they’re even still alive after the eugenics part.

      • mlen@awful.systems · 6 hours ago

        Technically superman is a more correct translation for that word (similarly to how superscript is the thing beyond the script)

  • fiat_lux@lemmy.world · 14 hours ago

    Amazon’s latest round of 16k layoffs for AWS was called “Project Dawn” internally, and the public line is that the layoffs are because of increased AI use. AI has become useful, but as a way to conceal business failure. They’re not cutting jobs because their financials are in the shitter, oh no, it’s because they’re just too amazing at being efficient. So efficient they sent the corporate fake condolences email before informing the people they’re firing, referencing a blog post they hadn’t yet published.

    It’s Schrödinger’s Success. You can neither prove nor disprove the effect of AI on the decision, nor whether the layoffs indicate good management or fundamental mismanagement. And the media buys into it with headlines like “Amazon axes 16,000 jobs as it pushes AI and efficiency” that are distinctly noncommittal on how 16k people could possibly have been redundant at a tech company that’s supposed to be a beacon of automation.

    • sansruse@awful.systems · 4 hours ago

      They’re not cutting jobs because their financials are in the shitter

      Their financials are not even in the shitter! Except insofar as their increased AI capex isn’t delivering returns, so they need to massage the balance sheet with rolling layoffs to stop the feral hogs from clamoring and stampeding on the next quarterly earnings call.

      • fiat_lux@lemmy.world · 2 hours ago

        In retrospect the word quarterlies is what I should have chosen for accuracy, but I’m glad I didn’t purely because I wouldn’t have then had your vivid hog simile.

  • rook@awful.systems · 15 hours ago

    “AI blunder in Aurskog-Høland [Norway] – children received water bills”

    The sources linked are all in norwegian, so you’ll have to translate them yourself if you’re interested, but Patricia’s summary seems reasonable. The government authority in question had to hire extra people to undo the mess that the ai system caused. There’s a commercial vendor involved somewhere, but if they were named I didn’t spot it.

    https://bsky.app/profile/did:plc:ybtyn5l4nljys46ijqtpldaw/post/3mdk7awabwk23

  • corbin@awful.systems · 21 hours ago

    Kyle Hill has gone full doomer after reading too much Big Yud and the Yud & Soares book. His latest video is titled “Artificial Superintelligence Must Be Illegal.” Previously, on Awful, he was cozying up to effective altruists and longtermists. He used to have a robotic companion character who would banter with him, but it seems like he’s no longer in that sort of jocular mood; he doesn’t trust his waifu anymore.

    • lurker@awful.systems · 19 hours ago (edited)

      kinda depressing seeing people fall for Yud’s shtick without realising all the other bullshit (though in fairness the average person is not aware of the many years of rationalism lore). thankfully people in the comment section are more skeptical, but still cautious, which I think is a fair reaction to all this

    • Evinceo@awful.systems · 20 hours ago

      Wasn’t he on YouTube trying to convince people that Nuclear Energy is Fine Actually? Figures.

    • blakestacey@awful.systems · 1 day ago (edited)

      Chris Lintott (@chrislintott.bsky.social‬):

      We’re getting so many journal submissions from people who think ‘it kinda works’ is the standard to aim for.

      Research Notes of the AAS in particular, which was set up to handle short, moderated contributions especially from students, is getting swamped. Often the authors clearly haven’t read what they’re submitting (descriptions of figures that don’t exist or don’t show what they purport to).

      I’m also getting wild swings in topic. A rejection of one paper will instantly generate a submission of another, usually on something quite different.

      Many of these submissions are dense with equations and pseudo-technological language which makes it hard to give rapid, useful feedback. And when I do give feedback, often I get back whatever their LLM says.

      Including quintessentially LLM responses like ‘Oh yes, I see that <thing that was fundamental to the argument> is wrong, I’ve removed it. Here’s something else’

      Research Notes is free to publish in and I think provides a very valuable service to the community. But I think we’re a month or two from being completely swamped.

      • Evinceo@awful.systems · 20 hours ago

        people who think ‘it kinda works’ is the standard to aim for

        I swear that this is a form of AI psychosis or something because the attitude is suddenly ubiquitous among the AI obsessed.

        • fullsquare@awful.systems · 9 hours ago (edited)

          they prompted so hard and that’s all they get, so obviously there’s nothing better and they stop at that

          they’ll do anything except actually learn shit or put in effort

      • BlueMonday1984@awful.systems (OP) · 21 hours ago

        It gets worse:

        One of the great tragedies of AI and science is that the proliferation of garbage papers and journals is creating pressure to return to more closed systems based on interpersonal connections and established prestige hierarchies that had only recently been opened up somewhat to greater diversity.