Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
I’m suing Grammarly over its paid AI feature that presented editing suggestions as if they came from me - and many other writers and journalists - without consent.
State law requires consent before someone’s name can be used for commercial purposes.
And here is the complaint, via evacide.
FT reports from Amazon insiders that they’re investigating the role AI-assisted development has played in a spate of recent issues across both the store and AWS.
FT also links to several previous stories they’ve reported on related issues, and I haven’t had the time to breach the paywalls to read further, but the line that caught my eye was this:
The FT previously reported multiple Amazon engineers said their business units had to deal with a higher number of “Sev2s” — incidents requiring a rapid response to avoid product outages — each day as a result of job cuts.
To be honest, this is why I’m skeptical of the argument that the AI-linked job losses are a complete fabrication. Not because the systems are actually there to directly replace the lost workers, but because the decision-makers at these companies seem to legitimately believe that these new AI tools will let their remaining workforce cover any gaps left by the layoffs they wanted to do anyways. It sounds like Amazon is starting to feel the inverse relationship between efficiency and stability, and I expect it’s only a matter of time before the wider economy starts to feel it too. Whether the owning class recognizes what’s happening is, of course, a different story.
So oil prices are down again, and on nothing but a promise from Trump and a promise from the EU. The economy has proved remarkably resilient, to my eye; the attack on Iran is like, wild nonsense number 17 that the USA regime did that I thought would trigger a major recession, and didn’t.
I mean don’t get me wrong, things are much worse now than 3 years ago, clearly. But they’re not like, Great Depression worse. They’re not even 2008 worse. It’s just a certain level of degradation (cost of living is higher, purchasing power is lower, concentration of wealth is higher etc.) that people got used to as the new normal. People can get used to lots of things.
To make the IT analogy, I think the global economy is like Twitter. Sure, it feels like a Jenga tower held up by thoughts and prayers, but it’s holding up. When Musk took over I really did think his catastrophic management philosophy would completely break Twitter, but no, it trudges on. Yes, moderation is now nonexistent, and I’m told it’s down more often, and often in “soft downtime” like notifications not working, or DMs, or some other feature, or it’s working but slow, and so on. But clearly the site is up most of the time and more or less functional. Users just get used to degraded quality as the new normal.
I predict AWS will 1) get slower and costlier thanks to “AI”, with higher downtime, at higher stress for the workers; 2) the leadership will refuse to see or admit or even consciously be aware of this; 3) the worsened services will be the new normal. I predict similar developments for the socioeconomic situation of the world, too; though I’m not ruling out a spiral into complete recession, either.
I somewhat agree, although when the “other shoe drops” and these things start impacting the money men, they may start to realise AI isn’t the magic cure they thought it was (he says, kind of hopefully)
6 hours of downtime for Amazon shopping. A very simple back-of-a-napkin calculation: they made $213.4bn in sales in Q4 2025, so divide that by 90 days and then 24 hours and multiply by 6… We are talking roughly a $0.59bn loss for 6 hours of downtime… That is not an insignificant amount of money. I imagine most bosses would be screaming for heads after losing that much money in sane, non-hyper-scaled businesses.
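Spelling the napkin math out (assuming sales are spread evenly across the quarter):

$$\frac{\$213.4\,\text{bn}}{90 \times 24\ \text{h}} \times 6\ \text{h} \approx \$0.59\,\text{bn}$$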
It’s also a trend that I don’t see stopping without a major structural change. I don’t think there’s a point at which they’re going to say “we’ve cut enough corners and are going to stop risking stability and service degradation.” The principal structure driving the economy, especially in the tech sector, is organized around looking for new corners to cut and insulating the people who make those choices from accountability for their actual consequences.
to follow this one up: there is now a new study about AI agents being dogshit at keeping code working over the long term
Unfortunately the paper structure screams “AI senpai, notice me!”
AI coding agents seem bad at this job for now, but if you optimize for our benchmark…
the Pentagon’s CTO has AI psychosis now. sighhhhhhhhh
The whole argument can just be countered with “if the Pentagon believes Claude is sentient and a danger to the military, then why make a deal with OpenAI to use ChatGPT, another LLM similar to Claude? Wouldn’t that also be a danger of becoming sentient? and why are Pete Hegseth and Donald Trump planning to force Anthropic to comply after 6 months if they believe Claude shouldn’t be in the military?? Why did you ask Anthropic to let you use Claude for mass surveillance and autonomous weapons if you believed it was sentient and a danger??”
It just reeks of bullshit. “uhm actually we made Anthropic a supply chain risk because Claude is actually very dangerous and not because we’re doing banana republic shit to anyone who disagrees with us. we are a very responsible and safe government. please dont impeach trump.”
I wonder if one of the reasons Pete Hegseth is going so hard after Anthropic is that he and other idiots in the Pentagon unironically believe shit like AI 2027 and so want to soft-nationalize the frontier companies so as to control the coming AGI. Considering that one of the uses the DoD allegedly wants LLMs for is fully autonomous weapons, they at the very least have a very distorted view of what the technology is capable of. Or they want an accountability sink so they can kill people with even less accountability. …probably both.
I find it darkly hilarious that the doomer crit-hype is finally coming around to bite them, not in the form of heavy handed shut-it-all-down regulation to stop skynet, but in the form of authoritarian wackos wanting to make sure they are the ones “in charge” of skynet.
I wonder if one of the reasons Pete Hegseth is going so hard after Anthropic is that he and other idiots in the Pentagon unironically believe shit like AI 2027 and so want to soft-nationalize the frontier companies so as to control the coming AGI.
That is absolutely the reason, or at least part of it. See: Pete Hegseth Got His Happy Meal and how AGI-is-nigh doomers own-goaled themselves
It’s possible the attempt to shove AI into every nook and cranny of the Pentagon didn’t especially pan out, and since his face was all over that project, he’s desperate for a scapegoat.
Like, for sure he’d have had the logistics of the entire US Army running smoothly despite the layoffs by now, if it weren’t for the wokies at Anthropic acting up.
Reading comments cause I was bored, and had the misfortune to stumble upon this horribly formatted piece of work allegedly written by Claude
OT: an interesting musing I found on fedi:

Silicon Valley is buzzing about this new idea: AI compute as compensation
These people are genuinely unhinged.
As the recent Harper’s article says:
"…people who should be in The Hague are giving [startups] twenty million dollars. Something bad is gonna happen here, something really fucking bad is gonna happen…”
“Selling your soul to the company store is not just fun, it is also invigorating!”
this is just wages paid in crypto, but adapted to the new era in a way that doesn’t make sense
Man, that Harper’s piece is a full DnD alignment chart of the most online Bay Area weirdos you’ve ever seen.
DAIR, the AI-critical research organization founded by Timnit Gebru, is looking for a communications lead
Revealed: UK’s multibillion AI drive is built on ‘phantom investments’
Previously, on Awful, I predicted that Oracle would be all-in on the bubble:
Microsoft knows that there’s no money to be made here, and is eager to see how expensive that lesson will be for Oracle; Oracle is fairly new to the business of running a public cloud and likely thinks they can offer a better platform than Azure, especially when fueled by delicious Arabian oil-fund money.
But, uh, there’s not going to be any Arabian money while we’re dancing in the desert, blowing up the sunshine. The lawnmower is now running low on gas. Today, Oracle continues to make astoundingly bad business decisions:
Oracle is the only major player funding the AI buildout with debt, carrying over $100 billion on its books while free cash flow has gone negative.
I was not ready
AI was going to give us all universal healthcare but we didn’t believe hard enough and now all we have is this.
Chris Stokel-Walker at Fast Company reports:
High-level information about the private work of students and staff using ChatGPT Edu at several universities can be viewed by thousands of colleagues across their institutions due to a misunderstanding of what is being shared, according to a University of Oxford researcher who identified the issue.
The problem affects Codex Cloud Environments in ChatGPT Edu and exposes the names and some metadata associated with the public and private GitHub repositories that users within a university have connected to their ChatGPT Edu accounts. […] “Anyone at the university, or a large number of people at least—including me—can see a number of projects [people have] been working on with ChatGPT,” says Luc Rocher, an associate professor at the University of Oxford, who identified the issue and raised it with both the University of Oxford and OpenAI through responsible disclosure. He later approached Fast Company after what he felt was an inadequate response from both.
Just one of many reasons that the mere existence of “ChatGPT Edu” means that many people need to be tased in the nads
new development in ontology: “the ontology that makes ai models valuable is american”
“Our lethal capacities. Our ability to fight war.”
These are two different things. But I fear he doesn’t get that.
Actually the race-realism use last week, combined with this one, makes me realize that for them it’s just a fancy way of saying “world-view” [or what they consider to exist, and be true, which is not the craziest use of the word, but I would say unhelpful, and probably a small in-group marker].
It’s just a way of calling biases/prejudice legitimate.
And you know what, inasmuch the models have a “world-view” it IS annoyingly american in many ways. (at least the wrong kind of american.)
I was low-key hoping for a technical philosophical article, which argues that to find any of this shit useful you need a distinctly american understanding of reality.
you gotta give him a morsel of credit, he’s got his buzzword and he’s stickin’ to it
I mean, given how the current guy took a chainsaw to American soft power, industrial capacity, economic prospects, and so on, I guess our wildly overfunded military is probably the only comparative advantage we unambiguously hold onto.
Systemd
Jesus.
I’ve been advocating for a hall of fame of projects that explicitly reject LLMs; ctrl+f “Gentoo” on this very comment thread for the few examples I heard about.
Eh, straight pip with venv and pip-tools for support worked fine anyway. (Wrong uv!) As for systemd… time to look at the BSDs? Was Debian among the anti-slop projects? Would be nice if they took an interest in preventing the slopification of one of their core systems.
Different UV! Libuv is the event loop/scheduler that powers Node.js. Could be a funky new way to compromise a whole bunch of node applications.
Edit: typo - although “nose applications” being compromised sounds bad too.
Ah, thanks! My expectations of node aren’t much affected I guess. Bun.js maybe?
libuv is a very common way to get a portable event loop. If you’re logged into GH and can use their search, you can look at the over fifty packages in nixpkgs depending on it. I used it when I developed (the networking and JIT parts of) the reference implementation for Monte, to give a non-nixpkgs example.
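For anyone who hasn’t touched it, here’s a minimal sketch of what “portable event loop” means in practice: just one timer callback on the default loop (this assumes libuv ≥ 1.0, linked with -luv).

```c
#include <stdio.h>
#include <uv.h>

/* Fires once, one second after the loop starts running. */
static void on_timeout(uv_timer_t *handle) {
    printf("tick\n");
    uv_timer_stop(handle);  /* no active handles left, so uv_run() returns */
}

int main(void) {
    uv_loop_t *loop = uv_default_loop();          /* process-wide default loop */

    uv_timer_t timer;
    uv_timer_init(loop, &timer);                  /* tie the handle to the loop */
    uv_timer_start(&timer, on_timeout, 1000, 0);  /* 1000 ms timeout, no repeat */

    uv_run(loop, UV_RUN_DEFAULT);                 /* blocks until nothing is pending */
    uv_loop_close(loop);
    return 0;
}
```

The same handle-plus-callback pattern covers TCP sockets, pipes, and filesystem events, which is a big part of why so many packages end up depending on it.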
Turns out, that uv also sucks now!
Wew, Cory Doctorow sure is posting through it
https://pluralistic.net/2026/03/12/normal-technology/#bubble-exceptionalism
It’s true that these analogies can be stigmatizing, but they needn’t be. As someone with an autoimmune disorder, I am not bothered by people who describe ICE as an autoimmune disorder in which antibodies attack the host, threatening its very life.
This bothers me more than I can explain.
ICE as autoimmune disorder presupposes that it’s normally a good thing to have ICE around and it’s just malfunctioning as an exceptional state of things. If ICE is an immune system (malfunctional or not), what are we immigrants?
Yeah. When it comes down to it, the libs think the problem with Trump isn’t the fundamentals of what he is doing, it is that he is doing it without decorum or checking all the legal boxes or saying the usual lib pabulum to justify American imperialism. Skipping the legal checks and decorum is also bad, but in fact kids in cages was horrible when Obama was doing it the “right” way.
They’re not vibe-coding mission-critical AWS modules.
and
- It’s worse than that, they’re vibe-coding critical operating system components
It is nuts to deny the experiences these people are having. They’re not vibe-coding mission-critical AWS modules. They’re not generating tech debt at scale:
https://pluralistic.net/2026/01/06/1000x-liability/#graceful-failure-modes
They’re just adding another automation tool to a highly automated practice, and using it when it makes sense. Perhaps they won’t always choose wisely, but that’s normal too. There’s plenty of ways that pre-AI automation tools for software development led programmers astray. A skilled, centaur-configured programmer learns from experience which automation tools they should trust, and under which circumstances, and guides themselves accordingly.
Wow, the whole thing is indefensibly capital-W wrong, just an utterly weird rose-tinted view of the current corporate experience.
centaur-configured programmer
Cory, baby, my dogg, sure “enshittification” was a big hit, but you can’t expect that your rough-draft followups are automatically gold
A skilled, centaur-configured programmer
This is like reading Yud mumbling about “Shoggoths”. It’s giving knight errant, organ-meat eater, Byronic hero, Haplogroup R1b.
Man, due to a weird alignment of the spheres I started reading those Honor Levy excerpts in the voice of Max Payne-style hardboiled narration and it fits weirdly well? Like a bargain version of the same sort of mid-budget semi-affectionate parody of existential angst that’s all tone and minimal substance.
I am retrospectively disturbed by how well “I really came in a fluffer that time” slots into Dorothy Parker’s flow.
I mean she was undoubtedly too much of a lefty for the Thiel set to ever admit her influence, but I feel like that’s the exact type of vibe she’s trying and mostly failing to evoke.
Man, it’s frustrating to see him end up going down this route because the opening part of this is actually one of the better descriptions of AI psychosis I’ve seen, and I appreciate his emphasis on the way the delusion is built up in the sufferer’s mind rather than trying to game out what’s happening “inside” the chatbot. Even his point about how LLMs aren’t bad in exceptional ways for a new technology is pretty cogent. But his insistence on defending his own use of these things (and others who do so in “centaur-configured” ways) rather than thinking about how it interacts with all the relatively normal ways that this technology is wildly destructive is a very conspicuous blind spot.
Like, you can absolutely drive a nail with a phone book, and given the wider surface area it even has the advantage over a traditional hammer of being harder to smash your fingers. An individual craftsman may well decide that this is a useful tool and in some cases worth using over other options. But if the only source of these hammer-books was an industry that relied on massive uncompensated use of creative work passed through exploited third-world labor, ground rainforests to dust to create special “old-growth paper”, placed massive and unsustainable burdens on existing road infrastructure to collect these parts and deliver them, and somehow had been blown into a speculative bubble that represented something like a quarter of the entire US economy by promising that if they created a big enough book then one guy could hammer all the nails at once and they could lay off all the carpenters, I think it’s justifiable to look at the people using it as a normal tool and ask them “what the actual fuck are you doing?” The usage statistics they represent and the user stories they tell are used to justify not addressing any of the harms necessary to enable this tool to exist in its current form, and are largely driving the absurd valuations that keep pumping the bubble. Your individual role in those harms as a small-time user who finds it occasionally useful may be incalculably small, but it is still real.
Like, it feels like I agree with Doctorow on basically all the premises here. He seems to have a decent grasp on how the things actually work (even if he’s wrong about Ollama specifically being an LLM in its own right) and their associated limitations. He draws a decent line separating criticism from criti-hype. He is basically correct about how much of a bastard everyone involved in the industry at a high level is. But maybe because so many of these things aren’t really exceptional (save possibly in their sheer scale) he can’t seem to conceive of a world where things happen any differently, or of the role his actions and words play in reinforcing the status quo even as he writes pretty explicitly about how fucked up that status quo is.
Honestly it makes me think of the finale of his second Martin Hench novel, The Bezzle. After drilling into the business of the private prison operator that is making his friend’s life hell and separating the merely fucked up parts from the things that might actually have consequences if word got to what passes for cops in that tax bracket, he doesn’t go to the papers or start reaching out to the SEC. Instead he goes to the bastard at the head of it all and blackmails him into making his friend’s remaining incarceration less hellish and leaving him alone. And his friend, who started all this by begging for help unraveling this shit, rightly calls Marty a coward for it. There’s something ironic in seeing Doctorow here seemingly make the same judgement: abuse and apathy are sufficiently normal that we shouldn’t even bother to try and make the world better, just find ways to shelter ourselves and the people we care about from the consequences. And hell, I guess even there I’m not immune to it. There are reasons why I’m posting here and not waiting out front of a hotel with some engraved brass. Still, on the continuum of such things I’m disappointed that the guy who wrote that scene is stuck in the normalization blues.
It sucks. :(
Honestly, the article reminds me of Scott Alexander, but succinct. “Here are several true things and an absolutely batshit wrong thing, presented together with equal earnestness.”
The wrong thing being “Believing that LLMs are trash is a mental disorder (not really but wink wink).”
Why do this now, when it’s all coming apart? It’s baffling.
Take “Morgellons Disease,” a psychosomatic belief that you have wires growing in your body, which causes sufferers to pick at their skin to the point of creating suppurating wounds. Morgellons emerged in the 2000s, but the name refers to a 17th-century case-report of a patient who suffered from a similar delusion:
Nitpick, but this is unusually sloppy for Doctorow. 1) People with Morgellons don’t believe they have wires growing out of sores, but fibres (which upon examination turn out to be cotton from clothing). 2) The original Morgellons is a putative children’s disease «wherein they critically break out with harsh Hairs on their Backs, which takes off the Unquiet Symptomes of the Disease, and delivers them from Coughs and Convulsions.» Which is quite different from the modern condition, whose sufferers have skin sores anywhere on the body with fibrous material looking like lint, dandelion fluff etc., and not particularly associated with convulsions. And 3) the association between the two was made by Mary Leitao, a mother who believes her son suffers from the disease, and has gone to countless doctors and media outlets trying to prove it’s real. So it’s an attempt to legitimise the postulated disease by cherry-picking something “historical” that vaguely resembles it.
Kind of wild that the guy who popularized “enshittification” as a term will die on the hill that the technology which drives the industrial enshittification of all human media is fine actually, because some people find the plugins useful.
He knows how LLMs work, right? This really is just cope because he got called out for being weird about using them. Really fucking disappointing
In the original post he kept referring to Ollama like it was an LLM instead of a server app that hosts LLMs so I’d say the jury’s out on that.
edit: Also, throughout this piece he keeps equivocating between local LLMs and their behemoth online counterparts with their heavily proprietary tooling that occasionally wraps them into a somewhat useful product.
I think he assumes that because he can load up a modest speech-to-text model locally and casually transcribe several hours of video resources in somewhat short order (this was apparently his major formative experience with modern AI) it works the same with e.g. coding.
Like, hey gpt-oss please make sense of these ten thousand lines of context without access to a hundred bespoke MCP intermediaries and one or three functioning RAG systems as I watch the token generation rate slow to a trickle while the context window gradually fills up.
throughout this piece he keeps equivocating between local LLMs and their behemoth online counterparts with their heavily proprietary tooling that occasionally wraps them into a somewhat useful product.
This is fundamental to his approach. He believes that technology is inherently liberatory as long as it’s in the hands of the consumer.
This really seems to be the case.
Hey, can’t get that SXSW London (a truly cursed event, but I digress) bag unless you’re willing to say LLMs Are Good, Actually
The one-shotting phenomenon (or how a positive initial experience with the technology seems to lead to a heavily biased view of its merits) should probably be considered a distinct cognitive bias at this point.
Turns out a lot of bright people can’t deal with a technology being utterly subjective in its efficiency, and also with how that’s specifically the part that reduces it to being so narrowly useful as to force the existential question, given the insane resource burn and the socioeconomic disruption that are part and parcel, even if, like Doctorow, you think the rape and pillage of artists’ rights and intellectual property in general isn’t an especially big deal.
Also, local LLMs are hardly extricable from the whole mess; they are basically a byproduct, and updated versions will only keep coming as long as their imperial-sized online counterparts remain a going concern.
It’s gotta be tied to the idea of anchoring, right? Like, the first credible bit of information you have is what sets the tone for everything that comes afterwards. At that point in a sufficiently complicated information ecosystem, confirmation bias kicks in and it’s hard to break out of.
even if, like Doctorow, you think the rape and pillage of artists’ rights and intellectual property in general isn’t an especially big deal.
It’s not that he doesn’t think it’s a big deal. It’s the one thing he’s most consistently cared about for most of his career as an activist. He’s willing to put up with anything else if it circumvents copyright. And that’s why he’s been consistently pushed, I reckon, despite his nominal hostility towards the hands that feed the media ecosystem he flourishes in.
Probably should’ve written ‘not a deal breaker’ instead of ‘not a big deal’.
this is a good post and some of y’all may enjoy it too: https://dotart.blog/cobbles/ai-and-that-guy-at-the-bar
It was very good, and I’m glad I clicked through to the link to Robert Kingett’s story “The Colonization of Confidence”, which deserves its own highlight.
Even if the constant reminders that I’m trapped in the machine are painful.
Hey, he’s posted here before!
That story is rad as hell. I was ready to run through a wall for those folks at the end. Appreciate you, Robert!
I noticed that too, which is an extra reason why I figured I’d drop the link and name in. His posts about receiving an LLM-generated happy birthday are something I think about surprisingly frequently.
I swear every time his stuff floats through here I end up standing as I read it and wildly gesticulating at my living room or ranting extemporaneously to my basement about something it made me think of or feel. After reading this piece I hope that comes off as more complimentary to his work than showing myself to be a freaking weirdo.