Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this. If you’re wondering why this went up late, I was doing other shit)
from Rusty https://www.todayintabs.com/p/a-i-isn-t-people
Imagine you have two machines. One you can open up and examine all of its workings, and if you give it every picture of a cat on the whole internet, it can reliably distinguish cats from non-cats. The other is a black box and it can also reliably distinguish cats from non-cats if you give it half a dozen pictures of cats, some apple sauce, and a hug. These machines sort of do the same thing, but even without knowing how the second one works I am extremely confident in saying it doesn’t work the same way as the first one.
there is some reason to think this way. also keep in mind that a segment of that anti-americanism was funded by sales of iranian oil. not all of course, but houthis wouldn’t be a thing without it, or large parts of hezbollah, for example. of course what people want and how it shakes down after the bombs drop is a different thing entirely, i guess we’ll see, eventually (i assume that decision to strike was already made)
The best thing an unpopular regime can ask for is for the enemy they have been bigging up as literally The Great Satan to start dropping bombs and missiles on the populace that hates it.
“If we bomb people and show their government can’t protect them, they will turn against the government and we will win” has been tried by the Germans on Londoners, the Allies on Germany and Japan, and the US on Serbia, and it didn’t work.
That’s cute, how about you find me a source that isn’t a spooky blob think tank?
Or better yet, enlist and we can rid the world of another Sam Harris fanboy
i don’t give a shit about sam harris. if iranians were broadly fine with theocracy, there wouldn’t be 30k+ dead protesters last month, or major protests every year for a decade. like every other country on earth, you can expect that iran will secularize, except that apostasy or conversion is a capital offense, as is any significant dissent for that matter, so any survey unaffected by self-censorship would be hard to conduct
While there is absolutely a large segment of the Iranian population that isn’t satisfied with the theocratic dictatorship, the same could also have been said of Iraqis who didn’t like the baathists or Afghans who hate the Taliban. Once you start dropping bombs on these people - to say nothing of the violence that necessarily follows a boots-on-the-ground occupation - you’re going to start driving them into the waiting arms of factions that oppose you. Especially because the current administration has shown a less-than-comforting attitude towards civilian casualties, war crimes, and genocide.
Let’s also not lose sight of the role that US and British intervention played in creating the circumstances for the Ayatollahs to come to power in the first place. The Shah wasn’t exactly any kinder to the Iranian people and was a foreign puppet to boot.
Harris’s take only works if, like him, you assume that the fundamental problem with Iran is Islam, rather than actually bothering to look at the history of the country and how it became what it is today. Because in that case once you get the ayatollah out of the way and introduce the light of Science! to the people they’ll immediately become rational civil libertarians and believe exactly the same things he does. The Irreligious Right is exactly as reductive and stupid as the worst evangelicals, but can better use the language of STEM to hide it.
@YourNetworkIsHaunted @fullsquare Yes, absolutely, civilization (or whatever word you like better here) will not happen automatically or magically.
And I’m not finding an answer: How do you /properly/ remove an oppressive theocracy, in such a way that the country has good starting conditions to prosper?
Two things seem clear to me: the theocrats will not go by themselves, and the country will not prosper under them.
@Ardubal @YourNetworkIsHaunted @fullsquare This hasn’t happened in Iran, but oppressive theocracies *have* decayed from inside elsewhere—notably Ireland since 1980 (the difference now is as night and day, yet there was no revolution and no shooting, and the country has prospered). Arguably Spain’s clerico-fascist system went the same way in the 1970s. And so on.
Iran is different, though, in that it faces a violent, powerful external superpower, which indirectly props up the priesthood.
@cstross @YourNetworkIsHaunted @fullsquare OK, but I don’t see the automatism in that direction either. And just letting them simmer in their own little cosmos doesn’t seem very sustainable when they organize and support e. g. Hamas, Hezbollah, and Houthi.
Starting a war now is not the answer, I’m pretty sure, but the question remains.
IBM stocks take a tumble after anthropic release a COBOL skill - the rational market strikes again.
I wrote up my take here but TL;DR - a few markdown files telling Claude it’s an expert at COBOL development aren’t going to unpick decades of risk averse behaviour from bank and government cios. Similar to the SaaSpocalypse this is pure nonsense. Investors don’t tend to let reality dissuade them though.
cobol is old and scary, so a chat bot spitting out cobol that someone without grey hair can’t fully comprehend is enough for them to deem it fully automated and the defeat of the dinosaur. in reality you are right, it won’t move the needle.
I feel like the story of Cassandra would be much more gratifying if she’d had access to powered armor.
404 Media: Meta Director of AI Safety Allows AI Agent to Accidentally Delete Her Inbox
Yue also shared screenshots of her WhatsApp chat with the OpenClaw agent, where she implores it to “not do that,” “stop, don’t do anything,” and “STOP OPENCLAW.”
This is very serious computing and we must all take it very seriously.
this is like the fourth time an AI agent has completely deleted something important (I remember an article about an AI deleting all of a scientist’s research). How many more times does it have to happen before people stop using AI to look after something important???
The promptfondlers did it, they made a computer which doesn’t do what you tell it to do
A computer that both does what you don’t tell it to do and doesn’t do what you tell it to do. I didn’t think we could do it but - I tell you what - it’s been done.
Maybe I should apply to be a director of AI safety at Meta. I know one safety measure that works: don’t use AI.
What, Ctrl-C wouldn’t work? kill -9?
Before they could ask grok how to stop a process it was already too late.
Not that it mattered, as Grok’s advice to become the reichschancellor actually didn’t fix this problem.
You assume these people installing experimental non-deterministic software on their computers would know how to purge a process (or, you know, not to hook up vibe-coded slop to their inbox), but here we are. To get a director job at a big company, the main thing you need is an MBA, a willingness to do whatever the CEO asks of you, and either a sociopathy or psychopathy diagnosis (sorry for the repetition, I know I already said MBA). Technical skills are “nice to have”.
MicroSlop’s new xbox CEO has a background in AI and is worried about birthrates.
Can’t wait for her lesswrong handle to leak.
The article tries to fact check Asha Sharma’s (the new CEO) claim that
fertility rates are declining, the average birthrate in the ’90s when we were growing up was, like, 3, and now it’s 2.3, and in 2050 it’s estimated to be below replacement
Unfortunately, they forgot that countries other than the US exist, and it didn’t occur to them that she could be talking about global fertility rates. In which case the claim is pretty much correct.
Embarrassing.
I mean, sure, but it’s still the CEO of XBOX on her second day on the job throwing her hat in the legendarily sus declining birthrates discourse in service of AI solutionism, it’s not nothing.
Usually AI boosters are claiming that soon most humans will be economically useless, not that it would be terrible if there were fewer white people. One reason people avoid having children is that they feel economically insecure and doubt there will be respected places in society for their offspring.
Dwarkesh Patel is the only other Indian American I have seen who is friends with our friends.
From fellow traveler stats consultant John Mount:
https://johnmount.github.io/mzlabs/JMWriting/WeAreCookedLLMs.html
Somehow he manages to touch on so many different subplots, a shotgun sneer instead of snipe
if “tech-bro” plus a LLM is a “100x engineer”, then “bro” isn’t needed for much longer as the LLM alone must be a “99x engineer.” However, I don’t think “bro plus” is often really a 100x engineer, and the LLM alone isn’t a 99x engineer. However, “bro plus” may outlast their peers who make the mistake of trying to do the actual work in place of talking LLMs up.
The above may or may not be the case. But if it is, then it is the LLM-bros (which include non-technologists, con artists, financiers, men and women) that are destroying everything - not the LLMs.
The problem with this iteration is the full court press of finance and technology. The major players are using financing to dump results at a price way below production costs. This isn’t charity, it is to demoralize and kill competition.
claiming “after we take over the world we will consider adding Universal Basic Income (UBI)”. The LLM bros already have a lot of the money, and they are not even rehearsing diverting it into basic income now. Why does one believe they would do that when they also have all of the power?
You don’t have to hand it to Altman, but he did fund the largest UBI experiment through Open Research with his ill-gotten gains. OTOH, one interpretation of that data was that UBI “decreases the labor supply”, which was then used directly as an argument against it.
Any worry about scope or power of LLMs is fed back as an alignment threat so dire that only the current LLM leaders should be allowed to continue work (inviting regulatory capture). Any claim the LLMs don’t work is fed back as “you are prompting it wrong”
Orbital deployment makes all of radiation tolerance, connectivity, power, maintenance, and heat dissipation much harder and much more expensive. We are still at a time where putting an oven or air-frier in space is considered noteworthy (China 2025, NASA 2019 ref).
air friers IN SPACE ha
I am more worried about the LLM-bros and their auto-catalytic money doomsday machine than about the LLMs themselves.
100% - ACMDM is a nice turn of phrase as well.
if a Franciscan priest gets really good at basketball, is he considered an air friar
https://www.adexchanger.com/ai/one-chatbots-journey-to-introducing-ads-that-dont-suck/
Often, the ad loads before the chatbot’s query response, said Baird, and Koah’s goal is to “deliver such a relevant result to the user that they just click on the ad before the result loads.”
LLMs’ bad performance and inefficiency is a feature to /someone/. And chatbots are themselves not immune to enshittification.
sharing this channel’s posts is the equivalent of shooting fish in a barrel, but http://youtube.com/post/UgkxoSpDpLNEr9WawVXnl5Mlw4NeQ6-XsLjl this really just feels like an excuse to repost that METR graph. also wtf is the graph on top
Ladybird stans on SuicideWatch rn
I love* how the AI stans never get tired of proselytizing.
The result was about 25,000 lines of Rust, and the entire port took about two weeks. The same work would have taken me multiple months to do by hand.
*Love, as in: “I love to get my eyelids scraped with a cheesegrater.”
stupid question I probably asked already in the past: dafuq is a ladybird?
Imagine if a browser was fascist
I really need a way to forget things in manner where I at least remember that I do not need to know certain things.
Unfortunately booze is the blunt instrument I have, so bottoms up.
A WIP browser implementation.
and the W is for “wailing”
Imagine shaving a racist yak
Looks like they’re gonna ruin BattleBots with AI somehow. Bright Data appear to be web scraping bastards as a service.
I’ll never forgive them for what they did to the 80 lb slab of rotating steel.

Oh god, Bright Data…
I know of them because they were trying to get into the Minecraft modding sphere by getting people to put their ““SDKs”” into their mods. (though not the full SDK initially, just Telemetry™ about what kinds of computers modded players have)
https://notes.highlysuspect.agency/blog/who_is_bright_data/ has more info about it
Thanks, this is a nifty read; I’m appreciating having a look into the world of the bastards who are ruining the web with residential proxy/botnet operations. I had kind of (mistakenly) assumed that the scrapers mostly relied upon IoT trash and hacked Fire sticks. We really can’t have nice things anymore, huh?
The company is embroiled in legal action in Israel. After it filed suit against a former employee, he countersued, alleging that Luminati is widely used for click fraud. As part of the suit, it was revealed that the spyware company NSO Group was a Luminati client.
Well that escalated quickly
PS: i really really wish my special interests would quit touching
https://futurism.com/artificial-intelligence/rentahuman-musk-ai h/t naked capitalism
Liteplo is the genius behind RentAHuman, an online marketplace where humans can lease out their bodies to autonomous AI agents.
gah
Last week, Wired writer Reece Rogers offered his body up to the platform, finding that most of the jobs offered were scams to promote other AI startups.
lmao of course they were
it’s always the Elon Musk fans, isn’t it.
and on the topic of Futurism articles on Elon Musk: https://futurism.com/future-society/court-trouble-jury-hates-elon-musk
one word: LMFAOOOO
Forget who said it (I think e.w. niedermeyer) but if you were a true Musk Hater you would lie your way into that jury no matter the cost
It takes dedication, but the payoff is too big to not try
Not… sneer? What is this?!
Nuke your socials for the trial
Hardest choices, strongest wills, etc.
Imagine the book you could write at the end
Revolutionary Sneerpuku
Well, just don’t use your real name online.
Starting this Stubsack off with one programmer’s testimony on the effects of the LLM rot:
For the record, I work at a software company that employs ~10k developers.
Before LLMs, I’d encounter [software engineers that seem completely useless or lacking in basic knowledge] a couple of times a month, but I interact with a lot of engineers, specifically the ones that need help or are new at the company or industry at large, so it’s a selected sample. Even the most inexperienced ones are willing and able to learn with some guidance.
After LLMs, there’s been a significant uptick, and these new ones are grossly incompetent, incurious, impatient, and behave like addicts if their supply of tokens is at all interrupted. If they run out of prompt credits, it’s an emergency because they claim they can’t do any work at all. They can’t even explain the architecture of what they are making anymore, and can’t even file tickets or send emails without an LLM writing it for them, and they certainly lack any kind of reading comprehension.
It’s bleak and depressing, and makes me want to quit the industry altogether.
Jesus fucking christ I need to invent a time machine so I can go back and make my past self be an electrician instead because this. Commercial software engineering has absolutely been captured by some of the silliest people and trends out there.
Article on the ick generated by AI shit, from the perspective of a woman: “They Built Stepford AI and Called It “Agentic””, talking about how women adopt it less, and giving a reason why this might be so.
On a personal note (I’m a man for the record), while I normally get the uncanny valley effect a lot less than normal people, I do notice it a lot with AI generated people, really odd experience that.
(Author does seem to be a pro AI person however).
E: thanks everybody for being so critical about it, should have read the whole article (and not ignored the substack red flag) before posting it here so uncritically.
some parts intriguing, but mostly disappointing. several chunks of the text felt AI-generated. no fewer than 34 “it’s not X but Y”'s, by my count, and the out-of-nowhere typographies / tables definitely smell of slop. and obviously, the images definitely were. (can’t even be bothered to fix the typos in photoshop? why make a fake poster for The Stepford Wives??)
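For anyone who wants to replicate that tally on another suspect text, here’s a rough sketch. The regex is a loose heuristic for the construction, not a definitive detector, and the sample string is made up for illustration, not taken from the article:

```python
import re

# Loose heuristic for the "it's not X, but Y" tic: "not ... but ..."
# within a single sentence-ish span (no sentence-ending punctuation between).
PATTERN = re.compile(r"\bnot\b[^.?!]{0,80}?\bbut\b", re.IGNORECASE)

def count_not_but(text: str) -> int:
    """Count non-overlapping occurrences of the 'not X but Y' construction."""
    return len(PATTERN.findall(text))

sample = (
    "It's not a tool, but a partner. "
    "This is not automation but augmentation. "
    "She simply went home."
)
```

Here `count_not_but(sample)` returns 2; the word boundaries keep it from firing on things like “nothing but”, though it will still overcount legitimate uses of the contrast.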
some notes:
- i’m not entirely convinced the revulsion response in women can be explained entirely as a reflective recognition of the subjected female self. maybe it’s also because AI art is entirely bland and/or fuck ugly
- some reproductive labors, in the Marxist-feminist sense, are getting subsumed by AI, sure, but they’re largely the ones that already got subsumed by the computer. we had pagers with scheduling and appointment reminders in the 80’s. about the only thing an LLM can do that our previous tech couldn’t is the customer service / “emotional labor” part, albeit poorly. and the other labors are non-optional – my laundry actually does have to go in the dryer, and no matter how many plastic pictures of clean clothes i generate, they can’t actually go in my closet.
- speaking of, the article appears to use a mangled paraphrase of that Joanna Maciejewska tweet (“I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes”), and then attributes it to “AI enthusiasts” (ew).
- the article notes that reproductive labor is coded feminine and that the assistants that (attempt to) do this labor are designed female, with feminine voices and affects, despite being, y’know, robots. and not women. the next step to me would be to note that this isn’t just reflecting the subjectification of the female and the designation of women to a particular labor class, but actually aiding to construct and reproduce the subject of “female” itself too. maybe throw some Butler in there. but we just breeze right past this. no third-wave? i don’t see any feminist arguments past the 80’s in here
- the typology of wives is total bullshit. “The Open-Source Wife” fuuuuucccckk offfff. but. BUT. i do think there is something correct in there about xAI/Grok/Ani basically being the modern adaptation of Vivian James
- there’s an argument that obviously used to be about AI art, and got transmogrified into a nonsense concept, bordering on colorless green ideas:
Women’s labor is being extracted, automated, and sold back without credit.
- the nonsense below it about “alignment” clearly intends to imply that the machines are only faking being our friends / submissive wives(!!1!).
- but this is okay because women are uniquely suited to interface with AI! this is because (all) women (innately) communicate with the goal of building relationships (female) instead of the utilitarian (manly) execution of transactions (male). there’s an odd essentialist undercurrent that’s not really being challenged here, despite the fact that that would render “female robots” impossible
- “outsource-maxxing” fuuuuuucuk youuuuuuu
- the conclusion of the article is basically “women are uniquely capable of interacting with (female) AI because they’ve BEEN the female AI”, with a call-to-action for women to basically… well. resume that role, except now using the AI as your girlbestfriend.
Thanks for the deep dive on stuff that is wrong with it (also the others).
- I started to raise my eyebrows when the Second Brain got lumped into the AI wife pile.
Bro, I just write shit down. I am in fact taking responsibility for my schedule and handling my emotions without relying on external support. Am I turning to (checks notes…) the notebook industry for a technological replacement wife?
I mean some valid points, and some of it might explain the gendered AI adoption gap, but too much generalization.
This is ahistorical slop. Previously, on Lobsters, I explained the biggest tell here: the overuse and misuse of em-dashes. There’s also some bad sentence structure and possibly-confabulated citations to unnamed papers. The images can’t be trusted.
The worst problem here is that the article believes that history starts about halfway through the Industrial Revolution. Computing was not gendered prior to the Harvard Computers in the 1880s. Prior to the Industrial Revolution, women spent most of their time on textiles and were compensated for their time and labor; there is a series from Bret Devereaux on the details in ancient and pre-industrial Europe, and a decent summary on /r/AskHistorians of the industrial transition from about 1760 to 1860. The article suggests that the Victorian way of treating women as nannies and housewives was historically universal. Claude identifies as non-binary (or, rather, Claude’s authors told it to identify as such) but uses male pronouns when pressed into a binary theory. The Creation of Patriarchy is a real book but only describes the origins of masculine Abrahamic beliefs rather than some sort of unifying principle, and is easily disproven in its universality by looking at contemporary ancient societies like Sparta or the Iroquois Confederation; there’s also a Devereaux series on Sparta.
The author’s gotta be one of the clearest demonstrations of critihype seen yet. She is selling an anthology on Amazon called How Not To Use AI, which presumably she forgot to consult prior to prompting this essay.
Interesting link but it moves into AI hype near the end.
Yeah was quite disappointed by that, also the anthropomorphization of AI by the end.
the metr graph has gotten weird https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/ the 50% success rate graph went from 6 hours to 14 hours, but the 80% success rate graph only went from 55 minutes to 1 hour and 3 minutes. I have an itch that it’s a fluke or outlier but it’s also very possible that LLM coding’s just weird like that
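Taking the quoted numbers at face value, the gap between the two growth rates is easy to put a figure on (this is just arithmetic on the numbers above, not METR’s own analysis):

```python
# Growth factors implied by the two sets of METR headline numbers quoted above.
growth_50 = 14 / 6    # 50% success horizon: 6 hours -> 14 hours
growth_80 = 63 / 55   # 80% success horizon: 55 minutes -> 63 minutes

print(f"50% horizon grew {growth_50:.2f}x, 80% horizon grew {growth_80:.2f}x")
```

The 50% horizon more than doubled while the 80% horizon grew about 15%, which is the weirdness being pointed at.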
You’re giving them too much credit. The entire methodology of “determine how long it takes humans to do a task and use that as a proxy for difficulty” was somewhat abstract and questionable in the first place, but with good rigorous implementation, it might have still been worthwhile.
However, their actual methodology is awful. Most of their tasks only have 3 or so human attempts to do them to create a baseline (from a relatively small pool of baseliners), and for longer tasks, they entirely went with a guess-estimate on task completion time. The error bars they show are just for the model trying to do the task (and they are already absurdly big, especially for this most recent jump), if you added in error bars accounting for variability in the task baseline itself, the error bars would get even bigger.
This blog goes into more details explaining the nuances of the problems with their methodology: https://arachnemag.substack.com/p/the-metr-graph-is-hot-garbage
To give a simple example: if the numerous problems resulted in a systematic bias in task estimation, linear improvement could easily look exponential. Suppose they had 5 tasks with true baselines (putting aside questions of methodology validity such that “true” is even meaningful) of 15 minutes, 30 minutes, 45 minutes, 1 hour, and 1 hour 15 minutes respectively, but flaws with the human baseliners (for example, lacking specialized skills for longer tasks, phoning it in because they are paid by the hour, METR guesstimating the task time) meant their numbers for those 5 tasks were 15 minutes, 1 hour, 2 hours, 4 hours, and 8 hours. Then successive improvements to reach 50% success on each task would look exponential even though they are actually linear.
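The systematic-bias effect can be sketched numerically; the task times here are the hypothetical ones from the example above, not METR’s actual data:

```python
# True task times grow linearly; recorded baselines are inflated more for
# longer tasks, roughly doubling each step.
true_times = [15, 30, 45, 60, 75]         # minutes, linear progression
recorded_times = [15, 60, 120, 240, 480]  # minutes, biased baselines

# Suppose each successive model generation clears exactly one more task
# (steady, linear progress). The reported "50% horizon" for generation g
# is the *recorded* time of the hardest task cleared.
reported = recorded_times  # roughly doubles per generation -> looks exponential
actual = true_times        # grows by a constant 15 minutes -> linear

growth = [reported[i + 1] / reported[i] for i in range(4)]
print("reported growth factors per generation:", growth)
```

The reported horizon doubles (or more) per generation even though the underlying capability only improves by a constant 15 minutes, which is exactly how biased baselines could fake an exponential trend.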
METR maybe deserves a tiny bit of credit for trying something even vaguely related to practically meaningful task (compared to all the completely irrelevant bs benchmarks that would be worthless even if they were accurate). But I wouldn’t give them any more credit than that, its just that the bar is so low.
Broke: The METR studies are the best research on impacts of AI productivity available today.
Woke: The METR studies are hot garbage.
Bespoke: Both. It’s both.
That’s a great summary and an accurate indictment of the “study” of LLMs.
Doing what METR tried to do right would in fact be really expensive and hard, but for something that the fate of the world allegedly depends on (according to both boosters and doomers), you’d think they would manage to find the money for it. But the LLM companies don’t actually want accurate numbers, they want hype.
oh yeah I 100% agree that their methodology is flawed, and that blog does a pretty good job of outlining the issues. I just thought the absolutely huge gap was both interesting and funny. Their absolutely huge error bars are not a good sign, between that and the gap it really feels like someone screwed up