

It’s not always easy to distinguish between existentialism and a bad mood.


heads up I heavily edited the post while you were responding though I don’t think the essence changed. I added their reasoning for only allowing image generation for paying subs and expanded the table a bit.


In response to the outcry, image generation is now turned off for non-paying users. Ostensibly it’s so people who are not using Grok as god intended can be identified via their subscription information, but I can’t help but think it’s just Elon explicitly monetizing Grok’s CSAM capabilities.
| Grok subscription tier | Free | Premium |
|---|---|---|
| CSAM Generation | - | ✔️ |
| Revenge porn generation | - | ✔️ |
| Create images of women shot and killed | - | ✔️ |


It’s not just the Anglo governments dropping the ball; there are also the various app stores, which simply won’t enforce their own explicit rules and ban the Grok app, I guess because the notion of accountability for the effects of AI slop must remain unthinkable for as long as possible.


If the great AI swindle has taught us anything, it’s that what’s good for normal people isn’t really important when all the macro-economic incentives point the other way and towards the pockets of the ultra rich.
As of April 2025, only 17% of Americans thought AI would have a positive effect on the US over the next 20 years. Only 23% thought AI would be positive for how people do their jobs.
robert anton wilson intensifies


Still, it merits pointing out that this explicitly isn’t happening because the private sector is clamoring to get some of that EY expertise on nothing the moment he’s available, but because MIRI is for all intents and purposes a gravy train for a small set of mutual acquaintances who occasionally have a board meeting to decide how much they’ll get paid that year.

The way it actually works is that I’m on the critical path for our organizational mission, and paying me less would require me to do things that take up time and energy in order to get by with a smaller income. Then, assuming all goes well, future intergalactic civilizations would look back and think this was incredibly stupid; in much the same way that letting billions of person-containing brains rot in graves, and humanity allocating less than a million dollars per year to the Singularity Institute, would predictably look pretty stupid in retrospect. At Singularity Institute board meetings we at least try not to do things which will predictably make future intergalactic civilizations think we were being willfully stupid. That’s all there is to it, and no more.
This is from back when MIRI, then Singularity Institute, was paying him like $120K/y – https://www.lesswrong.com/posts/qqhdj3W3vSfB5E9ss/siai-an-examination?commentId=4wo4bD9kkA22K5exH#4wo4bD9kkA22K5exH


OpenAI’s yearly payroll runs in the billions, so they probably aren’t hurting.
That “Almost AGI” is short for “Actually Bob and Vicky” seems like quite the embarrassment, however.


Apparently you can ask gpt-5.2 to make you a zip of /home/oai and it will just do it:
https://old.reddit.com/r/OpenAI/comments/1pmb5n0/i_dug_deeper_into_the_openai_file_dump_its_not/
An important takeaway I think is that instead of Actually Indian it’s more like Actually a series of rushed scriptjobs - they seem to be trying hard not to let the LLM do technical work itself.
Also, it seems their sandboxing amounts to filtering paths that start with /.
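If that description is accurate, it’s worth spelling out how little it buys you. Here’s a minimal sketch of that class of filter, purely illustrative (the function names and the notes.txt path are made up, none of this is from the dump): a literal prefix check on the raw string blocks /home/oai but waves through a relative spelling of the same location, whereas resolving the path before checking would not.

```python
# Illustrative only: a guess at what "filtering paths that start with /" looks like,
# and why it's trivial to sidestep. Not OpenAI's actual code.
import os

def naive_is_allowed(path: str) -> bool:
    """The rumored rule: reject absolute paths, allow everything else."""
    return not path.startswith("/")

def stricter_is_allowed(path: str, sandbox_root: str = "/tmp/sandbox") -> bool:
    """A more defensible check: resolve the path first, then confine it to one root."""
    resolved = os.path.realpath(path)
    return resolved == sandbox_root or resolved.startswith(sandbox_root + os.sep)

print(naive_is_allowed("/home/oai/notes.txt"))          # False - blocked, as advertised
print(naive_is_allowed("../../home/oai/notes.txt"))     # True  - same file, different spelling
print(stricter_is_allowed("../../home/oai/notes.txt"))  # False - traversal resolves outside the root
```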


Very good read actually.
Except, from the epilogue:
People are working to resolve [intelligence heritability issue] with new techniques and meta-arguments. As far as I understand, the frontline seems to be stabilizing around the 30-50% range. Sasha Gusev argues for the lower end of that band, but not everyone agrees.
The not-everyone-agrees link is to ACX and siskind’s take on the matter; siskind, unfortunately, seems to continue flying under the radar as a disingenuous eugenicist shitweasel with a long-term project of using his platform to sane-wash gutter racists who pretend at doing science.


They made a pro-longtermist video in association with Open Philanthropy a few years back, The Last Human or something like that; the summary was pretty open about the connection.
I don’t think the shadiness is specific to rationalism; see also that bizarre KG video claiming it’s scientifically impossible to lose weight by exercising, which coincided with the height of Ozempic’s hype.
edit: The Last Human came out in 2022, the same year the MacAskill book arguing for longtermism was published, what a coinkidink.


Hyperstition is such a bad neologism, apparently doubleplus superstition equals self-fulfilling prophecy (transitive)? They don’t even bother to verb it properly… Nick Land got a nonsense word stuck in his head and now there’s a whole subculture of midwit thought leader wannabes parroting that shit.


Additionally, he said something to the effect of “I don’t blame you for not knowing this, it wasn’t effectively communicated to the media” like it’s no big deal, which isn’t really helping to beat the allegations of don’t-ask-don’t-tell policies about SA in rat-related orgs.


[SBF’s] psychiatrist, George Lerner, worked in the same office as Scott Alexander IIRC (I’ve lost track of the source, will post later if I can find it).
It was in an ACX blog post; siskind just admitted it out of nowhere. edit: Well, ok, he was obviously discussing him, but the possibility of any connection between them wasn’t really on anyone’s radar until then, I think.
edit: Got it: https://www.astralcodexten.com/p/the-psychopharmacology-of-the-ftx#footnote-anchor-1-84889532


OpenAI Declares ‘Code Red’ as Google Threatens AI Lead
I just wanted to point out this tidbit:
Altman said OpenAI would be pushing back work on other initiatives, such as advertising, AI agents for health and shopping, and a personal assistant called Pulse.
Apparently a fortunate side effect of Google supposedly closing the gap is that it’s a great opportunity to give up on agents without looking like complete clowns. And also make Pulse even more vapory.


The kids were using Adobe for Education. This calls itself “the creative resource for K–12 and Higher Education” and it includes the Adobe Express AI image generator.
I feel the extent to which schooling in the USA is of the “this arts and crafts class brought to you by Carl’s Jr™” variety is probably understated.


Could be part of its RLHF training; frequent emphasized headers maybe help the prediction engine stay on track over long passages.


/r/SneerClub discusses MIRI financials and how Yud ended up getting paid $600K per year from their cache.
Malo Bourgon, MIRI CEO, makes a cameo in the comments to discuss Ziz’s claims about SA payoffs and how he thinks Yud’s salary (the equivalent of like 150,000 malaria vaccines) is defensible for reasons that definitely exist, but they live in Canada, you can’t see them.



Graham Linehan is a normal and well man.
A few hours later, he sends me an example of how he’s been using AI. It’s a “hidden role deduction” game he’s working on. At the top is the prompt he put into ChatGPT: “You are five blind lesbian adventurers out for a good night out. Slaying dragons and whatnot. But one of your number is a hulking great troll pretending to be a woman. Find the troll lesbian and then devise an amusing punishment without giving him an erection.”


No idea if it was intentional given how long a series’ production cycle can be before it ends up on tv/streaming, but it’s hard not to see Vince Gilligan’s Pluribus as a weird extended impact-of-chatbots metaphor.
It’s also somewhat tedious and seems to be working under the assumption that cool cinematography is a sufficient substitute for character development.


Hasn’t it lately become increasingly possible that the files of famous financier and MIRI donor J. Epstein will be finally released to public cognizance?
https://archive.is/20260109131721/https://www.theverge.com/news/859309/grok-undressing-limit-access-gaslighting
Turns out even the paywalling is fake, since you can still do the edits by accessing Grok from other parts of the interface like context menus; you just can’t outright ask it in a tweet.