

Guns don’t kill people, people kill people.
Lesswrong and SSC: capable of extreme steelmanning of… checks notes… occult mysticism (including divinatory magic), Zen Buddhism-based cults, people who think we should end democracy and have kings instead, Richard Lynn, Charles Murray, Chris Langan, techbros creating AI they think is literally going to cause mankind’s extinction…
Not capable of even a cursory glance into their statements, much less steelmanning: sneerclub, Occupy Wall Street
we can’t do basic things
That’s giving them too much credit! They’ve generated the raw material for all the marketing copy and jargon pumped out by the LLM companies producing the very thing they think will doom us all! They’ve served a small but crucial role in the influence farming of the likes of Peter Thiel and Elon Musk. They’ve served as an entry point to the alt-right pipeline!
dath ilan?
As a self-certified Eliezer understander, I can tell you dath ilan would open up a micro-prediction market on various counterfactual ban durations. Somehow this prediction market would work excellently despite a lack of liquidity and multiple layers of skewed incentives that should outweigh any money going into it. Also, Said would have been, much earlier, sent to a ~~reeducation camp~~ quiet city and ~~sterilized~~ denied UBI if he reproduces, for not conforming to dath ilan’s norms.
That too.
And judging by how all the elegantly, charitably written blog posts on the EA forums did jack shit to stop the second manifest conference from having even more racists, debate really doesn’t help.
Yes, thanks. I always forget how many enters I need to hit.
I’m feeling an effort sneer…
For roughly equally long have I spent around one hundred hours almost every year trying to get Said Achmiz to understand and learn how to become a good LessWrong commenter by my lights.
Every time I read about a case like this my conviction grows that sneerclub’s vibe-based moderation is the far superior method!
The key component of making good sneer club criticism is to never actually say out loud what your problem is.
We’ve said it multiple times, it’s just a long list that is inconvenient to say all at once. The major things that keep coming up: The cult shit (including the promise of infinite AGI God heaven and infinite Roko’s Basilisk hell; and including forming high demand groups motivated by said heaven/hell); the racist shit (including the eugenics shit); the pretentious shit (I could actually tolerate that if it didn’t have the other parts); and lately serving as crit-hype marketing for really damaging technology!
They don’t need to develop protocols of communication that produce functional outcomes
Ahem… you just admitted to taking a hundred hours to ban someone, whereas dgerard and co. kick out multiple troublemakers in our community within a few hours tops each. I think we are winning on this one.
For LessWrong to become a place that can’t do much but to tear things down.
I’ve seen some outright blatant crank shit (as opposed to the crank shit that works hard to masquerade as more legitimate science) pretty highly upvoted and commented positively on lesswrong (GeneSmith’s wild genetic engineering fantasies come to mind).
I missed that it’s also explicitly meant as rationalist esoterica.
It turns in that direction about 20ish pages in… and spends hundreds of pages on it, greatly inflating what could have been a much more readable length. It then gets back to actual plot events after that.
I hadn’t heard of MAPLE before; is it tied to lesswrong? From the focus on AI it’s at least adjacent to it… so I’ll add that to the list of cults lesswrong is responsible for. So all in all, we’ve got the Zizians, Leverage Research, and now MAPLE for proper cults, and stuff like Dragon Army and Michael Vassar’s groupies for “high demand” groups. It really is a cult incubator.
I actually think “Project Lawful” started as Eliezer having fun with glowfic (he has a few other attempts at glowfics that aren’t nearly as wordy… one of them actually almost kind of pokes fun at himself and lesswrong), and then as it took off and the plot took the direction of “his author insert gives lectures to an audience of adoring slaves” he realized he could use it as an opportunity to squeeze out all the Sequence content he hadn’t bothered writing up in the past decade^. And that’s why his next attempt at an HPMOR-level masterpiece is an awkward-to-read rp featuring tons of adult content in a DnD spinoff, and not more fanfiction suitable for optimal reception by the masses.
^(I think Eliezer’s writing output dropped a lot in the 2010s compared to when he was writing the sequences, and the stuff he has written over the past decade is a lot worse. Like the sequences are all in bite-size chunks, and readable in chunks in sequence, and often rephrase legitimate science in a popular way, and have a transhumanist optimism to them. Whereas his recent writings are tiny little hot takes on twitter and long, winding rants about why we are all doomed on lesswrong.)
Yeah, even if computers predicting other computers didn’t require overcoming the halting problem (and thus contradicting the foundations of computer science), actually implementing such a thing reliably with computers smart enough to qualify as AGI seems absurdly impossible.
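(For anyone who hasn’t seen the halting-problem objection spelled out, here’s a minimal sketch in Python of the classic diagonalization argument. `predicts_halt` and `spite` are hypothetical names made up for this sketch, not real library code; the whole point is that no such perfect predictor can exist.)

```python
# Toy version of the diagonalization argument: suppose some hypothetical
# function could perfectly predict whether program(argument) halts.
# (No such always-correct function can exist; this is purely illustrative.)

def predicts_halt(program, argument) -> bool:
    """Hypothetical perfect halting predictor (cannot actually be written)."""
    raise NotImplementedError("no such predictor exists")

def spite(program):
    """Does the opposite of whatever the predictor says it will do."""
    if predicts_halt(program, program):
        while True:          # predictor said "halts", so loop forever
            pass
    return "halted"          # predictor said "loops", so halt immediately

# Asking the predictor about spite(spite) forces a contradiction: whatever
# predicts_halt(spite, spite) answers, spite does the opposite. So a perfect
# predictor is impossible, and "computers reliably predicting other
# computers" in general runs into the same wall.
```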
Weird rp wouldn’t be sneer-worthy on its own (although it would still be at least a little cringe); it’s contributing factors like…
the constant IQ fetishism (Int is superior to Charisma but tied with Wis and obviously a true IQ score would be both Int and Wis)
the fact that Eliezer cites it like serious academic writing (he’s literally mentioned it to Yann LeCun in twitter arguments)
the fact that in-character lectures are the only place Eliezer has written up many of his decision theory takes he developed after the sequences (afaik, maybe he has some obscure content that never made it to lesswrong)
the fact that Eliezer thinks it’s another HPMOR-level masterpiece (despite how wordy it is, HPMOR is much more readable; even authors and fans of glowfic usually acknowledge the format can be awkward to read and most glowfics require huge amounts of context to follow)
the fact that the story doubles down on the HPMOR flaw of confusion about which characters are supposed to be author mouthpieces (putting your polemics into the mouths of characters working for literal Hell… is certainly an authorial choice)
and the continued worldbuilding development of dath ilan, the rationalist utopia built on eugenics and censorship of all history (even the Hell state was impressed!)
…At least lintamande has the commonsense understanding of why you avoid actively linking your bdsm dnd roleplay to your irl name and work.
And it shouldn’t be news to people that KP supports eugenics, given her defense of Scott Alexander or comments about super babies, but possibly it is, and the headline of “weird roleplay” will draw attention to it.
To be fair to DnD, it is actually more sophisticated than the IQ fetishists, it has 3 stats for mental traits instead of 1!
If your decision theory can’t address weird (but totally plausible in the near future!) hypotheticals with omniscient God-AIs offering you money in boxes if you jump through enough cognitive hoops, what is it really good for?
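(For anyone who hasn’t memorized the lore: the “money in boxes” bit is Newcomb’s problem. A toy sketch of the payoff arithmetic, using the traditional thought-experiment dollar amounts and a made-up `payoff` helper, nothing from the thread itself:)

```python
# Toy payoff table for the Newcomb-style "money in boxes" hypothetical.
OPAQUE_PRIZE = 1_000_000   # put in the opaque box only if the predictor foresaw one-boxing
CLEAR_PRIZE = 1_000        # always sitting in the transparent box

def payoff(choice: str, predictor_accuracy: float) -> float:
    """Expected winnings for 'one-box' or 'two-box' given the predictor's accuracy."""
    if choice == "one-box":
        # You get the opaque box's contents, which were filled iff the prediction was right.
        return predictor_accuracy * OPAQUE_PRIZE
    # Two-boxing: you always get the clear box, plus the opaque box only if
    # the predictor was wrong about you.
    return CLEAR_PRIZE + (1 - predictor_accuracy) * OPAQUE_PRIZE

for acc in (1.0, 0.9, 0.5):
    print(acc, payoff("one-box", acc), payoff("two-box", acc))
# With a near-perfect predictor, one-boxing "wins" even though two-boxing
# dominates once the boxes are already filled -- which is the entire reason
# the hypothetical gets treated as a crisis for decision theory.
```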
It’s always the people you most expect.
It’s pretty screwed up that humblebragging about putting their own mother out of a job is a useful opening for selling a scam service. At least the people that buy into it will get what they have coming?
Nice job summarizing the lore in only 19 minutes (I assume this post was aimed at providing full context to people just joining or at least relatively new to tracking all this… stuff).
Some snarky comments, not because it wasn’t a good summary or because it should have included these (all the asides you could add could easily double the length and leave a casual listener/reader more confused), but because I think they are funny and I need to vent:
You’ll see him quoted in the press as an “AI researcher” or similar.
Or decision theorist! With an entire one decision theory paper that he didn’t bother getting through peer review because the reviewers wanted, like, actual context, and an actual decision theory and not just handwaves at paradoxes on the fringes of decision theory.
What Yudkowsky actually does is write blog posts.
He also writes fanfiction!
I’m not even getting to the Harry Potter fanfic, the cult of Ziz, or Roko’s basilisk today!
Yeah this rabbit hole is deep.
The goal of LessWrong rationality is so Eliezer Yudkowsky can live forever as an emulated human mind running on the future superintelligent AI god computer, to end death itself.
Yeah in hindsight the large number of ex-Christians it attracts makes sense.
And a lot of Yudkowsky’s despair is that his most devoted acolytes heard his warnings “don’t build the AI Torment Nexus, you idiots” and they all went off to start companies building the AI Torment Nexus.
He wrote a lot of blog posts about how smart and powerful the Torment Nexus would be, and how we really need to build the Anti-Torment Nexus, so if he had proper skepticism of Silicon Valley and startup/VC culture, he really should have seen this coming.
There was also a huge controversy in Effective Altruism last year when half the Effective Altruists were shocked to discover the other half were turbo-racists who’d invited literal neo-Nazis to Effective Altruism conferences. The pro-racism faction won.
I was mildly pleasantly surprised to see there was a solid half pushing back in the comments in response to the first manifest, but it looks like the anti-racism faction didn’t get any traction to change anything and the second manifest conference was just as bad or worse.
I think the problem is that the author doesn’t want to demonize any of those actual ideologies that oppose TESCREALism either explicitly or incidentally because they’re more popular and powerful and because rather than being foundationally opposed to “Progress” as he defines it they have their own specific principles that are harder to dismiss.
This is a good point. I’ll go even further and say a lot of the component ideologies of anti-TESCREALism are stuff that this author might (at least nominally claim to) be in favor of, so they can’t name the specific ideologies.
I feel like lesswrong’s front page has what would be a neat concept in a science fiction story at least once a week. Like what if an AGI had a constant record of its thoughts, but it learned to hide what it was really thinking in them with complex steganography! That’s a solid third-act twist for at least a B sci-fi plot, if not enough to carry a good story by itself. Except lesswrong is trying to get their ideas passed in legislation and they are being used as the hype wing of the latest tech-craze. And they only occasionally write actually fun stories, as opposed to polemic stories beating you over the head with their moral or ten-thousand-word pseudo-academic blog posts.
That’s true. “Passing itself off as scientific” also describes Young Earth Creationism and Intelligent Design and various other pseudosciences. And in terms of who is pushing pseudoscience… the current US administration is undeniably right-wing and opposed to all mainstream science.
Also, I would at least partially disagree with this:
Very few of the people making this argument are militant atheists who consider religion bad in of itself.
I would identify as an atheist, if not a militant one. And looking at Emile Torres’ Wikipedia page, he is an atheist also. Judging by the uncommon occasions it comes up on sneerclub, I think a lot of us are atheist/agnostic. Just not, you know, “militant”. And in terms of political allegiance, a lot of the libertarians on lesswrong are excited for the tax cuts and war on woke of the Trump administration even if it means cutting funding to all science and partnering up with completely batshit Fundamentalist Evangelicals.
Apparently Eliezer is actually against throwing around P(doom) numbers: https://www.lesswrong.com/posts/4mBaixwf4k8jk7fG4/yudkowsky-on-don-t-use-p-doom ?
The objections to using P(doom) are relatively reasonable by lesswrong standards… but this is in fact once again all Eliezer’s fault. He started a community centered around 1) putting overconfident probability “estimates” on subjective, uncertain things and 2) the need to make a friendly AI-God; he really shouldn’t be surprised that people combine the two. Also, he has regularly expressed his certainty that we are all going to die to Skynet in terms of ridiculously overconfident probabilities, so he shouldn’t be surprised that other people followed suit.