

Eliezer is trying to get around that with some weird conditions and gamesmanship on the prediction market question:
This market resolves N/A on Jan 1st, 2027. All trades on this market will be rolled back on Jan 1st, 2027. However, up until that point, any profit or loss you make on this market will be reflected in your current wealth; which means that purely profit-interested traders can make temporary profits on this market, and use them to fund other permanent bets that may be profitable; via correctly anticipating future shifts in prices among people who do bet their beliefs on this important question, buying low from them and selling high to them.
I don’t think that actually helps. But Eliezer is committed to prediction markets being useful on a nearly ideological level, so he has to come up with weird, complicated strategies to try to get around their fundamental limits.
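For what it's worth, here's a rough back-of-the-envelope sketch (in Python, with made-up numbers) of how that mechanism is supposed to work, assuming the rollback simply reverses whatever you gained or lost on the N/A market while leaving any bets those temporary gains funded untouched. That's my reading of the quoted rules, not a claim about how Manifold actually implements the accounting:

```python
# Hypothetical sketch of the "temporary profit" mechanism described above.
# All numbers are invented; this only illustrates the accounting, under the
# assumption that the rollback reverses gains on the N/A market but leaves
# the side bets those gains funded untouched.

balance = 1000            # starting play-money balance (assumed)

# Buy low and sell high on the market that will later resolve N/A.
buy_cost = 200            # spent buying shares at a low price
sell_value = 350          # received selling those shares at a higher price
temp_profit = sell_value - buy_cost
balance += temp_profit
print(f"after temporary profit: {balance}")    # 1150

# Use the temporary profit to fund a separate, permanent bet elsewhere.
side_stake = 150
side_payout = 240         # assume the side bet happens to pay out
balance += side_payout - side_stake
print(f"after permanent side bet: {balance}")  # 1240

# Jan 1st, 2027: trades on the N/A market are rolled back, reversing the
# temporary profit but not the winnings it helped fund.
balance -= temp_profit
print(f"after rollback: {balance}")            # 1090
```

In other words, as I read it, the market only rewards traders who can anticipate its own price swings well enough to realize gains before the rollback, which is exactly the convoluted part.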



Rationalist Infighting!
tl;dr: one of the MIRI-aligned rationalists (Rob Bensinger) complained about how EA actually increased AI risk in the long run by promoting OpenAI and then Anthropic. Scott Alexander responded aggressively, basically saying they are entirely wrong and also bad at public communications! Various lesswrongers weigh in, seemingly blind to the irony and hypocrisy!
Some highlights from the original tweets and the lesswronger comments on them:
Scott Alexander tries blaming Eliezer for hyping up AI and thus contributing to OpenAI in the first place. Just a reminder: Scott is one of the AI 2027 authors, so he really doesn’t have room to complain about rationalists creating crit-hype.
Scott Alexander tries claiming SBF was a unique one-off in the rationalist/EA community! (Anthropic’s leadership has been called out on the EA forums and lesswrong for a similar pattern of repeated lying.)
Rob Bensinger is indirectly trying to claim that Eliezer/MIRI have been serious, forthright, honest commentators on AI theory and policy, as opposed to Open-Phil/EA/Anthropic, which have been “strategic” with their public communication, to the point of dishonesty.
habryka is apparently on the verge of crashing out? I can’t tell if they are planning on just quitting twitter or quitting their attempts at leadership within the rationalist community. Quitting twitter is probably a good call no matter what.
Loads of tediously long posts, mired in that long-winded rationalist way of talking, full of rationalist in-group jargon for conversations and conflict resolution.
Disagreement on whether Ilya Sutskever’s $50 billion startup is going to contribute to AI safety or just continue the race to AGI.
Arguments over who sides with the EAs vs. Open Philanthropy vs. MIRI!
Argument over the definition of gaslighting!
To be clear, I agree with the complaints about EA and Anthropic; I just also think MIRI has its own similar set of problems. So they are both right: all of the rationalists are terrible at pursuing their alleged goal of stopping AI Doom.
I did sympathize with one lesswronger’s comment: