

sharing this channel’s posts is the equivalent of shooting fish in a barrel, but http://youtube.com/post/UgkxoSpDpLNEr9WawVXnl5Mlw4NeQ6-XsLjl this really just feels like an excuse to repost that METR graph. also wtf is the graph on top


it’s always the Elon Musk fans, isn’t it.
and on the topic of Futurism articles on Elon Musk: https://futurism.com/future-society/court-trouble-jury-hates-elon-musk
one word: LMFAOOOO


oh yeah, I 100% agree that their methodology is flawed, and that blog does a pretty good job of outlining the issues. I just thought the absolutely huge gap was both interesting and funny. Their huge error bars are not a good sign either; between that and the gap, it really feels like someone screwed up


the METR graph has gotten weird https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/ the 50% success rate graph went from 6 hours to 14 hours, but the 80% success rate graph only went from 55 minutes to 1 hour and 3 minutes. I have a hunch that it’s a fluke or outlier, but it’s also very possible that LLM coding’s just weird like that


deleted by creator


the AI safety crowd cuts Anthropic way too much slack. Oh, they’re not running CSAM-generating MechaHitler? Oh, they’re not collaborating with the US government to recreate 1984? I’m so proud of them for doing the bare minimum. They still took money from the UAE and Qatar (something Dario Amodei himself admitted was going to hurt a lot of people, but he took it anyways because “they couldn’t miss out on all those valuations”), and they still downloaded massive amounts of pirated content to train their chatbot. They’re still doing shady shit; don’t let them off the hook because they’re slightly less evil than the competition


Damn, she went in on Yud (not that he doesn’t deserve it)
I also found this article about the doc in the replies of that post https://buttondown.com/maiht3k/archive/a-tale-of-two-ai-documentaries/ and it’s a very interesting read


Update: the screenshot is unfortunately not LLM generated, found the full version on Reddit SneerClub https://web4.ai/


New “AI is not a bubble” video just dropped https://youtu.be/wDBy2bUICQY a lot of skeptical comments pointing out the flaws in this argument while the creator tries to defend themselves with mostly mediocre lines


I took a deeper look into the documentary, and it does go into both the pessimist and optimist perspectives, so their inclusion makes more sense. and yeah, I was trying to get at how they’re skeptical of the TESCREAL stuff and of current LLM capabilities


I poked around the IMDB page, and there are reviews! currently it’s sitting at an 8.5/10 with 31 ratings (though seemingly no written reviews). the Metacritic score is 51/100 with 4 reviews, and there are 4 external reviews


HOLY SHIT LMFAOOOOO


Sam Altman and the other CEOs being there is such a joke. “this technology is so dangerous guys! of course I’m gonna keep blocking regulation for it, I need to make money after all!” Also, I’m shocked Emily Bender and Timnit Gebru are there, aren’t they AI skeptics?


Surprised it’s a term they stole and not one they made up. But yeah the whole idea of “AGI will solve all our problems” is just silly


huh, you’re right. usually this channel provides a source for the things they share, but this time there’s nothing.


the full paper is here: https://x.com/alexwg/status/2022292731649777723 and immediately there are two references to Nick Bostrom and Scott Alexander


can’t believe scammers are losing their jobs to AI


I’m pretty sure most of this has already been posted to this thread (I know the “AI published a hit piece on me” thing was), but more Moltbook/Openclaw/whatever-it’s-called nonsense
this is like the fourth time an AI agent has completely deleted something important (I remember an article about an AI deleting all of a scientist’s research). How many more times does it have to happen before people stop using AI to look after something important???