

That IQ after a certain level somehow turns into mana points is a core rationalist assumption about how intelligence works.
It’s not always easy to distinguish between existentialism and a bad mood.
Nice to know even pre-LLM AI techniques remain eminently fuckupable if you just put your mind to it.
Didn’t mean to imply otherwise, just wanted to point out that the call is coming from inside the house.
He claims he was explaining what others believe, not what he believes.
Others as in specifically his co-writer for AI2027, Daniel Kokotajlo, the actual ex-OpenAI researcher.
I’m pretty annoyed at having this clip spammed to several different subreddits, with the most inflammatory possible title, out of context, where the context is me saying “I disagree that this is a likely timescale but I’m going to try to explain Daniel’s position” immediately before. The reason I feel able to explain Daniel’s position is that I argued with him about it for ~2 hours until I finally had to admit it wasn’t completely insane and I couldn’t find further holes in it.
Pay no attention to this thing we just spent two hours exhaustively discussing that I totally wasn’t into, it’s not really relevant context.
Also the title is inflammatory only in the context of already knowing him as a ridiculous AI doomer; otherwise it’s fine. Inflammatory would be calling the video “economically illiterate bald person thinks valuations force-buy car factories, China having biomedicine research is like Elon running SpaceX”.
(Are there multiple AI Nobel prize winners who are AI doomers?)
There’s Geoffrey Hinton I guess, even if his 2024 Nobel in (somehow) Physics seemed like a transparent attempt at trend chasing on behalf of the Nobel committee.
Also, add “obvious and overdetermined” to the pile of siskindisms, next to “very non-provably not-correct”.
Scoot makes the case that AGI could have murderbot factories up and running in a year if it wanted to https://old.reddit.com/r/slatestarcodex/comments/1kp3qdh/how_openai_could_build_a_robot_army_in_a_year/
edit: Wrote it up
What is the analysis tool?
The analysis tool is a JavaScript REPL. You can use it just like you would use a REPL. But from here on out, we will call it the analysis tool.
When to use the analysis tool
Use the analysis tool for:
- Complex math problems that require a high level of accuracy and cannot easily be done with “mental math”
- To give you the idea, 4-digit multiplication is within your capabilities, 5-digit multiplication is borderline, and 6-digit multiplication would necessitate using the tool.
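Since the tool is, by its own admission, just a JavaScript REPL, the 6-digit case the prompt warns about is a one-liner. A hedged sketch (the numbers here are made up for illustration):

```javascript
// Sketch of what an "analysis tool" call might compute: exact 6-digit
// multiplication, the kind the prompt says necessitates the REPL.
const a = 123456;
const b = 654321;

// A 6-digit product still fits well under Number.MAX_SAFE_INTEGER
// (2^53 - 1), so plain number multiplication is already exact here.
const product = a * b; // 80779853376

// BigInt keeps results exact even once products exceed the 2^53 limit.
const bigProduct = 123456n * 654321n; // 80779853376n
```

Which rather underlines the joke: the borderline cases the prompt frets over are trivially exact in the very runtime it describes.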
uh
You run CanadianGirlfriendGPT, got it.
If LLM hallucinations ever become a non-issue, I doubt I’ll be needing to read a deeply nested, buzzword-laden lemmy post to first hear about it.
copilot assisted code
The article isn’t really about autocompleted code; nobody’s coming at you for telling the slop machine to convert a DTO to an HTML form using reactjs. It’s more about prominent CEO claims that their codebases are purely AI-generated at rates up to 30%, and how swengs might be obsolete by next Tuesday after dinner.
Ask chatgpt to explain it to you.
Seriously, don’t generate an array unless explicitly asked for it. Please.
Peak prompt engineering right there.
To get a bit meta for a minute, you don’t really need to.
The first time a substantial contribution to a serious issue in an important FOSS project is made by an LLM, no conditionals attached, the PR people of the company that trained it are going to make absolutely sure everyone and their fairy godmother knows about it.
Until then it’s probably ok to treat claims that chatbots can handle a significant bulk of non-boilerplate coding tasks in enterprise projects by themselves the same as claims of haunted houses: you don’t really need to debunk every separate witness testimony; it’s self-evident that a world where an afterlife freely intertwines with daily reality would be notably and extensively different from the one we are currently living in.
I think most people will ultimately associate chatbots with corporate overreach rather than with rank-and-file programmers. It’s not like decades of Microsoft shoving stuff down our collective throat made people think particularly less of programmers, or think about them at all.
Given the volatility of the space, I don’t think it could have been doing stuff much better; I doubt it’s getting out of alpha before the bubble bursts and things settle down a bit, if at all.
Automatic PR generation sounds like something that would need a prompt and a ten-line script rather than langchain, but it also seems both questionable and unnecessary.
If someone wants to know an LLM’s opinion on what the changes in a branch are meant to accomplish they should be encouraged to ask it themselves, no need to spam the repository.
I just read the github issue comment thread he links, what an entitled chode.
Love that the laughing face reactions to his AI slop laden replies stung so much he ended up posting through it on his blog.
The coda is top tier sneer:
Maybe it’s useful to know that Altman uses a knife that’s showy but incohesive and wrong for the job; he wastes huge amounts of money on olive oil that he uses recklessly; and he has an automated coffee machine that claims to save labour while doing the exact opposite because it can’t be trusted. His kitchen is a catalogue of inefficiency, incomprehension, and waste. If that’s any indication of how he runs the company, insolvency cannot be considered too unrealistic a threat.
Microsoft’s Visual Studio says it’s going to incorporate coding ‘agents’ as soon as maybe the next minor version. I can’t really see them buying up car factories or beating pokemon, but “agent” as an AI marketing term is definitely a part of the current hype cycle.