“Doesn’t work” doesn’t mean the AI literally produces no output; it means the technology has so many flaws that it’s fundamentally bad to be using.
And don’t worry, I’ve got sources.
LLMs still routinely hallucinate, and even implementations used by AI safety researchers wipe email inboxes without permission. They atrophy your brain the longer you use them, create both general and emotional dependency, and deskill you at your job. The content they produce is rated worse both by humans and by the AI models searching for trustworthy sources. And to top it all off: scaling laws are already failing to improve AI models enough to fix these problems, companies aren’t seeing returns, the economy has gained essentially nothing from AI investment, usage, and growth, and public perception among the people most affected by AI keeps getting worse, even as the people financially incentivized to keep building it insist it’s going to get better. All the while, datacenters accelerate global warming and LLMs keep killing people.
I don’t know about you, but I’d rather not support a technology that makes you fundamentally worse at most cognitive tasks, damages the planet, and burns money that could otherwise go to something more valuable, all while randomly killing mentally vulnerable people.
that’s odd i use it daily and it works fine
The doctors who used it daily said it worked fine, and it did. Then those doctors became 20% less capable at identifying tumors in their patients.
The Meta AI security researcher literally said, and I quote, “It’s been working well with my non-important email very well so far and gained my trust on email tasks” when asked why she’d given it access to her primary email. It subsequently started trashing her whole inbox.
Every participant in the cognitive debt study had the AI actually produce the results they were looking for, and every one of them became measurably less capable as a result.
And when a woman in South Korea killed two men using advice given to her by ChatGPT, it worked fine for her, didn’t it?
That’s not to say your use of AI makes you a murderer. Far from it. But we have well-documented evidence of LLMs simply making people dumber. You are not an exception to that, unless your brain biologically operates entirely differently from everyone else’s.
When you use neurons less, their connections weaken and fewer new connections form. When you offload work to something else, like an LLM, you stop training your brain to get better, and you let parts of it slowly die.
Using AI is like using a hydraulic robot to bench press for you. You’re going to move the weights, but your muscle mass ain’t growing.
The more you outsource the very act of thinking to a chatbot, the more your brain comes to rely on that chatbot to think as well as it used to. And when that chatbot regularly hallucinates faulty answers and logic, ignores best practices, implements solutions inefficiently, and gets things wrong, your brain is not improving as a result.
This doesn’t mean you should never use AI. I sometimes use it to automatically clean up the transcriptions of my voice notes, and all that does is save me the time of correcting the text I just spoke. It’s genuinely helpful, and doesn’t meaningfully deskill me in any way. But if I used it to try to do everything for me, not only would it make a ton of mistakes, but I’d be even less capable of fixing them.
But still, it does deskill you at that task, lest we forget. So if that were a meaningful task you wanted to stay adept at, you would lose that meaningful skill. AI consistently deskills us at everything we ask it to do instead of doing ourselves. Anything we are not doing, we are getting worse at doing.