“doesn’t work” doesn’t mean the AI literally does not produce any output or do anything, it means it has so many flaws it’s just a fundamentally bad technology to be using.
“that’s odd i use it daily and it works fine”
The doctors who used it daily said it worked fine, and it did. Then those doctors became 20% less capable at identifying tumors in their patients.
When asked why she’d given it access to her primary email, the Meta AI security researcher literally said, and I quote: “It’s been working well with my non-important email very well so far and gained my trust on email tasks.” It then started trashing her whole inbox.
All of the participants in the cognitive debt study had the AI actually produce the results they were looking for, yet every one of them came out of it less mentally capable.
And when a woman in South Korea killed two men using advice given to her by ChatGPT, it worked fine for her, didn’t it?
That’s not to say your use of AI makes you a murderer. Far from it. But we have well-documented evidence of LLMs simply making people dumber. You are not an exception to that, unless your brain biologically operates entirely differently from everyone else’s.
When you use neurons less, their connections grow weaker, and fewer new connections get made. When you offload work to something else, like an LLM, you stop training your brain to get better, and you let parts of it slowly die.
Using AI is like using a hydraulic robot to bench press for you. You’re going to move the weights, but your muscle mass ain’t growing.
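That use-it-or-lose-it dynamic can be sketched as a toy simulation. To be clear, this is an illustration with made-up rates, not neuroscience: skill grows with the share of the work you still do yourself, and atrophies in proportion to the share you hand off.

```python
# Toy model (illustrative only, invented parameters, not real neuroscience):
# skill grows with practice and decays on the portion of work you offload.
def simulate(days, offload_fraction, skill=1.0, growth=0.01, decay=0.005):
    """Return skill level after `days`, practicing only the share of
    work not handed off to the tool."""
    for _ in range(days):
        practice = 1.0 - offload_fraction
        skill += growth * practice                  # reps you still do
        skill -= decay * offload_fraction * skill   # atrophy on the rest
    return skill

# A year of doing the work yourself vs. a year of delegating 90% of it.
print(simulate(365, 0.0))   # keeps climbing
print(simulate(365, 0.9))   # withers below where you started
```

Under these (arbitrary) rates, the fully practiced skill keeps growing all year, while the mostly offloaded one decays to a fraction of its starting level. The exact numbers are meaningless; the shape of the curve is the point.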
The more you outsource the very function of thinking to a chatbot, the more your brain comes to rely on that chatbot to think as well as it used to. And when that chatbot regularly hallucinates faulty answers and logic, ignores best practices, implements solutions inefficiently, and gets things wrong, your brain is not improving as a result of any of it.
This doesn’t mean you should never use AI. I use it to automatically clean up the transcriptions of my voice notes sometimes, and all that does is save me time correcting the text I just spoke. It’s genuinely helpful, and doesn’t meaningfully deskill me in any way. But if I used it to try to do everything for me, not only would it make a ton of mistakes, but I’d be even less capable of fixing them.
But still, it does deskill you at that task, lest we forget. So if that were a meaningful task at which you wanted to stay adept, you would lose that meaningful skill. AI consistently deskills us at everything we ask it to do instead of doing ourselves. Anything we are not doing, we are getting worse at doing.