- cross-posted to:
- techtakes@awful.systems
This is a misleading headline. The “scaling” in the quote refers to the size of the text dataset the models are trained on. Since the vast majority of written text is already in the training library, the training library can’t be scaled up much further.
BUT that doesn’t mean AI isn’t going to be able to improve and expand its capabilities. Increasing the training library size is a ‘quick’ and ‘easy’ way to improve AI output, but it’s not the only way. At a bare minimum, there are a lot more potential ways that current AI tech can be applied than are currently commercially available, and those are very much already in development.
I get that many of us don’t like the idea of AI, but we don’t do ourselves any favors by falling for misinformation just because it says what we want to hear.
100%, the anti AI hype is as misinformed as the AI hype. We have so much work ahead of us to effectively utilize the current LLMs.
There are many companies right now ~~exploiting~~ employing contract workers to rate chatbot responses in order to improve the models. I have first-hand experience with this work, and let me tell you, it’s…a complete fucking shitshow. I can’t imagine they’re getting much good data from it, but they are absolutely throwing money at the problem hand over fist.
Wonder what will happen to all those data centers…
Well…
I mean, have they tried shoving it into every bit of JavaScript running on every website yet?
I’ll bet there’s some throughput to be wrung out of it then.