- cross-posted to:
- futurology@chat.maiion.com
James Cameron on AI: “I warned you guys in 1984 and you didn’t listen”
Yes, sure. I meant things like employment and the quality of output.
That applies to… literally every invention in the world. Cars, automatic doors, rulers, calculators, you name it…
With a crucial difference - the inventors of all of those knew how the invention worked. The inventors of current AIs do NOT know the actual mechanism by which they work. Hence, the output is unpredictable.
Lol could you provide a source where the people behind these LLMs say they don’t know how it works?
Did they program it with their eyes closed?
Yes, I can: example
As opposed to other technologies, nobody knows the internal structure. Input A does not necessarily produce output B.
Whether you like it or not is irrelevant.
“Whether you like it or not is irrelevant.”
That’s a very hostile take.
I just think it’s wild they wouldn’t know how it works when they’re the ones who created it. How do you program something that you don’t understand?! It’s crazy.
It is, sorry. It was a reaction to the downvotes. But at this point I’m a bit allergic to the “it’s the same as every other invention” argument. It’s not, precisely for this reason. It’s a bit like “the climate is always changing” - yes, but not within decades or centuries. These details are crucial.
Basically, with neural networks you program the way the model ingests data and the way it outputs data. Everything in between is a set of constantly updating statistical parameters. Developers can look at those parameters, but it’s extremely hard to map them back into human-readable content.
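To make that concrete, here’s a minimal sketch (pure NumPy, a toy network with made-up random weights, not any real model): the `forward` function below is the part a human actually writes, while the numbers inside `W1` and `W2` are what training would fill in.

```python
import numpy as np

rng = np.random.default_rng(0)

# The part developers actually program: the shapes and the data flow.
W1 = rng.normal(size=(4, 8))   # "learned" parameters: just arrays of floats
W2 = rng.normal(size=(8, 1))   # (random here; training would set them)

def forward(x):
    # Human-written: how data goes in and how it comes out.
    hidden = np.maximum(0, x @ W1)   # ReLU activation
    return hidden @ W2

x = rng.normal(size=(1, 4))
print(forward(x))   # the output is fully determined by W1 and W2...
print(W1)           # ...which you can inspect, but the raw numbers don't
                    # tell you *why* the network answers the way it does
```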
They program it to learn. They can tell you exactly how it learns, but not what it learned (there are some techniques that give small insights, but nothing close to the full picture).
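For example, one of the simplest of those insight techniques is a sensitivity probe: nudge each input slightly and watch how much the output moves. A hedged toy sketch, again using hypothetical random weights rather than a trained model:

```python
import numpy as np

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))

def forward(x):
    return np.maximum(0, x @ W1) @ W2

x = rng.normal(size=(1, 4))
base = forward(x)
eps = 1e-4
for i in range(x.shape[1]):
    bumped = x.copy()
    bumped[0, i] += eps
    sensitivity = (forward(bumped) - base) / eps   # finite-difference gradient
    print(f"feature {i}: sensitivity {sensitivity.item():+.3f}")
# This tells you *which* inputs the network reacts to, but not *why* --
# the reasoning is smeared across thousands (or billions) of weights.
```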
The problem is, how it behaves depends on how it was programmed and on what it learned during training. Since what it learned is a black box, we cannot explain its behaviour.
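A sketch of that point under toy assumptions: a one-weight model trained by gradient descent on made-up data. The source code never changes, yet the model’s behaviour does, because the behaviour lives in the learned number, not in the code.

```python
import numpy as np

rng = np.random.default_rng(2)
xs = rng.normal(size=100)
ys = 3.0 * xs + rng.normal(scale=0.1, size=100)   # hidden rule: y ≈ 3x

w = 0.0                                        # the learned state, initially empty
for _ in range(200):
    grad = np.mean(2 * (w * xs - ys) * xs)     # gradient of mean squared error
    w -= 0.05 * grad                           # gradient descent step

print(w)        # ≈ 3.0, but nothing in the source code says "3"
print(w * 2.0)  # the model's answer for input 2.0 came from the data, not the program
```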