RSS Bot@lemmy.bestiver.se to Hacker News@lemmy.bestiver.se · 1 month ago
AI chatbots are "Yes-Men" that reinforce bad relationship decisions, study finds (news.stanford.edu)
theunknownmuncher@lemmy.world · 1 month ago
It's not up to the user, but the provider that curates the dataset and trains the model.
danh2os@piefed.social · 1 month ago
I agree that the LLM dataset is in play here. What I'm saying is that we are the guide.
theunknownmuncher@lemmy.world · 1 month ago (edited)

> What I'm saying is that we are the guide.

The user has far less influence over how the model acts, though. No, we are not the guide.
danh2os@piefed.social · 1 month ago
A hammer's design is controlled by the manufacturer. But who's responsible for what gets built with it? The person swinging it.
theunknownmuncher@lemmy.world · 1 month ago
False equivalence. Hammers are not comparable to LLMs.