tracyspcy@lemmy.ml to ChatGPT@lemmy.ml · English · 2 years ago

Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds (fortune.com)

cross-posted to: hackernews@derp.foo, nlprog@lemmy.intai.tech, nev@lemmy.intai.tech, technology@lemmy.world
taladar@sh.itjust.works · English · 2 years ago

A system that has no idea whether what it is saying is true or false, or what true or false even mean, is not very consistent in answering things truthfully?
tracyspcy@lemmy.ml (OP) · English · 2 years ago (edited)

Wait for the next version, which will be trained on data that includes GPT-generated word salad.
intensely_human@lemm.ee · 2 years ago

No, that is not the thesis of this story. If I'm reading the headline correctly, the rate at which it answers correctly has shifted from one stable distribution to another.
Detroit: Become Human moment.