• FiniteBanjo@lemmy.today · 3 months ago

            LLM- and ML-generated translations are produced one token at a time. That’s why AI chatbots hallucinate so often: the model decides the next most likely token in the sequence is “No” when the correct answer would be “Yes”, and the rest of the response devolves into convincing nonsense. The machine has no capacity for critical thinking to discern correct from incorrect, or to judge whether its response actually fits the context.
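            To make that concrete, here’s a minimal sketch of greedy token-by-token decoding, using GPT-2 via the Hugging Face transformers library as a stand-in model (not whatever any particular product actually ships):

```python
# Minimal sketch of greedy autoregressive decoding, assuming the
# Hugging Face transformers library and GPT-2 as a stand-in model.
# Each token is picked one at a time; an early wrong pick ("No"
# instead of "Yes") conditions everything generated after it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Q: Is Paris the capital of France? A:"
ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits[0, -1]   # scores for the next token only
        next_id = torch.argmax(logits)      # greedy: most likely token wins
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

            The model never looks back to check whether the sequence as a whole is true; each step only asks “what token is most likely next?”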

            • stephen01king@lemmy.zip · 3 months ago

              Those are not examples, just claims about what you think will happen, based on what you think you understand about how LLMs work.

              Show me examples of what you mean. Just run some translations through their AI translator and show how often it produces inaccurate results. It doesn’t seem that hard to prove what you claimed.
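              For example, something like this would do it. A rough sketch only: as far as I know, Firefox’s translator is built on the Bergamot project’s Marian models, so an open OPUS-MT Marian model is just a stand-in for what actually ships:

```python
# Rough sketch for spot-checking machine translation quality,
# assuming the Hugging Face transformers library and an open
# OPUS-MT Marian model as a stand-in for the product under test.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-de-en"  # German -> English
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

sentences = [
    "Der Geist ist willig, aber das Fleisch ist schwach.",
    "Er hat den Vogel abgeschossen.",  # idiom; a literal rendering would be wrong
]
batch = tokenizer(sentences, return_tensors="pt", padding=True)
outputs = model.generate(**batch)
for src, out in zip(sentences, outputs):
    print(src, "->", tokenizer.decode(out, skip_special_tokens=True))
```

              If the idioms come back mangled, there’s your example; if they don’t, the claim needs better evidence.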

              • FiniteBanjo@lemmy.today · 3 months ago

                You want examples, but you never disclosed which product you’re asking about, and why should I give a damn in the first place? I shouldn’t have to produce an absence of evidence that it works in order to prove that it doesn’t.

                • stephen01king@lemmy.zip · 3 months ago

                  Bruh, you were criticising a specific product and claiming it provides wrong client-side translations. Why else would I be talking about any product other than the one you’re criticising?

                  And you’re the one making a claim, so of course you need to give a damn about proving it. It’s not anyone else’s responsibility to prove what you say.

                  Proving the translations make mistakes is as simple as providing a few examples. I wasn’t asking you to prove they never make a mistake, which would require showing zero incidence of a wrong translation. What I asked for is the exact opposite of an absence of evidence.

                  I can’t believe you’re using arguments you don’t even understand just to avoid proving your own claims. I’m starting to believe you have never even used Firefox’s AI translation and are just blindly claiming it provides wrong translations. What a waste of everyone’s time you’ve been.