• subnormal@lemmy.dbzer0.com · 18 hours ago

    I wasn’t clear. What I meant was: what sane things could a fascist military use AI for?

    “Reactionary” lmao. My friend, I use LLMs all the time. Just not the proprietary ones from companies that are in bed with fascists.

    • BJW@lemmus.org · 11 hours ago

      Your problem is clearly with the fascists, as it should be, and AI is getting caught in the crossfire of your ire. You just can’t see or admit it yet.

      Unless you live in a cave, which you obviously don’t since you’re here on the Internet sharing your wisdom with us, you are participating in business and activities that enrich the fascists. It’s just a fact of life when they own everything. There is no ethical consumption under capitalism.

      • subnormal@lemmy.dbzer0.com · 10 hours ago

        I have nothing against AI but everything against a certain AI company that is fully in bed with fascists.

        There is no ethical consumption under capitalism.

        Please do not use this slogan as an excuse not to seek out the least unethical option for your consumption.

        • BJW@lemmus.org · edited · 8 hours ago

          I have nothing against AI but everything against a certain AI company that is fully in bed with fascists.

          Are you talking about Google? Apple? Meta? Twitter? Microsoft? OpenAI?

          You can’t be talking about the one company that was banned by the fascist government for not complying with their demands, because a company fully in bed with fascists would not be banned for refusing to comply. Yet, it seems in your confusion that is exactly what you’re implying.

          Please do not use this slogan as an excuse not to seek out the least unethical option for your consumption.

          I don’t, and that would be Anthropic’s Claude. I don’t know about you, but I don’t have the hardware for a local LLM at the speed or proficiency they offer. Maybe you’re so fortunate, and are judging the choices of the less fortunate for not passing your purity test?

          • subnormal@lemmy.dbzer0.com · 8 hours ago

            You must be American. I am talking about Kimi, Mistral, GLM, Liquid, Minimax, Arcee, Qwen, Deepseek, Xiaomi.

            And you are of course allowed to use cloud inference if you don’t have the hardware to run locally. Just choose an inference service that is not in bed with fascists. There are plenty. Good luck and have a nice day.

            • BJW@lemmus.org · 7 hours ago

              Sadly.

              I only recognize the last two of those. Which fascists are you referring to? There are too many in the world right now to keep track of. Since we’re talking about Claude being the least evil option, I’m not sure how those other companies tie into the conversation.

              Which would you recommend? I’m curious where your recommendations fall in the rankings of usefulness.

              • subnormal@lemmy.dbzer0.com · 4 hours ago

                The companies I listed, as far as I can tell, are not supplying or trying to supply any fascist military. Some (like Alibaba, which makes Qwen) are obviously unethical in other ways. But all of these companies release open-weight models, so if they are unethical you can always avoid contributing to them by self-hosting or going to a third-party inference provider.
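                Swapping between self-hosting and a third-party provider is practical because open-weight models are almost always served behind an OpenAI-compatible chat-completions endpoint (llama.cpp's server and Ollama both expose one locally). A minimal stdlib-only sketch, assuming a hypothetical local server URL and model name:

```python
import json
import urllib.request

# Assumed values -- adjust to your own server and whatever model you pulled.
# llama.cpp's server and Ollama both expose /v1/chat/completions locally.
BASE_URL = "http://localhost:8080/v1/chat/completions"
MODEL = "qwen2.5-7b-instruct"  # hypothetical model name

def build_payload(prompt: str, model: str = MODEL) -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt: str) -> str:
    """POST the prompt to the configured server and return the reply text."""
    req = urllib.request.Request(
        BASE_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("Hello from a self-hosted model"))
```

                Because the request shape is the same everywhere, switching providers is just a matter of changing `BASE_URL` and the model name.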

                Kimi is my favorite, but going by your leaderboard website, you could try GLM.