• mindbleach@sh.itjust.works · 3 hours ago

      Oh no, statistical modeling about published works allows weird new shit. We must ban this entire class of software because we all care so deeply about copyright.

  • Dharma Curious (he/him)@slrpnk.net · 1 day ago

    Okay, help me out here. I’ve heard people talking about open source ai models, and it always seems like open source needs big ass air quotes. Are there any open source models that are actually open source in the way people generally think of the term?

    • mindbleach@sh.itjust.works · 4 hours ago

      Do such models exist? Yes. Are they the big-boy models anyone’s really using? Ehhh not really.

      There are in-use models that are “here’s a thing, do whatever, good luck,” which is at least as open-source as any MIT project. (Permissive licenses being “here is the code, have a nice life.”) Very few models are properly reproducible, because even when their training data includes DVDs you probably own, it also includes a ton of random internet pages that maybe don’t exist anymore. The push for ever-larger models, trained on as much stuff as possible, makes the use of “open source” a regrettable or even deceptive choice. But quite a few are unrestricted for whatever weird shit you want to get up to.

    • Radiant_sir_radiant@beehaw.org · 17 hours ago (edited)

      The closest one to true FOSS that I’m aware of is Apertus. Not sure whether it’s feasible to build anything meaningful from scratch without your own GPU farm though.

    • dovah@lemmy.world · 1 day ago

      Here’s a list of open source models: open-llms

      Models are only open source if the weights are freely available along with the code used to generate them.

      • Dharma Curious (he/him)@slrpnk.net · 1 day ago

        I really appreciate that! I was asking more for the information of it, I doubt I could do anything with the link. Lol. I don’t understand thing 1 about this stuff. I don’t even know wtf a weight is in this context lol

        • edinbruh@feddit.it · 18 hours ago (edited)

           In this context “weight” is a mathematical term. Have you ever heard the term “weighted average”? Basically it means calculating an average where some elements are more influential/important than others; the number that indicates the importance of an element is called a weight.
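For example, a weighted average in a few lines of Python (the numbers here are made up purely for illustration):

```python
# Three review scores, where the first reviewer's opinion counts the most.
scores = [4.0, 2.0, 5.0]
weights = [0.5, 0.3, 0.2]

plain = sum(scores) / len(scores)
weighted = sum(s * w for s, w in zip(scores, weights)) / sum(weights)

print(plain)     # about 3.67
print(weighted)  # 4.0*0.5 + 2.0*0.3 + 5.0*0.2 = 3.6
```

Change the weights and the same scores give a different average; that knob is exactly what training turns.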

           One oversimplification of how any neural network works could be this:

          • The NN receives some values in input
          • The NN calculates many weighted averages from those values. Each average uses a different list of weights.
           • The NN then does a simple special operation on each average. It’s not important what the operation actually is, but it must be there: without it, every NN would collapse into a single layer. It can be anything except plain sums and multiplications (i.e., it has to be nonlinear).
          • The modified averages are the input values for the next layer.
          • Each layer has different lists of weights.
          • In reality this is all done using some mathematical and computational tricks, but the basic idea is the same.

          Training an AI means finding the weights that give the best results, and thus, for an AI to be open-source, we need both the weights and the training code that generated them.
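The steps above can be sketched as a toy two-layer network. The weights here are hand-picked, not trained, and the sizes are arbitrary; `tanh` stands in for the “simple special operation”:

```python
import math

def layer(inputs, weight_rows):
    """One layer: a weighted sum of the inputs per output, then a nonlinearity."""
    outputs = []
    for weights in weight_rows:  # one list of weights per output value
        total = sum(w * x for w, x in zip(weights, inputs))
        outputs.append(math.tanh(total))  # the "special operation" (nonlinearity)
    return outputs

# Two layers; each layer has its own lists of weights.
hidden = layer([1.0, 0.5], [[0.2, -0.4], [0.7, 0.1]])
result = layer(hidden, [[0.5, 0.5]])
print(result)
```

Releasing the weights means publishing those number lists for every layer; releasing the training code means publishing the program that searched for them.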

          Personally, I feel that we should also have the original training data itself to call it open source, not just weights and code.

          • MrMcGasion@lemmy.world · 18 hours ago

            Absolutely agree that to be called open source the training data should also be open. It would also pretty much mean that true open source models would be ethically trained.

          • Dharma Curious (he/him)@slrpnk.net · 17 hours ago

            Thank you!

            And yeah, it really does seem like the training data should be open. Like, not even just to be considered open source, just to be allowed to do this at all, ethically, the training data should be known, at least to some degree. Like, there’s so much shit out there, knowing what they trained on would help make some kind of ethical choice in using it

      • veroxii@aussie.zone · 1 day ago

        And as I understand it these Chinese “open source” models are only the weights? No way to “compile” your own version.

        • dovah@lemmy.world · 16 hours ago

          I’m not sure what you mean about Chinese models, but you can find the code used for training. Open Llama, for example, gives you the weights, the data, and the code used for training. You can do everything yourself, if you wanted to. The hardest part is getting the appropriate hardware.

    • Jankatarch@lemmy.world · 18 hours ago

      I mean, you could give the randomizer seed along with the code for training. I guess that would count, kinda?
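That is the core of reproducibility: if the code, the data, and every random seed are published, the same run yields the same weights. A toy sketch, where the “training loop” is a made-up stand-in for the real thing:

```python
import random

def train_toy_weights(seed):
    rng = random.Random(seed)  # fixed seed -> deterministic random numbers
    weights = [rng.uniform(-1, 1) for _ in range(3)]
    for _ in range(100):       # stand-in for actual training steps
        i = rng.randrange(3)
        weights[i] += 0.01 * rng.uniform(-1, 1)
    return weights

# Same seed and same code reproduce exactly the same weights.
assert train_toy_weights(42) == train_toy_weights(42)
```

In a real model you'd also need the training data itself, which is the part that's usually missing.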

  • mindbleach@sh.itjust.works · 1 day ago

    Apparently Hunyuan just released some big-ass video model, and it’s air-quotes “open source” with a bunch of finger-wag restrictions. One of them is ‘you may not train your thing on our thing.’

    Yeah I’m sure the companies that shrug off copyright concerns for Disney movies give a shit about Tencent’s pre-laundered intellectual property.