Office space meme:

“If y’all could stop calling an LLM ‘open source’ just because they published the weights… that would be great.”

  • bleistift2

    Uuuuh… why?

    Do you only accept open source code if you can see every key press every developer made?

    • Prunebutt@slrpnk.netOP

      Open source means you can recreate the binaries yourself. Neither Facebook nor the devs of DeepSeek published which training data they used, nor their training algorithm.

      • magic_lobster_party@fedia.io

        They published the source code needed to run the model. It’s open source in the sense that anyone can download the model, run it locally, and further build on it.
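
        Roughly, “run it locally” looks something like this (a minimal sketch; the model id and prompt below are just an illustration, not something stated in the release):

        ```python
        # Minimal sketch of running released open weights locally with Hugging Face
        # transformers. The model id below is an assumed example checkpoint.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # illustrative
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(model_id)

        inputs = tokenizer("What does 'open source' mean?", return_tensors="pt")
        outputs = model.generate(**inputs, max_new_tokens=64)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))
        ```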

        Training from scratch costs millions.

        • Zikeji@programming.dev

          Open source isn’t really applicable to LLMs IMO.

          There are open weights (the model), the availability of training data, and other nuances.

          They actually went a step further and provided a very thorough breakdown of the training process, which does mean others could similarly train models from scratch with their own training data. HuggingFace seems to be doing just that as well. https://huggingface.co/blog/open-r1

          Edit: see the comment below by BakedCatboy for a more in-depth explanation and a correction of a misconception I made.

          • BakedCatboy@lemmy.ml

            It’s worth noting that OpenR1 have themselves said that DeepSeek didn’t release any code for training the models, nor any of the crucial hyperparameters used. So even if you did have suitable training data, you wouldn’t be able to replicate it without re-discovering what they did.

            OSI specifically makes a carve-out that allows models to be considered “open source” under their open source AI definition without providing the training data, so when it comes to AI, open source is really about providing the code that kicks off training, checkpoints if used, and details about training data curation so that a comparable dataset can be compiled for replicating the results.

            • Zikeji@programming.dev

              Thanks for the correction and clarification! I just assumed from the open-r1 post that they gave everything aside from the training data.

        • Fushuan [he/him]@lemm.ee

          The runner is open source; the model is not.

          The service uses both, so calling the service open source gives a false impression to the 99.99% of users who don’t know better.

          • magic_lobster_party@fedia.io

            As far as I know, the model is open, even for commercial use. This is in stark contrast with Meta’s models, which have (or had?) a bespoke community license restricting commercial use.

            Or is there anything that can’t be done with the DeepSeek model that I’m unaware of?

            • Fushuan [he/him]@lemm.ee

              The model is open, it’s not open source!

              How is it so hard to understand? The complete source of the model is not open. It’s not a hard concept.

              Sorry if I’m coming off as rude, but I’m getting increasingly frustrated at having to explain a simple combination of two words that is pretty self-explanatory.

              • magic_lobster_party@fedia.io

                Ok, I understand now why people are upset. There’s a disagreement about terminology.

                The source code for the model is open source. It’s defined in PyTorch. The source code for it is available with the MIT license. Anyone can download it and do whatever they want with it.

                The weights for the model are open, but they’re not open source, as they’re not source code (or an executable binary, for that matter). No one is arguing that the model weights are open source, but there seems to be an argument over whether the model itself is open source.

                And even if they provided the source code for the training script (and all its data), it’s unlikely anyone would reproduce the same model weights due to the randomness involved. Training model weights is not like compiling an executable, because you’ll get different results every time.

                • Fushuan [he/him]@lemm.ee

                  Hey, I have trained several models in PyTorch, Darknet, and TensorFlow.

                  With the same dataset and the same training parameters, the same final iteration of training actually does return the same weights. There’s no randomness unless they specifically add random layers, and that’s not really a good idea with RNNs (at least it wasn’t when I was working with them). In any case, the weights should converge to a very similar point even if randomness is introduced, or else the RNN is pretty much worthless.
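
                  To be concrete, this is the kind of seeding I mean; a minimal PyTorch sketch (whether a full LLM training run stays bit-identical also depends on hardware and kernel choices):

                  ```python
                  # Minimal sketch of the usual reproducibility knobs in PyTorch.
                  import random

                  import numpy as np
                  import torch

                  def set_deterministic(seed: int = 42) -> None:
                      random.seed(seed)
                      np.random.seed(seed)
                      torch.manual_seed(seed)  # seeds CPU and CUDA RNGs
                      torch.use_deterministic_algorithms(True)  # error on nondeterministic ops
                      torch.backends.cudnn.benchmark = False  # avoid autotuned kernel selection

                  set_deterministic()
                  ```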

        • Prunebutt@slrpnk.netOP

          They published the source code needed to run the model.

          Yeah, but not to train it

          anyone can download the model, run it locally, and further build on it.

          Yeah, it’s about as open source as binary blobs.

          Training from scratch costs millions.

          So what? You can still glean something if you know the dataset on which the model has been trained.

          If software is hard to compile, can you keep the source code closed and still call software “open source”?

          • magic_lobster_party@fedia.io

            I agree the bad part is that they didn’t provide the script to train the model from scratch.

            Yeah, it’s about as open source as binary blobs.

            This is a great starting point for further improvements of the model. Most AI research is done with pretrained weights used as a basis; few train models completely from scratch. The model is built with Torch, so anyone should be able to fine-tune it on custom datasets.
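
            As a rough sketch of what “fine-tune it on custom datasets” can look like (the model id and toy data here are illustrative assumptions, not details from the release):

            ```python
            # Minimal causal-LM fine-tuning sketch on top of released open weights.
            import torch
            from transformers import AutoModelForCausalLM, AutoTokenizer

            model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # illustrative checkpoint
            tokenizer = AutoTokenizer.from_pretrained(model_id)
            model = AutoModelForCausalLM.from_pretrained(model_id)

            texts = ["Example document one.", "Example document two."]  # placeholder custom data
            optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

            model.train()
            for text in texts:
                batch = tokenizer(text, return_tensors="pt")
                # Standard causal-LM objective: the inputs double as the labels.
                loss = model(**batch, labels=batch["input_ids"]).loss
                loss.backward()
                optimizer.step()
                optimizer.zero_grad()
            ```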

        • serenissi@lemmy.world

          A software analogy:

          Someone designs a compiler and makes it open source. They make an open runtime for it, ‘obtain’ some source code with an unclear license, compile it with the compiler, and release the compiled bytecode, which can run with the runtime on a free OS. Do you call the program open source? It is definitely more open than something that requires a proprietary, inside-use-only compiler and a closed runtime, where sometimes you can’t even access the binary because it runs on their servers. It depends on perspective.

          ps: compiling takes ages and costs millions in hardware.

          edit: typo

          • magic_lobster_party@fedia.io

            I think a more appropriate analogy is if you make an open source game. For the game you have made textures, because what is a game without textured surfaces? You include the binary JPEG images along with the source code.

            You’ve made the textures with Photoshop, which is a closed source application. The textures also feature elements of stock photos. You don’t provide the original stock photos.

            Anyone playing the game is free to replace the textures with their own. The game will have a different feel, but it’s still a playable game. Anyone is also free to modify the existing textures.

            Would you consider this game closed source?

            • Nonononoki@lemmy.world

              Would an open-source Windows installer make Windows open source? After all, you can replace its .dll files and modify the registry. I guess PrismLauncher also makes Minecraft open source; you can replace the textures there as well.

              • magic_lobster_party@fedia.io

                If the installer is open source, then that part is open source. It’s maybe not as useful, because it relies on proprietary software to work. On the other hand, so do emulators like Dolphin.

                Windows is not open source just because it’s possible to change dll files. Minecraft is not open source just because it’s possible to modify its textures.

                Model weights aren’t the equivalent of a proprietary DLL or a GameCube ROM. Anyone is free to modify and distribute the model weights however they like - and people are already doing it. Soon enough we will see variations of the model without the Chinese censorship, for example.

            • Ageroth@reddthat.com

              I’m going to take your point to the extreme.

              It’s only open source if the camera that took the picture that is used in the stock image that was used to create the texture is open source.
              You used a fully mechanical camera and chemical flash powder? Better publish that design patent and include the chemistry of the flash powder!

        • Oisteink@feddit.nl

          And looking at mobile games like Tacticus, there are loads of people with millions to burn on hobbies

      • ricecake@sh.itjust.works

        Eh, it seems like it fits to me. We casually refer to all manner of data as “open source” even if we lack the ability to specifically recreate it. It might be technically more accurate to say “open data” but we usually don’t, so I can’t be too mad at these folks for also not.

        There are huge swaths of USGS data shared as open data that I absolutely cannot ever replicate.

        If we’re specifically saying that open source means you can recreate the binaries, then data is fundamentally not able to be open source, since it distinctly lacks any form of executable content.

        • Prunebutt@slrpnk.netOP

          If we’re specifically saying that open source means you can recreate the binaries, then data is fundamentally not able to be open source

          lol, are you claiming data isn’t reproducible? XD

          • ricecake@sh.itjust.works

            … Did you not read the literal next phrase in the sentence?

            since it distinctly lacks any form of executable content.

            Your definition of open source specified reproducible binaries. From context it’s clear that I took issue with your definition, not with the notion of reproducing data.

            • Prunebutt@slrpnk.netOP

              Ok, then the definition I gave was too narrow when I said “reproducible binaries”. If data claims to be “open source”, then it needs to supply information on how to reproduce it.

              Open data has other criteria, I’m sure.

              • ricecake@sh.itjust.works

                the definition I gave was too narrow

                Yes, that’s what I said when you opted to take the first half of a sentence out of context.

                https://en.wikipedia.org/wiki/Open_data

                The common usage of open data is just that it’s freely shareable.
                Like I said in my initial comment, people frequently use “open source” to refer to it, but it’s such a pervasive error that it’s hardly worth getting too caught up on, and it practically doesn’t count as an error anymore.

                Some open data can’t be reproduced by anyone who has access to the data.

                • Prunebutt@slrpnk.netOP

                  I was specifically addressing the use of the phrase “open source”. And the term “open data” doesn’t apply either, since it’s not a dataset that’s distributed, but rather the weights of an LLM with data baked into them. That’s neither open source nor open data.

      • kabi@lemm.ee

        Dude, the CPU instructions are right there, of course it’s open source.

        • Fushuan [he/him]@lemm.ee

          The training data is NOT right there. If I can’t reproduce the results with the given data, the model is NOT open source.

      • Pennomi@lemmy.world

        No, but I do call a CC-licensed PNG file open source even if the author didn’t share the original layered Photoshop file.

        Model weights are data, not code.

        • breakingcups@lemmy.world

          You’d be wrong. Open source has a commonly accepted definition, and a CC-licensed PNG does not fall under it. It’s copyleft, yes, but not open source.

          I do agree that model weights are data and can be given a license, including CC0. There might be some argument about how one can assign a license to weights derived from copyrighted works, but I won’t get into that right now. I wouldn’t call even the most liberally licensed model weights open-source though.

    • BakedCatboy@lemmy.ml

      It really comes down to this part of the “Open Source” definition:

      The source code [released] must be the preferred form in which a programmer would modify the program

      A compiled binary is not the format in which a programmer would prefer to modify the program - it’s much preferred to have the text file which you can edit in a text editor. Just because it’s possible to reverse engineer the binary and make changes by patching bytes doesn’t make it count. Any programmer would much rather have the source file instead.

      Similarly, the released weights of an AI model are not easy to modify, and are not the “preferred format” that the internal programmers use to make changes to the AI model. They typically are making changes to the code that does the training and making changes to the training dataset. So for the purpose of calling an AI “open source”, the training code and data used to produce the weights are considered the “preferred format”, and that is what needs to be released for it to really be open source. Internal engineers also typically use training checkpoints, so that they can roll back the model and redo some of the later training steps without redoing all training from the beginning - this is also considered part of the preferred format if it’s used.
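
      For what a training checkpoint means concretely, here’s a minimal plain-PyTorch sketch (the names are illustrative; real LLM pipelines shard this state across many machines):

      ```python
      # Minimal sketch of saving/restoring a training checkpoint so later training
      # steps can be redone without starting over from the beginning.
      import torch

      def save_checkpoint(model, optimizer, step, path="checkpoint.pt"):
          torch.save(
              {"model": model.state_dict(), "optimizer": optimizer.state_dict(), "step": step},
              path,
          )

      def load_checkpoint(model, optimizer, path="checkpoint.pt"):
          state = torch.load(path)
          model.load_state_dict(state["model"])
          optimizer.load_state_dict(state["optimizer"])
          return state["step"]  # resume from this step instead of from scratch
      ```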

      OpenR1, which is attempting to recreate R1, notes: “No training code was released by DeepSeek, so it is unknown which hyperparameters work best and how they differ across different model families and scales.”

      I would call “open weights” models actually just “self hostable” models instead of open source.

      • bleistift2

        Thank you for the explanation. I didn’t know about the ‘preferred format’ definition or how AI models are changed at all.

        • General_Effort@lemmy.world

          It’s a lie. The preferred format is the (pre-)trained weights. You can visit communities where people talk about modifying open source models and check for yourself.

          • BakedCatboy@lemmy.ml

            That seems kind of like pointing to reverse-engineering communities and saying that binaries are the preferred format because of how much they can do. Sure, you can modify finished models a lot, but what you can do with just pre-trained weights versus being able to replicate the final training or change training parameters is an entirely different beast.

            There’s a reason why the OSI stipulates that the code and parameters used to train are considered part of the “source” that should be released in order to count as an open source model.

            You’re free to disagree with me and the OSI though, it’s not like there’s 1 true authority on what open source means. If a game that is highly modifiable and moddable despite the source code not being available counts as open source to you because there are entire communities successfully modding it, then all the more power to you.

      • plumbercraic@lemmy.sdf.org

        Thank you for taking the time to write this. Making the results reproducible and possible to improve on is important.

      • barkingspiders@infosec.pub

        This is exactly it: open source is not just the availability of the machine instructions; it’s also the ability to recreate them. Anything less is incomplete.

        It strikes me as a variation on the “free as in beer versus free as in speech” line that gets thrown around a lot. These weights allow you to use the model for free, and you are free to modify the existing weights, but being unable to re-create the original means it falls short of being truly open source. It is free as in beer, but that’s it.