• wjs018@piefed.social · 18 hours ago

    The theory the lead maintainer had (he's an actual software developer, I just dabble) is that it might be a type of reinforcement learning:

    • Get your LLM to create what it thinks are valid bug reports/issues
    • Monitor the outcome of each issue (closed immediately, discussion, eventual pull request)
    • Use those outcomes to score how “good” or “bad” each generated issue was
    • Feed that score back into the model to push it toward creating more “good” issues

    If this is what’s happening, then it’s essentially offloading your LLM’s reinforcement-learning scoring to open source maintainers. A rough sketch of what that loop might look like is below.
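
    A minimal sketch of that loop in Python, purely illustrative: the outcome labels, reward values, and every function and class name below are my assumptions, not anything observed from the actual campaign. It just shows how maintainer reactions could be converted into a reward signal for RLHF-style fine-tuning.

    ```python
    # Hypothetical sketch of the feedback loop described above. Every name,
    # label, and reward value here is an assumption for illustration only.

    from dataclasses import dataclass

    # Map an observable maintainer reaction to a scalar reward (values guessed).
    OUTCOME_REWARD = {
        "closed_immediately": -1.0,  # dismissed as noise
        "discussion": 0.5,           # engaged with, so it looked plausible
        "pull_request": 1.0,         # treated as a real, actionable bug
    }

    @dataclass
    class GeneratedIssue:
        prompt: str   # instruction given to the model
        report: str   # bug report the model produced
        outcome: str  # what the maintainers did with it

    def score(issue: GeneratedIssue) -> float:
        """Turn a maintainer's reaction into a reward signal."""
        return OUTCOME_REWARD.get(issue.outcome, 0.0)

    def build_training_batch(issues: list[GeneratedIssue]) -> list[tuple[str, str, float]]:
        """Collect (prompt, completion, reward) triples that an RLHF-style
        fine-tuning step (e.g. PPO or reward-weighted updates) could consume."""
        return [(i.prompt, i.report, score(i)) for i in issues]

    if __name__ == "__main__":
        batch = build_training_batch([
            GeneratedIssue("file a bug against project X", "Crash in foo()", "closed_immediately"),
            GeneratedIssue("file a bug against project X", "Leak in bar()", "discussion"),
        ])
        for prompt, report, reward in batch:
            print(f"reward={reward:+.1f}  {report}")
    ```

    The ugly part, if the theory holds, is that the reward column comes from maintainers doing free triage on fake bugs.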

    • HubertManne@piefed.social · 18 hours ago

      That's wild. I don't have much hope for LLMs if this is how they're being trained, and I wouldn't be surprised given how poorly they work. Too much quantity over quality in training.