• cyd@lemmy.world · 1 year ago

    Strange that they don’t just use an open-weights model; several now surpass GPT-3.5, which is probably good enough for what they need.

    • FaceDeer@kbin.social · 1 year ago

      Might be that they started training before those open models were available. Or they were just lazy and OpenAI’s API was easier.

        • cyd@lemmy.world · 1 year ago

        Mistral 7B and DeepSeek’s models are two open-weight options that surpass GPT-3.5, though not GPT-4, on several measures.

      • 4onen@lemmy.world · 11 months ago

        Mixtral 8x7B, just out. It codes better than ChatGPT in the few prompts I’ve tried so far, and I can run it at 2 to 3 tokens per second on my GPU-less laptop.
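        (For anyone wanting to try the same thing: a CPU-only run like this is typically done with llama.cpp and a quantized GGUF build of the model. The sketch below assumes you have already compiled llama.cpp and downloaded a quantized Mixtral file; the exact filename and thread count are illustrative, not from the comment.)

        ```shell
        # Hypothetical CPU-only Mixtral run with llama.cpp's `main` binary.
        # -m  path to a quantized GGUF model file (filename is an example)
        # -p  the prompt to complete
        # -n  maximum number of tokens to generate
        # -t  CPU threads to use (tune to your core count)
        ./main -m mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf \
               -p "Write a FizzBuzz function in C." \
               -n 256 -t 8
        ```

        Lower-bit quantizations (e.g. Q4) trade some quality for the smaller memory footprint that makes a laptop run like this feasible at all.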

  • AutoTL;DR@lemmings.world (bot) · 1 year ago

    This is the best summary I could come up with:
    TikTok’s entrancing “For You” feed made its parent company, ByteDance, an AI leader on the world stage.

    But that same company is now so behind in the generative AI race that it has been secretly using OpenAI’s technology to develop its own competing large language model, or LLM.

    This practice is generally considered a faux pas in the AI world.

    It’s also in direct violation of OpenAI’s terms of service, which state that its model output can’t be used “to develop any artificial intelligence models that compete with our products and services.” Microsoft, through which ByteDance buys its OpenAI access, has the same policy.

    Nevertheless, internal ByteDance documents shared with me confirm that the company has relied on the OpenAI API to develop its foundational LLM, codenamed Project Seed, during nearly every phase of development, including training and evaluating the model.

    Employees involved are well aware of the implications; I’ve seen conversations on Lark, ByteDance’s internal communication platform for employees, about how to “whitewash” the evidence through “data desensitization.” The misuse is so rampant that Project Seed employees regularly hit their max allowance for API access.
    The original article contains 187 words, the summary contains 187 words. Saved 0%. I’m a bot and I’m open source!