Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word predictors. Also not sure if this graph is the right way to visualize it.

  • webghost0101
    2 months ago

    This is true if you're describing a pure LLM, like GPT-3.

    However, systems like Claude, GPT-4o, and o1 are far more than a single LLM: they are a blend of tailored LLMs, machine learning components, and some old-fashioned code to weave it all together.

    OP does ask about “modern LLMs”, so technically you are right, but I believe they meant the more advanced “products”.

    Though I would not be able to actually answer OP's question; AI is hard to compare directly with a human.

    In most ways it's embarrassingly stupid; in others it has already surpassed us.

    • fartsparkles@sh.itjust.works
      2 months ago

      None of which is intelligence, and all of which is geared towards predicting the next token.

      All of these models rely entirely on data and structure for inference and prediction. They appear intelligent, but they are not.
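To make “predicting the next token” concrete, here is a toy sketch: a frequency-based bigram model, vastly simpler than a real transformer, with all names and the training corpus made up for illustration. The point is the shared interface: given the context, emit the most likely next token, then repeat.

```python
from collections import Counter, defaultdict

def train(corpus: str):
    """Count, for each token, which tokens follow it and how often."""
    follows = defaultdict(Counter)
    tokens = corpus.split()
    for cur, nxt in zip(tokens, tokens[1:]):
        follows[cur][nxt] += 1
    return follows

def generate(follows, start: str, length: int = 5) -> str:
    """Greedy decoding: repeatedly append the most frequent successor."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # no known successor; stop generating
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

model = train("the cat sat on the mat the cat ran on the mat")
print(generate(model, "sat", 2))  # → sat on the
```

A real LLM replaces the frequency table with a neural network conditioned on the whole context window, but the generation loop, predict one token, append it, predict again, is the same shape.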

      • webghost0101
        2 months ago

        How is good old-fashioned code comparing outputs against a database of factual knowledge “predicting the next token” to you? Or reinforcement learning and token rewards baked into the models?

        I can tell you have not actually worked with professional AI systems or read the research papers.

        Yes, none of it is “intelligent”, but I would counter that neither are human beings; we don't even know how to define intelligence.
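For illustration, the “good old-fashioned code” layer described above could look something like this minimal, entirely hypothetical sketch: a generated claim is checked against a table of known facts before it reaches the user. The fact keys, values, and function names here are all made up; real products are far more elaborate.

```python
# Hypothetical knowledge base the post-processing layer can consult.
KNOWN_FACTS = {
    "boiling_point_water_c": 100,
    "planets_in_solar_system": 8,
}

def verify_claim(key: str, claimed_value) -> bool:
    """Return True if the claim matches the knowledge base, or is unknown."""
    if key not in KNOWN_FACTS:
        return True  # can't refute what we don't know
    return KNOWN_FACTS[key] == claimed_value

print(verify_claim("planets_in_solar_system", 9))  # → False (claim refuted)
```

None of this lookup logic is token prediction; it is ordinary deterministic code wrapped around the generative model.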

    • justOnePersistentKbinPlease@fedia.io
      2 months ago

      No, unfortunately you are wrong.

      GPT-4 is a better version of GPT-3.

      The brand-new one that is allegedly “unhackable” just has a role hierarchy providing rules, and that hasn't been fully tested in the wild yet.

      • webghost0101
        2 months ago

        First, did you even read the research papers?

        Secondly, no model out there is actually immune to jailbreaking, lol. Where did that claim come from?

        GPT-4 is just an LLM, indeed a better version of GPT-3.

        GPT-4o and o1 (possibly Claude Sonnet as well) rely on the generative capacities of the GPT-4 model, but there is a lot more going on under the hood than simply “generate the next token”.

        We all agree that a pure text predictor is not at all intelligent.

        The discussion at hand is whether the current frontier of AI has moved the needle up. I would still call it pretty dumb, but move the needle it did, somewhere around (x=2, y=0.5) if I have to use the meme. Placing it at (0,0) just means people aren't paying enough attention to notice that these aren't just LLMs anymore. That's their right, but I'd prefer people stopped joining the discussion so uninformed.