• CeeBee@lemmy.world · 9 months ago

      LLMs as AI is just a marketing term. there’s nothing “intelligent” about “AI”

      Yes there is. You just mean it doesn’t have “high” intelligence. Or maybe you mean to say that there’s nothing sentient or sapient about LLMs.

      Some aspects of intelligence are:

      • Planning
      • Creativity
      • Use of tools
      • Problem solving
      • Pattern recognition
      • Analysis

      LLMs definitely hit basically all of these points.

      Most people have been told that LLMs “simply” provide a result by predicting the most likely next word, but that’s a completely reductionist explanation and isn’t the whole picture.
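
      To make the “it just predicts the next word” picture concrete, here is a minimal sketch of that loop using the Hugging Face transformers library; the model choice (gpt2) and greedy decoding are illustrative assumptions, not how any particular commercial LLM is deployed.

```python
# Minimal sketch (assumes: pip install transformers torch) of the
# "predict the next token, append it, repeat" loop described above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")            # small example model
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The capital of France is", return_tensors="pt").input_ids
for _ in range(10):                                    # generate 10 tokens greedily
    with torch.no_grad():
        logits = model(ids).logits                     # scores for the next token at each position
    next_id = logits[0, -1].argmax()                   # pick the single most likely continuation
    ids = torch.cat([ids, next_id.view(1, 1)], dim=-1) # append it and feed the longer prompt back in

print(tok.decode(ids[0]))
```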

      Edit: yes I did leave out things like “understanding”, “abstract thinking”, and “innovation”.

      • SkybreakerEngineer@lemmy.world · 9 months ago

        Other than maybe pattern recognition, they literally have no mechanism to do any of those things. People say that it recursively spits out the next word, because that is literally how it works on a coding level. It’s called an LLM for a reason.

        • CeeBee@lemmy.world · 9 months ago

          they literally have no mechanism to do any of those things.

          What mechanism does it have for pattern recognition?

          that is literally how it works on a coding level.

          Neural networks aren’t “coded”.

          It’s called an LLM for a reason.

          That doesn’t mean what you think it does. Another word for language is communication. So you could just as easily call it a Large Communication Model.

          Neural networks have hundreds of thousands (at the minimum) of interconnected neurons. Llama-2 has 70 billion parameters. The newly released Grok has over 300 billion. And though we don’t have official numbers, GPT-4 is said to be close to a trillion.

          The interesting thing is that when you have neural networks of that size and feed large amounts of data into them, emergent properties start to show up. More than just “predicting the next word”, they start to develop a relational understanding of certain words that you wouldn’t expect. It’s been shown, for example, that LLMs understand that Miami and Houston are closer together than New York and Paris.

          Those kinds of things aren’t programmed, they are emergent from the dataset.
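
          A rough way to see that kind of emergent relational structure for yourself is to compare embedding similarities; this is only a sketch, and the specific model used here (a small sentence-transformers model) is an assumption rather than anything tested in this thread.

```python
# Sketch: do embeddings place Miami/Houston closer than New York/Paris?
# Assumes: pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small model, runs locally

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

emb = {city: model.encode(city) for city in ["Miami", "Houston", "New York", "Paris"]}

print("Miami vs Houston:  ", cosine(emb["Miami"], emb["Houston"]))
print("New York vs Paris: ", cosine(emb["New York"], emb["Paris"]))
```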

          As for things like creativity, they are absolutely creative. I have given it seemingly impossible prompts (like a Harlequin story about the Terminator and Rambo) and the stuff it came up with was actually astounding.

          They regularly use tools. LangChain is a thing. There’s a new LLM-based agent called Devin that can program, look up docs online, and use a command-line terminal. That’s tool use.
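
          The “tool use” loop those systems implement is conceptually simple; here is a hedged, framework-free sketch where llm() is a placeholder for a real model call, not LangChain’s or Devin’s actual API.

```python
# Illustrative tool-use loop: the model emits a structured action, the
# harness executes it, and the result goes back into the conversation.
# llm() is a stand-in; a real system would call an actual model here.
import json
import subprocess

def llm(prompt: str) -> str:
    # Placeholder: pretend the model decided it needs the shell tool.
    return json.dumps({"tool": "shell", "args": ["date", "+%Y-%m-%d"]})

request = json.loads(llm("What is today's date? You may use the shell tool."))

if request["tool"] == "shell":
    result = subprocess.run(request["args"], capture_output=True, text=True)
    print("Tool output fed back to the model:", result.stdout.strip())
```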

          That also ties in with problem solving. Problem solving is actually one of the benchmarks that researchers use to evaluate LLMs. So they do problem solving.

          Problem solving requires the ability to do analysis, so that check mark is ticked off too.

          Just about anything that’s a neural network can be called an AI, because the whole is usually greater than the sum of its parts.

          Edit: I wrote interconnected layers when I meant neurons

      • FaceDeer@fedia.io · 9 months ago

        It’s some weird semantic nitpickery that suddenly became popular for reasons that baffle me. “AI” has been used in videogames for decades and nobody has come out of the woodwork to “um, actually” it until now. I get that people are frightened of AI and would like to minimize it, but this is a strange way to do it.

        At least “stochastic parrot” sounded kind of amusing.

        • XTL · 9 months ago

          Um, actually, clueless people have made “that’s not real AI” and “but computers will never …” complaints about AI for as long as it has existed as a computing-science topic (nearly 70 years).

          Chatbots and image generators being in the headlines has made a new loud wave of complainers, but they’ve always been around.

          • FaceDeer@fedia.io · 9 months ago

            It’s exactly that “new loud wave of complainers” I’m talking about.

            I’ve been in computing, and specifically game programming, for a long time now, almost two decades, and I can’t recall ever having someone barge in on a discussion of game AI with “that’s not actually AI because it’s not as smart as a human!” If someone privately thought that, they at least had the sense not to disrupt a conversation with an irrelevant semantic nitpick that wasn’t going to contribute anything.

    • FaceDeer@fedia.io · 9 months ago

      The term “artificial intelligence” was established in 1956 and applies to a broad range of algorithms. You may be thinking of Artificial General Intelligence, AGI, which is the more specific “thinks like we do” sort that you see in science fiction a lot. Nobody is marketing LLMs as AGI.

  • prime_number_314159@lemmy.world · 9 months ago

    What we have done is invented massive, automatic, no-holds-barred pattern recognition machines. LLMs use detected patterns in text to respond to questions. Image recognition is pattern recognition, with some of those patterns named things (like “cat”, or “book”). Image generation is a little different, but basically just flips image recognition on its head, and edits images to look more like the patterns that it was taught to recognize.
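
    As a concrete illustration of “image recognition is pattern recognition with named patterns”, here is a short sketch using a pretrained torchvision classifier; the image path is a placeholder and the model choice is just an example.

```python
# Sketch: a pretrained classifier maps pixel patterns to named labels.
# Assumes: pip install torch torchvision pillow; "cat.jpg" is a placeholder.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()                       # transforms the model was trained with

img = preprocess(Image.open("cat.jpg")).unsqueeze(0)    # add a batch dimension
with torch.no_grad():
    probs = model(img).softmax(dim=-1)

print(weights.meta["categories"][int(probs.argmax())])  # e.g. "tabby"
```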

    This can all do some cool stuff. There are some very helpful outcomes. It’s also (automatically, ruthlessly, and unknowingly) internalizing biases, preferences, attitudes and behaviors from the billion plus humans on the internet, and perpetuating them in all sorts of ways, some of which we don’t even know to look for.

    This makes its potential applications in medicine rather terrifying. Do thousands of doctors all think women are lying about their symptoms? Well, now your AI does too. Do thousands of doctors suggest more expensive treatments for some groups, and less expensive for others? AI can find that pattern.

    This is also true in law (I know there’s supposed to be no systemic bias in our court systems, but AI can find those patterns, too), engineering (any guesses how human engineers change their safety practices based on the area a bridge or dam will be installed in? AI will find out for us), etc, etc.

    The thing that makes AI bad for some use cases is that it never knows which patterns it is supposed to find and which ones it isn’t supposed to find. Until we have better tools to tell it not to notice some of these things, and to scrub away a lot of the randomness that’s left behind inside popular models, there are severe constraints on what it should be doing.

  • Potatos_are_not_friends@lemmy.world · 9 months ago

    Not trying to be a gatekeeper, but is this blog even worth sharing?

    My name’s Ed, I’m the CEO of national Media Relations and Public Relations company EZPR, of which I am both the E (Ed) and the Z (Zitron).

  • potatopotato@sh.itjust.works · 9 months ago

    Please god I hope so. I don’t see a path to anything significantly more powerful than current models in this paradigm. ANNs like these have existed forever and have always behaved the way current LLMs do; researchers only recently managed to run them somewhat more efficiently, with bigger context windows and training sets, which birthed GPT-3, which was further minimally tweaked into 3.5 and 4, among others. This feels a whole lot like a local maximum, where anything better will have to go back down through another several development cycles before it surpasses the current gen.

    • PersonalDevKit@aussie.zone · 9 months ago

      I think GPT-5 will be eye-opening. If it is another big leap ahead, then we are not in this local maximum; if it is a minor improvement, then we may be.

      The focus will then likely shift to reducing hardware requirements for inference, allowing bigger models to run better on smaller hardware.
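
      One concrete example of that push on hardware requirements is quantization; this is only a sketch using Hugging Face transformers with bitsandbytes, and the model id is a placeholder.

```python
# Sketch: loading a model in 4-bit to cut inference memory roughly 3-4x
# versus fp16. Assumes: pip install transformers accelerate bitsandbytes,
# plus a GPU and access to the placeholder model below.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder; any causal LM id works

tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",  # spread layers across whatever hardware is available
)

inputs = tok("Smaller hardware can now run", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```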

      • agamemnonymous@sh.itjust.works · 9 months ago

        It takes what, a year minimum to design a chip? I think the iterative hardware-software cycle is just now properly getting a foothold on the architecture. The next few years are going to, at minimum, explore what lavishly-funded, purpose-built hardware can do for the field.

        It’ll be years before we reach any kind of maximum. Even if the software doesn’t improve at all, which is unlikely, better utilization alone will make significant improvements on performance.

      • foggy@lemmy.world · 9 months ago

        It feels a whole lot more like there are big things due with GPT5 and beyond.

        Like, to be a successful actor in 2020 was to act, and to stand in front of expensive equipment operated by specialized operators, with directors, makeup, catering… The production itself was, and is, a massive undertaking in its own right.

        I predict that to be a successful actor in 2030, all you’ll need is a small amount of money to utilize some powerful processors over the internet, enter in a few photos of your face, and give it 10 different ideas for a movie, and it will make some 2-hour films where you are the star. Then you’ll take one of them that you kind of like, throw some prompts at it, and end up with a nearly finished Hollywood-quality film.

        To be a successful musician in 1960, you needed to get a record deal, you needed to go to a recording studio. Now we’ve got Jacob Collier winning Grammys, recording everything in his bedroom. I think we’re going to see that kind of history repeat itself on steroids. Not just for art, though. For anything.

        With the rapid advancements we’re seeing in robotics right now, I can’t imagine a single thing that people do that won’t be done better by autonomous agents, both programmatic and robotic, in the next 5 years or so.

        • Admiral Patrick@dubvee.org · 9 months ago

          I predict that to be a successful actor in 2030, all you’ll need is a small amount of money to utilize some powerful processors over the internet, enter in a few photos of your face, and give it 10 different ideas for a movie, and it will make some 2-hour films where you are the star. Then you’ll take one of them that you kind of like, throw some prompts at it, and end up with a nearly finished Hollywood-quality film.

          Good god I hope not.

  • magic_lobster_party@kbin.run · 9 months ago

    There are a few reasons why the AI hype has diminished. One reason is data integrity concerns - many companies prohibit the use of ChatGPT out of fear of OpenAI training their models on confidential data.

    To combat this, the option is to provide LLMs that can be run “on premise”. Currently those LLMs aren’t good enough for most uses. Hopefully we will get there in time, but at this pace it seems like it’s taking longer than expected.
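
    For what it’s worth, “on premise” here can be as simple as running an open-weight model locally; below is a minimal sketch with llama-cpp-python, where the GGUF path is a placeholder you would download yourself.

```python
# Sketch: fully local inference, so no prompt text leaves the machine.
# Assumes: pip install llama-cpp-python and a downloaded GGUF model file.
from llama_cpp import Llama

llm = Llama(model_path="./models/mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

out = llm("Summarize our internal meeting notes in one sentence:", max_tokens=64)
print(out["choices"][0]["text"])
```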

  • Kata1yst@kbin.social · 9 months ago

    The author doesn’t seem to understand that executives everywhere are full of bullshit, and that marketing and journalism everywhere are perversely incentivized to inflate claims.

    But that doesn’t mean the technology those executives, marketers, and journalists are hyping isn’t game-changing.

    Full disclosure: I’m both well informed and undoubtedly biased as someone in the industry, but I’ll share my perspective. Also, I’ll use AI here the way the author does, to represent the cutting edge of Machine Learning, Generative Self-Reinforcement Learning Algorithms, and Large Language Models. Yes, AI is a marketing catch-all, but most people better understand what “AI” means, so I’ll use it.

    AI is capable of revolutionizing important niches in nearly every industry. This isn’t really in question. There have been dozens of scientific papers and case studies proving this in healthcare, fraud prevention, physics, mathematics, and many many more.

    The problem right now is one of transparency, maturity, and economics.

    The biggest companies are either notoriously tight-lipped about anything they think might give them a market advantage, or notoriously slow to adopt new technologies. We know AI has been deeply integrated in the Google Search stack and in other core lines of business, for example. But with pressure to resell this AI investment to their customers via the Gemini offering, we’re very unlikely to see them publicly examine ROI anytime soon. The same story is playing out at nearly every company with the technical chops and cash to invest.

    As far as maturity, AI is growing by astronomical leaps each year, as mathematicians and computer scientists discover better ways to do even the simplest steps in an AI. Hell, the groundbreaking papers that are literally the cornerstone of every single commercial AI right now are “Attention Is All You Need” (2017) and “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks” (2020). Moving from a scientific paper to production generally takes more than a decade in most industries. The fact that we’re publishing new techniques today and pushing to prod a scant few months later should give you an idea of the breakneck speed the industry is going at right now.
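
    For readers who haven’t seen it, the core operation from “Attention Is All You Need” fits in a few lines; this is a toy NumPy sketch of scaled dot-product attention, not any production implementation.

```python
# Toy scaled dot-product attention: each position mixes the others' values,
# weighted by how similar its query is to their keys.
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # query/key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over keys
    return weights @ V                                  # weighted sum of values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))   # 4 tokens, width 8
print(attention(Q, K, V).shape)                         # -> (4, 8)
```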

    And finally, economically, building, training, and running a new AI oriented towards either specific or general tasks is horrendously expensive. One of the biggest breakthroughs we’ve had with AI is realizing the accuracy plateau we hit in the early 2000s was largely limited by data scale and quality. Fixing these issues at a scale large enough to make a useful model uses insane amounts of hardware and energy, and if you find a better way to do things next week, you have to start all over. Further, you need specialized programmers, mathematicians, and operations folks to build and run the code.

    Long story short, start-ups are struggling to come to market with AI outside of basic applications, and of course cut-throat Silicon Valley does its thing, and most of these companies are either priced out, acquired, or otherwise forced out of business before bringing something to the general market.

    Call the tech industry out for the slime it generally is, but the AI technology itself is extremely promising.