cross-posted from: https://lemmy.ml/post/2811405

"We view this moment of hype around generative AI as dangerous. There is a pack mentality in rushing to invest in these tools, while overlooking the fact that they threaten workers and impact consumers by creating lesser quality products and allowing more erroneous outputs. For example, earlier this year America’s National Eating Disorders Association fired helpline workers and attempted to replace them with a chatbot. The bot was then shut down after its responses actively encouraged disordered eating behaviors. "

  • Scrubbles@poptalk.scrubbles.tech · 11 months ago

    Yes yes yes, I get that, but it cannot create some brand new concept. It can make an amalgamation of things it has seen, and it can predict things, but only from ideas that have happened before, because somewhere along the line it was trained on something similar. I know what you’re saying, but it doesn’t have a creative spark, it doesn’t have imagination, it doesn’t have life… yet.

    It does a great job of creating derivative work; you can even ask it to create a new style, but that style will by definition be derived from something it was trained on. Until it can think, and beyond that have imagination, it’s limited. In short, we need Data, not just data.

    • jay@beehaw.org · 11 months ago

      It’s good at automating basic things; it can really help as a tool, but it’s extremely lacking. While it will lead us to new places, I think that will go hand in hand with how we regulate it and evolve alongside it.

      • Scrubbles@poptalk.scrubbles.tech · 11 months ago

        Okay, you’re really just trying to pick an argument, and I’m not going down that path. Everyone who works on LLMs knows the limitations. LLMs can’t think and they can’t create; they only give a probability of how close things are to what’s requested. I know what you’re trying to say, and it’s not accurate. Humans can truly think, they have consciousness, they learn. At this point LLMs cannot truly learn. This is all I’m going to say about it.

        • SinAdjetivos@beehaw.org · 11 months ago

          The academic name for the field is quite literally “machine learning”.

          You are incorrect that these systems are unable to create or be creative; you are correct that creativity != consciousness (which is an extremely poorly defined concept to begin with…); and you are partially correct about how the underlying statistical models work. What you’re missing is that by fitting a probabilistic model over concepts, a system can “think”/“be creative”: these models don’t need to have seen a “blue hexagonal strawberry” in order to reason about what that might mean and imagine what it looks like.

          I would recommend this paper for further reading on the topic, and I would point out that you are again correct that existing AI systems are far from human level on the proposed challenges, but they are inarguably able to “think”, “learn”, and “creatively” solve those proposed problems.

          The person you’re responding to isn’t trying to pick a fight; they’re trying to show you that you have bought wholesale into a logical fallacy and are being extremely defensive about it, to your own detriment.

          That’s nothing to be embarrassed about; the “LLMs can’t be creative because nothing is original, so everything is a derivative work” line is part of a dedicated propaganda effort to further expand copyright and consolidate capital.

        • intensely_human@lemm.ee · 11 months ago

          I know what you’re trying to say

          Then that puts me at a disadvantage because I don’t know what you’re trying to say.