• gencha@lemm.ee · 4 days ago

    It’s not that obvious. Corporations are investing heavily in automating customer relations. There are metrics for how much work had to fall back to humans because the machine couldn’t process it. Managers are motivated to improve those metrics and make the humans redundant.

    Of course, LLMs are just pure garbage that produce more work for everyone and achieve nothing. Especially in business, they are a great way to reduce efficiency. The users get dumber, believe any bullshit, drop all critical thinking, and the people on the receiving end of that bullshit have to filter even more stupidity than ever.

    But you don’t understand this as a manager. A piece of code written by AI that produces the same result as a piece of code written by a human, or close enough, seems equivalent. Potential side effects are just noise that managers don’t understand or don’t want to hear about.

    Managers also don’t understand that AI doesn’t scale. If it can write a Python program to calculate prime numbers, it can surely also write something like Netflix, or a payment processor, right?
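
    For scale, the prime-number exercise is a few self-contained lines with no external state. Here’s a sketch of what any generated version would roughly look like (the function name is mine, for illustration):

    ```python
    def primes_up_to(n):
        """Sieve of Eratosthenes: return every prime <= n."""
        if n < 2:
            return []
        is_prime = [True] * (n + 1)
        is_prime[0] = is_prime[1] = False
        for p in range(2, int(n ** 0.5) + 1):
            if is_prime[p]:
                # Mark every multiple of p, starting at p*p, as composite.
                for multiple in range(p * p, n + 1, p):
                    is_prime[multiple] = False
        return [i for i, flag in enumerate(is_prime) if flag]

    print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    ```

    That’s the gap: a closed, well-specified exercise with a one-line success criterion, versus a distributed system with payments, licensing, and decades of accumulated edge cases.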

    Then there’s exactly what you point out. Other managers claim they’re doing it. So there must be something to it.

    Once they’ve wasted their budget on temporary access to this technology, cuts have to be made to protect the bottom line.

    Maybe AI isn’t replacing your job, but the stupid investment might cost you the job anyway.

    It’s also important to realize that a corporation doesn’t require quality work or a quality product to be financially successful. The AI industry itself is the best example.

    • Balder@lemmy.world · 3 days ago

      This kind of logic never made sense to me. If an AI could build something like Netflix (even with the assistance of a mid-level software engineer), then every indie dev would be able to build a Netflix competitor, driving the value of Netflix down. Open source tools would quickly surpass any closed source software, and they’d become very user-friendly without much effort.

      We’d see LLMs being used to rapidly create and improve infrastructure like compilers, IDEs, and build systems that are currently complex and slow, to rewrite slow software in faster languages, and so on. So many projects that are stalled today for lack of manpower would be flourishing, flooding us with new apps and features at an incredible pace.

      I’ve yet to see it happen. And that’s because for LLMs to produce anything of sufficient quality, they need someone who understands what they’re outputting: someone who can add the necessary context to each prompt, test the result, and integrate it into the bigger picture without causing regressions. That’s no simple work, and it even requires understanding the LLMs’ processing limitations.
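
      To make that concrete, here’s a hypothetical sketch (the function and the bug are invented for illustration) of why someone who understands the output has to test it:

      ```python
      # Plausible-looking generated code with a subtle off-by-one bug.
      def is_prime(n):
          if n < 2:
              return False
          for i in range(2, int(n ** 0.5)):  # should be int(n ** 0.5) + 1
              if n % i == 0:
                  return False
          return True

      # It passes casual spot checks, so only someone who understands the
      # algorithm thinks to test the square of a prime:
      print(is_prime(7))  # True  -- correct
      print(is_prime(9))  # True  -- wrong: 9 = 3 * 3, the loop never tries 3
      ```

      Catching regressions like that, and feeding the right context back into the next prompt, is exactly the work that doesn’t go away.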

      • Tar_Alcaran@sh.itjust.works · 2 days ago

        LLMs, by definition, can’t push the limit. LLMs can only ever produce things that look like what they were trained on. That makes them great for rapidly producing mediocre material, but it also means they will never produce anything genuinely complex.