• Arfman@aussie.zone · 83 points · 6 days ago

    Every time I read stuff like this, I remember that slide that says: “A computer can never be held accountable; therefore a computer must never make a management decision.”

    • LifeInMultipleChoice@lemmy.dbzer0.com · 3 points · edited · 4 days ago

      I think we need laws governing how businesses use the term. There is nothing intelligent about language models. Most of what “AI” does in businesses is closer to “Automated Instructions” than to anything intelligent.

      Laws need to dictate that companies MUST provide a reasonable way to reach a human representative, and that companies are legally responsible for their automated systems’ responses.

      It’s fine to set up automated systems to assist people within companies, as the majority of issues people have can be solved through automated processes.

      User: “I need access to this network share”

      LLM: Okay, submit this form: [link to the network share access request form].

      LLM: Can I further assist?

      User submits the form, specifying the network path, choosing read or read/write permissions via radio buttons, and giving a reason for needing access.

      The form emails an approve/deny button to the owner of that specific network share.

      The approver clicks approve, the user is added to the required Active Directory group, and the user receives an email stating they have been added and should log out and back in so their group policies update.

      Time taken by the user: about 5 minutes. Many companies have so many requests coming in that things like this often take weeks to reach the approving parties and get completed.

      But if you set up an internal, non-external-facing LLM that locates forms and processes but cannot access user data or permissions, it can take the workload of managing 60,000 users down by a significant amount.

      (I’m sure there are a million other uses that could be legitimate, but that’s just a quick one off the top of my head)
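The approval flow above can be sketched in a few lines. This is a minimal Python sketch; every name in it (`AccessRequest`, `approve_request`, the group-naming scheme) is invented for illustration — a real system would call Active Directory and email APIs rather than mutate a dict:

```python
# Hypothetical sketch of the access-request flow: the LLM only points the
# user at the form; the form output and approval are plain automation.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    share: str        # network share name from the form
    permission: str   # "read" or "read-write" radio button
    reason: str

def group_for(request: AccessRequest) -> str:
    # Invented convention: map share + permission to an AD group name.
    return f"{request.share}-{request.permission}"

def approve_request(request: AccessRequest, groups: dict[str, set]) -> str:
    # Approver clicked "approve": add the user to the required group,
    # then return the notification text emailed back to the user.
    groups.setdefault(group_for(request), set()).add(request.user)
    return (f"{request.user}: you have been added to "
            f"'{group_for(request)}'. Log out and back in so your "
            f"group policies update.")

# Example: a user requests read access to the "finance" share.
groups: dict[str, set] = {}
req = AccessRequest("alice", "finance", "read", "monthly reporting")
message = approve_request(req, groups)
print(message)
```

The LLM never touches user data or permissions here — it only surfaces the form; the actual permission change happens in deterministic code after a human approves.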

  • nroth@lemmy.world · +5/−9 · 5 days ago

    Unpopular opinion: It’s OK to use AI to fight fraud as long as your data is good, your precision threshold is very high, and appeals are easy. It seems like it is almost never used in this way when people try to save money, sadly.
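The “precision threshold is very high” condition can be made concrete. A toy Python sketch with made-up scores and labels (not a real fraud model): auto-flag only above the lowest score threshold at which nearly every flagged claim really is fraud, and route everything below it to a human:

```python
# Toy data: model fraud scores and ground-truth labels (1 = actual fraud).
# These numbers are invented for illustration only.
def precision_at(threshold, scores, labels):
    # Precision among claims the model would auto-flag at this threshold.
    flagged = [l for s, l in zip(scores, labels) if s >= threshold]
    return sum(flagged) / len(flagged) if flagged else 1.0

scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.40]
labels = [1,    1,    1,    0,    1,    0]

# Lowest threshold that still keeps precision >= 0.99; anything scoring
# below it goes to human review (and appeals) instead of auto-denial.
threshold = min(t for t in set(scores)
                if precision_at(t, scores, labels) >= 0.99)
print(threshold)  # 0.8: flags the three highest scores, all true fraud
```

The point of the commenter's conditions maps directly onto this: good data makes the labels trustworthy, the high precision bar keeps false accusations rare, and easy appeals catch the remainder.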

    • kerrigan778@lemmy.world · +8/−1 · edited · 5 days ago

      Current AI is incapable of providing that level of data quality and high precision, and it is uncertain whether the types of AI being developed now can ever achieve it without fundamental changes to how they work.

    • orcrist@lemm.ee · 1 point · 4 days ago

      Define AI. Then you’ll see that it has been used to fight fraud for decades.

    • DankDingleberry@lemmy.world · +2/−2 · 4 days ago

      I work in management at an insurance firm, and that’s exactly what we do (use AI for fraud prevention). We have no interest in denying rightful coverage, because in the long run it can cost more than just paying outright (lawyer costs, interventions, bad PR, etc.). If you don’t work in the industry, you have NO idea how many people try to cheat. It’s ridiculous.