Microsoft’s LinkedIn will update its User Agreement next month with a warning that it may show users generative AI content that’s inaccurate or misleading.

[…]

The relevant passage, which takes effect on November 20, 2024, reads:

Generative AI Features: By using the Services, you may interact with features we offer that automate content generation for you. The content that is generated might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes. Please review and edit such content before sharing with others. Like all content you share on our Services, you are responsible for ensuring it complies with our Professional Community Policies, including not sharing misleading information.

In short, LinkedIn will offer features that generate content automatically, but that content may be inaccurate. Users are expected to review such content and correct any false information before sharing it; LinkedIn won’t be held responsible for the consequences.

  • jmcs@discuss.tchncs.de · 1 month ago

    A COMPUTER CAN NEVER BE HELD ACCOUNTABLE THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION

    As IBM put it in 1979, a computer can’t be held accountable, and I would go further and say that one should never make any meaningful decision. The algorithm used doesn’t really make a difference. The sooner people understand that they are responsible for what they do with computers, like with any other tool, the better.

    • ℍ𝕂-𝟞𝟝 · 1 month ago

      The real question is: what if you commission work from someone else, and they produce it for you in a completely automated way? Take a vending machine. Are you responsible for what the vending machine does if you use it as it’s supposed to be used? Or is the owner of the machine responsible?

      Why is it different for LLM text generators?