The tech giant is evaluating tools that would use artificial intelligence to perform tasks that some of its researchers have said should be avoided.

Google’s A.I. safety experts had said in December that users could experience “diminished health and well-being” and a “loss of agency” if they took life advice from A.I. They had added that some users who grew too dependent on the technology could think it was sentient. And in March, when Google launched Bard, it said the chatbot was barred from giving medical, financial or legal advice. Bard shares mental health resources with users who say they are experiencing mental distress.

  • Skies5394@lemmy.ml · 1 year ago

    Why in the pissity-fuck would I take life advice from Google, Google applications or an AI trained by Google.

    That is so far outside of what I find reasonable.

    • SSUPII · 1 year ago

      I think this is more of a precaution. They are not making a service specifically for this, but probably updating Bard for the case where a user asks those questions. I think it's reasonable, but it must be done and released in the most curated and well-developed state possible, to prevent a repeat of a story that already happened (that suicide hotline that suddenly went full AI, then backtracked because it responded badly).