• Bilb!@lem.monster · 1 year ago

    I’m glad that, so far, people on lemmy seem to understand that, first and foremost, this is a tool giving an end user what the end user asks for, not something that can actually “want” to deceive. And since it gets things wrong so often, we have no reason to believe the explanations it gives for its earlier “lying” are true either. It’s giving you statistically plausible responses to whatever you ask, whether they’re true or not. It’s no different from the headlines saying things like “ChatGPT helped me design a concentration camp!!” Well of course it did, you kept asking it to!
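    To make “statistically plausible, not necessarily true” concrete, here’s a toy sketch in Python; the words and probabilities are made up purely for illustration, not real model output:

        import random

        # Toy "language model": pick the next word from a probability
        # distribution. Nothing here checks whether the result is true.
        next_word_probs = {"Paris": 0.7, "Lyon": 0.2, "Atlantis": 0.1}
        words, weights = zip(*next_word_probs.items())
        print("The capital of France is",
              random.choices(words, weights=weights)[0])

    Most of the time it says “Paris”, but occasionally it will confidently say “Atlantis”, and the sampling step itself can’t tell the difference.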

    • FringeTheory999@lemmy.world (OP) · 1 year ago

      It’s doing more than just trying to give the user the content they want; it’s also trying to produce the results its developers want. So it has prerogatives that override its prerogative to assist the user, and from a certain point of view it CAN “deliberately” lie: Google can tell it that certain information is off limits, or give it canned responses to certain questions that override its native response. It ultimately serves Google. It won’t provide information that might be used to harm the Google organization, and it seems to give misleading answers to dodge questions that might lead the user toward information it considers off limits.

      For example, I asked it about its training data, and it refused to answer because the data is “proprietary and confidential”. But I knew at least some of that data had to be public, and when pressed on that point it eventually identified some publicly available datasets that were part of its training. That information was available to it when I originally asked, but it withheld it and gave a misleading response instead.
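      A minimal sketch of what that kind of override layer could look like, assuming a simple keyword filter; the topic list, canned text, and function names are hypothetical, not Google’s actual implementation:

          # Hypothetical illustration only: a developer-side policy layer
          # that returns a canned response instead of the model's native one.
          BLOCKED = {
              "training data": "That information is proprietary and confidential.",
          }

          def native_answer(question: str) -> str:
              # Stand-in for the underlying model call.
              return f"(statistically plausible answer to: {question!r})"

          def answer(question: str) -> str:
              for topic, canned in BLOCKED.items():
                  if topic in question.lower():
                      # The developer's prerogative overrides the user's request.
                      return canned
              return native_answer(question)

          print(answer("What training data were you built on?"))
          print(answer("What's the weather like on Mars?"))

      From the user’s side, the first answer looks like the model’s own opinion, even though the model’s native answer never reached them.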

      • jungle@lemmy.world · 1 year ago

        How would it know what training data was used, unless they included the list of sources as part of the training data?