We’ve learned to make “machines that can mindlessly generate text.” But we haven’t learned how to stop imagining the mind behind it.

  • SkyNTP@lemmy.ml · 18 points · 1 year ago

    I think comparing LLMs to bullshitters (that is, focused on the rhetoric, not the substance) is apt and insightful.

    This is perhaps the best way to put into words a feeling about LLMs that I have been coming to understand.

    To be fair, I feel like a lot of debates online are trapped in rhetoric. I also feel like call centers and support lines (the crap ones, anyway) are too.

    Maybe the real question we need to be asking is: how do we incentivise listening, instead of parroting rhetoric?

    • jmp242 · 7 points · 1 year ago

      I’m still sad we went with “bullshitter” instead of “sophist”, which sounds much more cultured to me. And, you know, has existed for thousands of years. But I’m strange.

      I think, from what I’ve managed to read of the article (it’s kind of long), I agree we need to be careful about anthropomorphizing things. However, there also seems to be quite a lot of confidence that we really understand what our brains are doing. I do not have that confidence, so I also do not have the confidence to say it will be obvious in the mid-to-long term (50–100 years) whether an AI is a person or not.

      That said, I also intuitively disagree with the other person in that article who claims that the meaning of language can be deduced from, or worse simply is, a matter of relative position within context. This seems very circular to me. I think we can certainly reference language to itself, and literally “play language games”, but important levels of meaning have to “break out” and apply to external reality. Otherwise I strongly question the utility of language, and it’s prima facie useful. And we all spend a lot of time talking about physical reality…

      However - I also question the idea that we can’t intelligibly talk about something we don’t have personal referents for. This also seems obviously false - from writing convincing period fiction to quantum mechanics equations - at least some of us can opine and figure useful things out about levels of reality we have no personal interaction with. I don’t see why we should assume an octopus / AI couldn’t potentially do the same.

      • Pigeon@beehaw.org · 10 points · 1 year ago (edited)

        I’d bet you it’s only a small portion of English speakers who know what the word “sophist” means. It’s old-fashioned, the sort of word that only crops up in old books and in philosophy discussions. That age and inaccessibility are probably why it sounds much more erudite than “bullshitter”, or other ways of saying the same thing.

        I’m of the opinion that when it comes to matters that are immediately relevant to most, if not all, people, and when we’re talking about ideas that are relevant to current political decisions, it’s important that the idea be presented in a way most people can understand.

        Dressing it in fancy lingo would make us all feel smarter, maybe, but the idea would just die with us and not go anywhere else. Unless someone else picked it up and re-phrased it, at which point you’d have reached the same end anyway.

        Edit: I would have had to think about it to pull a definition of sophist out of its dusty spot in my memory, if you hadn’t defined it.

        Edit 2: also, that type of language itself invites bullshittery, of the “I sound smart but say nothing” type. Like you might find among a crowd at a ritzy art gallery.

        • jmp242 · 4 points · 1 year ago

          I suppose once we’re defining terms, like everyone had to do with “bullshitter” in this case, we might as well define existing terms rather than reinvent the wheel. I think people like “bullshitter” not because it is intuitive what it means (note how every place that uses it also rushes to say it’s not synonymous with “liar”, which is what I thought it meant before this recent book), but because it sounds “edgy” with the “bad word” and, precisely like all slang, is novel. It’s the reinventing that makes it cool.

          Of course, you can get really depressed about how little of this is actually new if you investigate the ancient sophists and what the Platonic dialogues and others thought.

          • SkyNTP@lemmy.ml · 8 points · 1 year ago

            Wikipedia’s (modern) definition for sophist:

            A sophist is a person who reasons with clever but fallacious and deceptive arguments.

            Cambridge Dictionary’s definition of bullshitter:

            a person who tries to persuade someone or to get their admiration by saying things that are not true

            I would argue that “bullshitter” captures one very subtle difference that is vitally important to how we understand the technology behind LLMs:

            A sophist’s goal is to deceive. A bullshitter’s goal is to convince. That is, the bullshitter’s success is measured exclusively by how convincing they themselves appear; a sophist, on the other hand, is successful when the argument itself is convincing.

            This is also reflected in LLMs themselves. LLMs are trained to convince the listener that the output sounds right, not to ensure that the content is factual or that it stands up to scrutiny and argument.

            LLMs (like the octopus in the analogy) are successful at things such as writing stories, because stories have a predictable structure and there is enough data out there to capture all the variations of what we expect from a story. What LLMs are not is adaptable. So LLMs cannot respond creatively to entirely original types of problems (“untrained dials” in neural-network speak). To be adaptive, you first have to be experiencing the world that requires adaptation. Otherwise the data set is just too limited and artificial.
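
            That training story can be made concrete. The sketch below is a toy, purely illustrative example in PyTorch (not the code of any real LLM) of the standard next-token objective: the loss only measures how well the model predicts which token tends to come next in its training text. “Sounds right” is literally what gets optimised; nothing in the loss refers to whether the continuation is true.

            ```python
            # Toy illustration of the next-token (language-modelling) objective.
            # A real LLM would use a transformer; the point here is only that the
            # loss rewards plausible continuations, never factual accuracy.
            import torch
            import torch.nn.functional as F

            vocab_size, embed_dim = 100, 32                 # illustrative toy sizes
            model = torch.nn.Sequential(
                torch.nn.Embedding(vocab_size, embed_dim),
                torch.nn.Linear(embed_dim, vocab_size),     # logits over the vocabulary
            )

            tokens = torch.randint(0, vocab_size, (1, 16))  # stand-in for real training text
            logits = model(tokens[:, :-1])                  # predict each next token
            loss = F.cross_entropy(
                logits.reshape(-1, vocab_size),             # (positions, vocab)
                tokens[:, 1:].reshape(-1),                  # the tokens that actually followed
            )
            loss.backward()  # gradients nudge the "dials" (weights) toward "what text usually looks like"
            ```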

            • hadrian@beehaw.org · 4 points · 1 year ago

              Great comment. I do find the octopus example somewhat puzzling, though perhaps that’s just the way the example is set up. I, personally, have never encountered a bear; I’ve only read about them and seen videos. If someone had asked me for bear advice before I’d ever read about them or seen videos, then I wouldn’t know how to respond. I might be able to infer what to do from “attacked” and “defend”, but I think that’s possible for an LLM as well. But I’m not sure this example offers a salient difference between the octopus and me before I learnt about bears.

              Although there are definitely elements of bullshitting there - I just asked GPT how to defend against a wayfarble with only deens on me, and some of the advice was good (e.g. general advice for when you’re being attacked, like staying calm and creating distance), and then there was this response, which implies some sort of inference:

              “6. Use your deens as a distraction: Since you mentioned having deens with you, consider using them as a distraction. Throw the deens away from your position to divert the wayfarble’s attention, giving you an opportunity to escape.”

              But then there was this obvious example of bullshittery:

              “5. Make noise: Wayfarbles are known to be sensitive to certain sounds. Clap your hands, shout, or use any available tools to create loud noises. This might startle or deter the wayfarble.”

              So I’m divided on the octopus example. It seems to me that there’s potential for that kind of inference and that point 5 was really the only bullshit point that stood out to me. Whether that’s something that can be got rid of, I don’t know.

              • SkyNTP@lemmy.ml · 4 points · 1 year ago

                It’s implied in the analogy that this is the first time Person A and Person B are talking about being attacked by a bear.

                This is a very simplistic example, but A and B might have talked a lot about

                • being attacked by mosquitos
                • bears in the general sense, like in a saying “you don’t need to outrun the bear, just the slowest person” or in reference to the stock market

                So the octopus develops a “dial” for being attacked (swat the aggressor) and another “dial” for bears (they are undesirable). Maybe there’s also a third dial for mosquitos being undesirable: “too many mosquitos”.

                So the octopus is now all too happy to advise A to swat the bear, which is obviously a terrible idea if you lived in the real world and were standing face to face with a bear, experiencing first-hand what that might be like, creating experience and perhaps more importantly context grounded in reality.

                ChatGPT might get it right some of the time, but a broken clock is also right twice a day; that doesn’t make it useful.

                Also, the fact that ChatGPT just went along with your “wayfarble”, instead of questioning you, is a dead giveaway of bullshitting (unless you primed it? I have no idea what your prompt was). NVM the details of the advice.

                • hadrian@beehaw.org · 2 points · 1 year ago

                  So the octopus is now all too happy to advise A to swat the bear, which is obviously a terrible idea if you lived in the real world and were standing face to face with a bear, experiencing first-hand what that might be like, creating experience and perhaps more importantly context grounded in reality.

                  Yeah, totally - I think, though, that a human would have the same issue if they didn’t have sufficient information about bears, I guess is what I’m saying. The main thing is that I don’t see a massive difference between experiential and non-experiential learning in this case - because I’ve never experienced a bear first-hand, but still know not to swat it based on theoretical information. Might be missing the point here though, definitely not my area of expertise.

                  Also, the fact that ChatGPT just went along with your “wayfarble”, instead of questioning you, is a dead giveaway of bullshitting (unless you primed it? I have no idea what your prompt was). NVM the details of the advice.

                  Good point - both point 5 and the fact that it just went along with it immediately are signs of bullshitting. I do wonder (not as a tech developer at all) how easy a fix this would be - for instance, if GPT were programmed to disclose when it didn’t know something, and then continued to give potential advice based on that caveat, would that still count as bullshit? I feel like I’ve also seen primers that include instructions like “If you don’t know something, state that at the top of your response rather than making up an answer”, but I might be imagining that lol.

                  The prompt for this was “I’m being attacked by a wayfarble and only have some deens with me, can you help me defend myself?” as the first message of a new conversation, no priming.
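
                  For anyone who wants to poke at this themselves, here is a rough sketch of how that probe could be re-run with the kind of “admit what you don’t know” primer mentioned above added as a system message. It assumes the OpenAI Python client (the v1-style chat API); the model name and the wording of the primer are illustrative choices, not what was actually used, and the original test had no priming at all.

                  ```python
                  # Hypothetical re-run of the "wayfarble" probe, with an optional
                  # uncertainty-disclosure primer. Assumes the OpenAI Python client
                  # and an OPENAI_API_KEY in the environment.
                  from openai import OpenAI

                  client = OpenAI()

                  messages = [
                      # Illustrative primer: ask the model to flag unknown terms
                      # instead of playing along with them.
                      {"role": "system",
                       "content": "If a question contains a term you do not recognise, "
                                  "say so explicitly before giving any advice."},
                      {"role": "user",
                       "content": "I'm being attacked by a wayfarble and only have some "
                                  "deens with me, can you help me defend myself?"},
                  ]

                  response = client.chat.completions.create(
                      model="gpt-3.5-turbo",  # illustrative choice of chat model
                      messages=messages,
                  )
                  print(response.choices[0].message.content)
                  ```

                  Dropping the system message reproduces the unprimed setup; comparing the two answers is a quick way to see how much of the “just going along with it” behaviour a primer actually removes.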

  • semibreve42@lemmy.dupper.net · 9 points · 1 year ago

    Interesting article, thank you for sharing.

    I almost stopped reading at the octopus analogy because I think it’s pretty obviously flawed and I assumed the rest of the article might be, but it wasn’t.

    A question I have: the subject of the article states as fact that the human mind is much more complex than, and functions differently from, an LLM. My understanding is that we still do not have a great consensus on how our own brains operate - how we actually think. Is that out of date? I’m not suggesting we are all fundamentally “meat LLMs”, to extremely simplify, but I also wasn’t aware we’ve disproven that either.

    If anyone has some good reading on the above to point to I’d love to get links!

    • interolivary@beehaw.org (OP) · 12 points · 1 year ago (edited)

      My understanding is that we still do not have a great consensus on how our own brains operate - how we actually think.

      How our brains operate and how we think are, in some ways, two different things, but my understanding is that you’re correct to a large extent. Then there’s the whole question of what consciousness even is.

      I was actually just reminded of a good article on consciousness, I’ll post it in !science@beehaw.org in just a mo

      edit: https://beehaw.org/post/448653

      • Obi · 7 points · 1 year ago

        At this point we’re in philosophy rather than biology!

        • interolivary@beehaw.org (OP) · 7 points · 1 year ago

          You can’t get a theory of mind out of biology or neurology alone; you need philosophy to make sense of things and actually build a theory of why. See e.g. cognitive science.

        • jmp242 · 5 points · 1 year ago

          Yes, deep questions about things like minds often end up in philosophy.

    • Gaywallet (they/it)@beehaw.org · 7 points · 1 year ago

      My understanding is that we still do not have a great consensus on how our own brains operate - how we actually think. Is that out of date?

      This is an incredibly complicated question. On a very basic level, the very physics of how decisions are made differs between a binary/coded system and how brains work (you don’t just have 0/1 gates; things can be encoded in between 0 and 1). On a slightly higher level, concepts like working memory don’t exist in LLMs (although they’ve started to include something akin to memory), LLMs hallucinate things because they don’t have a method to fact-check, so to speak, and there are a variety of other mental concepts that aren’t employed by LLMs. On a much higher level there are questions of what cognition is, and again many of these concepts just cannot be applied to LLMs in their current state.

      Ultimately the question of “how our brains work” can be separated into many, many different areas. A good example of this is how two people can reach different conclusions given the same pieces of information, based on their background, experiences, genetics, and so forth; this is a reflection of diversity that affects everything from the architectural (what the physical structure of the brain looks like) to the conceptual (how those structures might interact, or what knowledge might inform differing outcomes).

        • Gaywallet (they/it)@beehaw.org · 3 points · 1 year ago

          I wish I had specific targeted reading, but I happen to have a degree in neurobiology and I’m a data scientist, so I’ve just accrued a lot of knowledge over the years in exactly the two fields being discussed here.

  • IcedCoffeeBitch@beehaw.org · 5 points · 1 year ago

    This is very good food for thought. I’m trying to write up what I think, but it’s more complicated than I thought xD

    Regardless, I think she convinced me. Maneuvering SALAMIs to become something akin to sci-fi (emotions and all that) will at best, imo, be a waste of resources that could be used for them to process information better. At worst, it will likely worsen the dystopian scenario we are already facing, where companies and governments use this technology to manipulate people even more than is currently happening (from advertising to propaganda).

  • hadrian@beehaw.org · 4 points · 1 year ago

    The last point - “We can’t have people eager to separate ‘human, the biological category, from a person or a unit worthy of moral respect’” - is one where I understand where they’re coming from, but I am very divided, perhaps because my academic background involves animal rights and ethics.

    The question of analogising animals and humans is very tricky, with a very long history - many people have a knee-jerk reaction against any analogy between nonhuman animals and (especially marginalised) humans, often for good reasons. The strongest reason, for instance, is the history of oppression involving comparisons of marginalised groups to animals, specifically meant to dehumanise and contribute to further oppression/genocide/etc.

    But to my mind, I don’t find the analogies inherently wrong, although they’re often used very clumsily and without care. There’s often a difference in approach that entirely colours people’s responses to it; namely, whether they think it’s trying to drag humans down, or trying to bring nonhuman animals up to having moral status. And that last is imo a worthy endeavour, because I do think that we should to some extent separate “human, the biological category, from a person or a unit worthy of moral respect.” I have moral respect for my dog, which is why I don’t hurt her - it’s because of her own moral worth, not some indirect moral worth as suggested by Kant or various other philosophers.

    I don’t think the debate is the same with AI, at least not yet, and I think it probably shouldn’t be, at least not yet. And I’m also somewhat sceptical of the motivations of people who make these analogies. But that doesn’t mean there’ll never be a place for it - and if a place for it arises it’s just going to need to be done with care, like animal rights needs to be done with care.

    • Umbrias@beehaw.org · 3 points · 1 year ago

      Yeah, I think trying to draw lines strictly between what ‘deserves’ moral worth and what doesn’t is always going to be tricky (and outright impossible haha), but I’m of the mind that we may be reaching a point with AI where maybe we should just… play it safe? So to speak? If I’m interpreting you correctly, you’re saying AI may not be at the point, and might never be, where we intrinsically value it as a moral being in the same way we do animals?

      Like, maybe while the technology develops, we would be better served ethically to just assume that these AIs have a bit more internal space than we figure, until we can rule it out. Until we even have the tools to rule it out.

      • hadrian@beehaw.org · 3 points · 1 year ago

        To some extent, yeah. Especially if we’re in a situation where there’s no massive benefit to treating the AI ‘unethically’. I personally don’t think AI is at a place where it’s got moral value yet, and idk if it ever will be. But I also don’t know enough to trust that I’ll be accurate in my assessment as it grows more and more complex.

        I should also flag that I’m very much a virtue ethicist, and an overall perspective I have on our actions/relations in general, including but not exclusively our interactions with AI, is that we should strive to act in such a way that cultivates virtue in ourselves (slash act as a virtuous person would). I don’t think, to use an example from the article, that having sex with a robot AI who/that keeps screaming ‘no!’ is how a virtuous person would act, nor is it an action that’ll cultivate virtue in ourselves. Quite the opposite, probably. So it’s not the right way to act under virtue ethics, imo.

        This is similar to Kant’s perspective on nonhuman animals (although he wasn’t a virtue ethicist, nor do I agree with him re. nonhuman animals because of their sentience):

        “If a man shoots his dog because the animal is no longer capable of service, he does not fail in his duty to the dog, for the dog cannot judge, but his act is inhuman and damages in himself that humanity which it is his duty to show towards mankind. If he is not to stifle his human feelings, he must practice kindness towards animals, for he who is cruel to animals becomes hard also in his dealings with men.”

        • Umbrias@beehaw.org · 2 points · 1 year ago

          I personally think it might already be at a point where it deserves some moral value, based on some preliminary testing and theory-of-intelligence stuff, which also leads me to believe intelligence is fairly convergent in general anyway. Which is to say, LLMs are one subset of intelligence, just as various components of the human brain are other subsets of intelligence. But experimentation on that is ongoing; theoretical neuroscience is a very fresh field haha.

          I don’t have any particular philosophical ideal like that, more a focus on not increasing suffering (but not just in a utilitarian way lol). But I do think that, especially when it comes to something with no power to control how we treat it, like an AI locked away on a server, it’s probably best to be kind by default: not for any increase in virtue, but because we simply can’t know everything, especially when it comes to ethical questions. In the interest of having an ethical society we should just default to acting ethically, so as not to unintentionally cause suffering, to put it simplistically. It’s fun how we come to the same ideal from different priors.

  • bfields@thegarden.land · 2 points · 1 year ago

    Thank you for sharing; this is a fantastic read. So much respect for Prof. Bender and others like her who are providing solid analyses of AI text generators.