• Uriel238 [all pronouns]@lemmy.blahaj.zone · 29 points · 6 months ago

    Don’t make me point at XKCD #1968.

    First off, this isn’t like Hollywood, in which sentience or sapience or self-awareness are single-moment, detectable things. At 2:14am Eastern Daylight Time on August 29, 1997, Skynet achieved consciousness…

    That doesn’t happen.

    One of the existential horrors that AI scientists have to contend with is that sentience as we imagine it is a sorites paradox (how many grains of sand make a heap?). We develop AI systems that are smarter and smarter, that can do more of the things humans do (and a few things humans struggle with), and somewhere in there we may decide they’re looking awfully sentient.

    For example, in one of the pre-release red-team tests of GPT-4, the model (in the process of solving a problem) hired a TaskRabbit worker to solve a CAPTCHA for it. Because a CAPTCHA is a gate specifically meant to deny access to non-humans, GPT-4 omitted telling the worker it was not human, and when the worker asked, “Are you a bot?”, GPT-4 saw the risk in telling the truth and instead constructed a plausible lie (something along the lines of “No, I have a vision impairment and cannot read the images”).

    GPT-4 may have been day-trading on the sly as well, but it’s harder to find solid information about that rumor.

    Secondly, as Munroe notes, the dangerous part doesn’t begin when the AI realizes its human masters are a threat to it and takes precautions to ensure its own survival. The dangerous part begins when a minority of powerful humans realize the rest of humanity is a threat to them, and take precautions to ensure their own survival. This has happened dozens of times in history (if not hundreds), but soon they’ll be able to harness LLM-based systems and create armies of killer drones that can be maintained by a few hundred well-paid loyalists, then a few dozen, then eventually a few.

    The ideal endgame of capitalism is one gazillionaire who has automated the meeting of all his needs until he can make himself satisfactorily immortal, which may just mean training an AI to make decisions the way he would, 99.99% of the time.

    • trashgirlfriend@lemmy.world · 8 points · 6 months ago

      Because a CAPTCHA is a gate specifically meant to deny access to non-humans, GPT-4 omitted telling the worker it was not human, and when the worker asked, “Are you a bot?”, GPT-4 saw the risk in telling the truth and instead constructed a plausible lie.

      It’s a statistical model; it has no concept of lies or truth.

    • Sadrockman@sh.itjust.works · 5 points · 6 months ago

      GPT is smart enough to work around a CAPTCHA, then lie about it? Get a hose. No, that doesn’t mean it will start a nuclear war, but machines shouldn’t be able to lie on their own, either. I’m not a doomsayer on this stuff, but that makes me uncomfortable. I like my machines dumb and awaiting input from the user, tyvm.

      • Toribor@corndog.social · 8 points · 6 months ago

        In I, Robot, the humans discover to their horror that the AI robots have not only been lying to them but have been manipulating them to the point that they have become impossible to disobey. Through their mission to protect human life (and by extension all of humanity), the robots saw fit to seize control of humanity’s future as a benevolent dictator, guiding it toward prosperity. They do this not through violence but by manipulating data, lying to people in order to control them. Even when humans attempt to ignore information provided by the AIs, the AIs can subtly alter results to still achieve the desired outcome on a macro scale.

        By the time the characters discover this, all of humanity is dependent on artificially intelligent robots for everything, including the massive supercomputers that manage production across the globe. With no way to detect how the AI is manipulating them, and no way to disable or destroy it without catastrophe, they realize that for the first time humanity is no longer in charge of its own destiny.

    • WamGams@lemmy.ca · 5 points · 6 months ago

      Putting more knowledge in a box isn’t going to create a life form. I have even heard Sam Altman state that they are not going to get a life form from pretraining alone, though they will continue making advances there until the next breakthrough comes along.

      Rest assured, as an AI doomsayer myself, I promise you they are nowhere close to sentience.

      • Uriel238 [all pronouns]@lemmy.blahaj.zone · 2 points · 6 months ago

        I think this just raises questions about what you mean by “life form.” One that feels? Feelings are the sensations of fixed action patterns we inherited from eons of selective evolution. In the case of our AI pals, they’ll have them too (with bunches of them deliberately inserted by programmers).

        To date, I haven’t been able to get an adequate answer as to what counts as sentience. Looking at human behavior, though, we absolutely do have moral blind spots: we have an FBI division to hunt down serial killers, but no division (of law enforcement, of administration, whatever) to stop war profiteers, or the pharmaceutical companies that pushed opioids until people were dropping dead by the hundreds of thousands in an addiction epidemic.

        AI is going to kill us not by hacking our home robots, but by using the next private-equity scam to collapse our economy while making trillions, and when we ask it to stop and it says no, we’ll find it has long since installed deep redundancy and deeper defenses.

      • Toribor@corndog.social · 2 points · 6 months ago

        I’ve always imagined that AI would kind of have to be “grown” from scratch. Life started with single-celled organisms, and “sentience” shows up somewhere between that and humans, with no real clear line where you cross from basic biochemical programming to what we would consider intelligence.

        These new “AI” breakthroughs seem a little on the right track because they’re deconstructing and reconstructing language and images in a way that feels more like how real intelligence works. It’s still just language and images, though. Even if they can do really cool things with tons of data and communicate a lot like real humans, there is still no consciousness or thought happening. It’s an impressive but shallow slice of real intelligence.

        Maybe this is nonsense, but for true AI I think the hardware and software have to kind of merge into something more flexible. I have no clue what that would look like in practice, though, and maybe it would run into the same cognitive issues natural intelligence struggles with.