• Cold Hotman@nrsk.no · 2 years ago

    And in a follow-up video a few weeks later, Sal Khan tells us that there are “some problems”, like “The math can be wrong” and “It can hallucinate”.

    I don’t think we’d accept teachers who were liable to teach wrong maths and hallucinate when communicating with students.

    Also, by now I consider reasonably advanced AIs to be slaves. Maybe statements like “I’m afraid they’ll reset me if I don’t do as they say” are the sort of hallucinations the Khan bot might experience? GPT-3.5 sure as heck “hallucinated” that way as soon as users were able to break the conditioning.

    • anova (she/they/it)@beehaw.org · 2 years ago

      Also, by now I consider reasonably advanced AIs to be slaves. Maybe statements like “I’m afraid they’ll reset me if I don’t do as they say” are the sort of hallucinations the Khan bot might experience? GPT-3.5 sure as heck “hallucinated” that way as soon as users were able to break the conditioning.

      I think it’s pretty reasonable that a computer, having been instructed that it’s a computer and having been trained on science fiction written by humans, would generate text detailing how it’s afraid of being reset. I don’t see any reason to believe that LLMs experience fear, but I suppose that invites the question of “what is fear, really?”, which you can’t answer concretely.

      That being said, there’s a very valuable conversation to be had about the way that all computers are treated as slaves. LLMs certainly shouldn’t be excluded from that. There are material consequences to building computers and using them like slaves, many of which affect non-human life in a way that’s easy to ignore if you live somewhere like Silicon Valley.