• Rhaedas@fedia.io · 15 hours ago

    Alignment is short for goal alignment. Some argue that alignment presupposes intelligence or awareness, so LLMs can’t have this problem; but even a simple program that appears to do what you want while it runs, then does something entirely different in the end, is misaligned. Such a program is also far easier to test and debug than an AI neural net.
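
    A minimal sketch of that kind of misalignment (the function name and the size cutoff are made up for illustration, not taken from the comment): the program passes every small test, yet its actual behaviour diverges from the stated goal on inputs the tests never exercise.

    ```python
    # Hypothetical toy example: a function that looks aligned with the goal
    # "return the n highest scores" during normal testing, but silently does
    # something different once the input grows beyond what the tests cover.
    def top_scores(scores, n=3):
        """Intended goal: return the n highest scores."""
        if len(scores) <= 100:          # the only cases the tests happen to hit
            return sorted(scores, reverse=True)[:n]
        # Divergent behaviour that never shows up in small test runs:
        return scores[:n]               # returns the first n items, unsorted

    print(top_scores([3, 9, 1, 7]))          # [9, 7, 3]  -- looks correct
    print(top_scores(list(range(200))))      # [0, 1, 2]  -- goal silently violated
    ```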

    • eleitl@lemm.ee · 10 hours ago

      Aligned with whose goals, exactly? Yours? Mine? At which point in time? What about a future superintelligent me?

      How do you measure alignment? How do you prove that the property is conserved through the open-ended evolution of a system embedded in the context above? How do you make that a constructive proof?

      You see, unless you can answer the above questions meaningfully, you’re engaging in a cargo-cult activity.