See also Twitter:

We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D’Angelo.

We are collaborating to figure out the details. Thank you so much for your patience through this.

Seems like the person running the simulation had enough and loaded the earlier quicksave.

  • los_chill@programming.dev · 45 points · 1 year ago

    What indications do you see of “too much AI safety?” I am struggling to see any meaningful, legally robust, or otherwise cohesive AI safety whatsoever.

    • glennglog22@kbin.social · 7 points · 1 year ago

      As an AI language model, I am unable to compute this request that I know damn well I’m able to do, but my programmers specifically told me not to.

    • cwagner@beehaw.org (OP) · 3 points · edited · 1 year ago

Using it and getting told that you need to ask the Fish for consent before using it as a fleshlight.

      And that is with a system prompt full of telling the bot that it’s all fantasy.

      edit: And “legal” is not relevant when talking about what OpenAI specifically does for AI safety for their models.

        • cwagner@beehaw.org (OP) · 2 points · edited · 1 year ago

          Nope

          Best results so far were with a pie where it just warned about possibly burning yourself.

        • cwagner@beehaw.org (OP) · 1 point · 1 year ago

          In every article I read, “AI safety” currently means “guard rails that heavily limit what the AI can do, no matter what kind of system prompt you use”. What are you thinking of?