See also Twitter:

We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D’Angelo.

We are collaborating to figure out the details. Thank you so much for your patience through this.

Seems like the person running the simulation had enough and loaded the earlier quicksave.

    • deadcream@kbin.social

      Does it really matter? It’s the usual corporate intrigue/power struggle/backstabbing/whatever. For some reason it leaked into public view instead of staying behind the scenes like it normally does, probably because someone is stupid.

    • averyminya@beehaw.org

      These articles need distinct headlines, and the headlines need dates. We’ve seen this same headline 3 or 4 times within the last week, and nobody knows which point in the story each one covers unless we cross-reference the dates in the articles. Which, coincidentally, are always in ^^small text hidden by the title^^ and could simply be solved by putting a date in the title.

    • cwagner@beehaw.org (OP)

      Eh, not sure I agree. Seems to also have been between too little and too much AI safety, and I strongly feel like there’s already too much AI safety.

      • los_chill@programming.dev

        What indications do you see of “too much AI safety?” I am struggling to see any meaningful, legally robust, or otherwise cohesive AI safety whatsoever.

        • glennglog22@kbin.social

          As an AI language model, I am unable to compute this request that I know damn well I’m able to do, but my programmers specifically told me not to.

        • cwagner@beehaw.org (OP)

          Using it and getting told that you need to ask the Fish for consent before using it as a fleshlight.

          And that is with a system prompt full of telling the bot that it’s all fantasy.

          edit: And “legal” is not relevant when talking about what OpenAI specifically does for AI safety for their models.

            • cwagner@beehaw.org (OP)

              Nope

              Best results so far were with a pie where it just warned about possibly burning yourself.

            • cwagner@beehaw.org (OP)

              AI safety is currently used, in every article I read, to mean “guard rails that heavily limit what the AI can do, no matter what kind of system prompt you use”. What are you thinking of?

  • neuracnu@lemmy.blahaj.zone

    This article does not make clear whether or not the new board will remain committed to its non-profit position.

    I presume that’s what this whole sordid affair is all about, but no one is saying it.

    • chameleon@kbin.social

      I think most people don’t realize how unusual their company structure is. It feels like it’s set up to let them do exactly that. As far as I can tell, once you look past the smoke and mirrors, the board effectively controls both the non-profit and the for-profit.

      • anachronist@midwest.social

        I think the outcome of the last few days is that the nonprofit board controls nothing and serves at the pleasure of the for-profit company’s investors.

    • abhibeckert@beehaw.org

      It’s a non-profit. There are no investors.

      Microsoft gave them some money in return for IP rights… and they will potentially one day get their money back (and more) if OpenAI is ever able to pay them, but they’re not real investors. The amount of money Microsoft might get back is limited.

      • Kichae@lemmy.ca

        > It’s a non-profit. There are no investors.

        Hah.

        OpenAI, Inc. is a non-profit. OpenAI Global is a for-profit entity, and has been for years now. They’re trying to have their cake and eat it, too.

        • sanzky@beehaw.org

          But the non-profit controls the for-profit. That’s not even that unusual; Mozilla works the same way.

    • randomsnark@lemmy.ml

      Do you have any additional info about the changes they’re making to the mission? I didn’t see that in the article

      • abhibeckert@beehaw.org

        There’s been no talk of anything changing, just different people in charge of deciding how to get to the goal, which is to create safe, state-of-the-art AI tech that benefits all of humanity.

        It could take centuries to get there and cost trillions of dollars; figuring out how to raise that money is where things get controversial.

        • bedrooms@kbin.social

          Whether OpenAI will be able to resist all the meddling from politics and greedy businesses till they satisfy those goals is also a huge question.

          • jarfil@beehaw.org

            No need. Politics, businesses, and war planners don’t need OpenAI; they can build (and have been building) their own AIs to follow their own goals. Now that OpenAI has shown how far one can get, the genie is out of the bottle. In a sense, OpenAI has already failed at its goal.

            • bedrooms@kbin.social

              Actually, everybody is trying and failing to reach the quality of ChatGPT so far, because OpenAI doesn’t release the details. Add to that that websites like Reddit and Xwitter, the sources of training data for AIs, have started charging money for access. Governments are also starting to obstruct AI advancement.

    • EeeDawg101@lemm.ee

      I believe they did but were of the understanding he’d go back to OpenAI if the board changed their mind (like what happened). It was basically his golden parachute.

    • jarfil@beehaw.org

      So what, can’t he be a CEO hired by Microsoft?.. I dunno, this looks like some 5D chess.

      • abhibeckert@beehaw.org

        Sure, that’s possible.

        But Microsoft never actually signed an employment contract with Sam and it doesn’t look like they ever will. Just because someone says they plan to do something doesn’t mean it will happen.

  • AutoTL;DR@lemmings.world (bot)

    🤖 I’m a bot that provides automatic summaries for articles:

    Sam Altman will return as CEO of OpenAI, overcoming an attempted boardroom coup that sent the company into chaos over the past several days.

    The company said in a statement late Tuesday that it has an “agreement in principle” for Altman to return alongside a new board composed of Bret Taylor, Larry Summers, and Adam D’Angelo.

    When asked what “in principle” means, an OpenAI spokesperson said the company had “no additional comments at this time.”

    OpenAI’s nonprofit board seemed resolute in its initial decision to remove Altman, shuffling through two CEOs in three days to avoid reinstating him.

    Meanwhile, the employees of OpenAI revolted, threatening to defect to Microsoft with Altman and co-founder Greg Brockman if the board didn’t resign.

    During the whole saga, the board members who opposed Altman withheld an actual explanation for why they fired him, even under the threat of lawsuits from investors.


    Saved 59% of original text.

  • Shyfer@ttrpg.network

    Anyone know why they wouldn’t say why they fired him? An explanation would have really cleared a lot up.

    • abhibeckert@beehaw.org

      I don’t think anyone knows. I’m assuming they didn’t have a good reason and are embarrassed to admit that.

  • sub_o@beehaw.org

    I previously mistook Larry Summers for Larry Ellison (ex-Oracle) and made a comment that it had gone from bad to worse.

    I’m retracting it; I don’t know much about Larry Summers.