• deweydecibel@lemmy.world
    7 months ago

    You can choose not to believe Bloomberg vets their sources but they’re not a tabloid or blog. When they print:

    Altman clashed with members of his board, especially Ilya Sutskever, an OpenAI co-founder and the company’s chief scientist, over how quickly to develop what’s known as generative AI, how to commercialize products and the steps needed to lessen their potential harms to the public, according to a person with direct knowledge of the matter. This person asked not to be identified discussing private information.

    There’s some credibility to it.

    But whatever, you want to believe there’s no actual reason for this and the board are all wack jobs. Fine.

    You don’t think it’s still worth pointing out the now stark, obvious evidence that there were never any true ethical safeguards here?

    The specifics of this story are less relevant than the overall takeaway: AI is a dangerous technology for many reasons, and it is in the hands of extremely shitty people, with no workable safeguards or oversight. And that’s a problem.

    So yeah. We’re gonna talk about that. A lot. Call it lazy if you like, I call it cutting through all the marketing bullshit that’s flooding social media all the damn time to remind readers the technology is not the problem, it’s the people behind it, and they will never regulate themselves.

    • NevermindNoMind@lemmy.world
      7 months ago

      They absolutely “clashed” about the pace of development. They probably “clashed” about whether employees should be provided free parking and the budget for office snacks. The existence of disagreements about various issues is not proof that any one disagreement was the reason for the ouster. Also, your Bloomberg quote cites a single anonymous source, so who knows about that even. Ilya told employees that the ouster was because Sam assigned two employees the same project and because he told different board members different opinions about the performance of one employee. I doubt that, but who the fuck knows. The entire piece is based on complete conjecture.

      The one thing we know is that the ouster happened without notice to Sam, without rumors about Sam being on the rocks with the board over the course of weeks or months, and without any notice to OpenAI’s biggest shareholder. All of that smacks of poor leadership and knee-jerk decision making. The board did not act rationally. If the concern was AI safety, there are a million things they could have done to address it. A Friday afternoon coup that ended up risking 95% of your employees running into the open arms of a giant for-profit monster probably wasn’t the smartest move if the concern was AI safety. This board shouldn’t be praised as some group of humanity’s saviors.

      AI safety is super important. I agree, and I think lots of people should be writing and thinking about that. And lots of people are, and they are doing it in an honest way. And I’m reading a lot of it. This column is just making up a narrative to shoehorn their opinions on AI safety into the news cycles, trying to make a bunch of EA weirdos into martyrs in the process. It’s dumb and it’s lazy.

    • Hackerman_uwu@lemmy.world
      7 months ago

      I believe we are all overthinking something very obvious.

      Anyone with even a little technical knowledge in this area knows what an absolute bullshit hype train ChatGPT is on right now.

      There isn’t a professional on the planet in this field whose CTO hasn’t insisted that some irrelevant nonsense get ChatGPT’d by the end of the quarter.

      Sam knows that too and that’s why he wants to make the money now, before everyone else catches on.

      The standard rule applies: follow the money.