ChatGPT developer OpenAI has parted ways with its CEO Sam Altman. The AI company justified the decision by saying he had not always been candid.

The US company OpenAI has dismissed its CEO Sam Altman with immediate effect. The company's previous Chief Technology Officer, Mira Murati, has been appointed interim CEO, the board of the ChatGPT developer announced. Altman will also give up his seat on the board and leave OpenAI.

Following an internal review, the board concluded that Altman had not always been candid in his communications with it, the board wrote in a blog post. This had hindered the board in carrying out its duties. As a result, it no longer had confidence in Altman's ability to continue leading OpenAI. The board said it remained fully committed to the mission of ensuring that all of humanity benefits from artificial intelligence.

[…]

  • Sibbo

    OpenAI is a nonprofit? Really? Will they eventually make their training data public, then? Or will they keep the data, turn it into a for-profit at some point, and cash in? And use “nonprofit” only to steal users’ trust?

    • Haven5341@feddit.deOP

      OpenAI is a nonprofit?

      Yes. They founded the for-profit arm to fund the research.

      We founded the OpenAI Nonprofit in late 2015 with the goal of building safe and beneficial artificial general intelligence for the benefit of humanity. A project like this might previously have been the provenance of one or multiple governments—a humanity-scale endeavor pursuing broad benefit for humankind.

      Seeing no clear path in the public sector, and given the success of other ambitious projects in private industry (e.g., SpaceX, Cruise, and others), we decided to pursue this project through private means bound by strong commitments to the public good. We initially believed a 501(c)(3) would be the most effective vehicle to direct the development of safe and broadly beneficial AGI while remaining unencumbered by profit incentives. We committed to publishing our research and data in cases where we felt it was safe to do so and would benefit the public.

      We always suspected that our project would be capital intensive, which is why we launched with the goal of $1 billion in donation commitments. Yet over the years, OpenAI’s Nonprofit received approximately $130.5 million in total donations, which funded the Nonprofit’s operations and its initial exploratory work in deep learning, safety, and alignment.

      It became increasingly clear that donations alone would not scale with the cost of computational power and talent required to push core research forward, jeopardizing our mission. So we devised a structure to preserve our Nonprofit’s core mission, governance, and oversight while enabling us to raise the capital for our mission:

      • The OpenAI Nonprofit would remain intact, with its board continuing as the overall governing body for all OpenAI activities.
      • A new for-profit subsidiary would be formed, capable of issuing equity to raise capital and hire world class talent, but still at the direction of the Nonprofit. Employees working on for-profit initiatives were transitioned over to the new subsidiary.
      • The for-profit would be legally bound to pursue the Nonprofit’s mission, and carry out that mission by engaging in research, development, commercialization and other core operations. Throughout, OpenAI’s guiding principles of safety and broad benefit would be central to its approach.
      • The for-profit’s equity structure would have caps that limit the maximum financial returns to investors and employees to incentivize them to research, develop, and deploy AGI in a way that balances commerciality with safety and sustainability, rather than focusing on pure profit-maximization.
      • The Nonprofit would govern and oversee all such activities through its board in addition to its own operations. It would also continue to undertake a wide range of charitable initiatives, such as sponsoring a comprehensive basic income study, supporting economic impact research, and experimenting with education-centered programs like OpenAI Scholars. Over the years, the Nonprofit also supported a number of other public charities focused on technology, economic impact and justice, including the Stanford University Artificial Intelligence Index Fund, Black Girls Code, and the ACLU Foundation.

      In that way, the Nonprofit would remain central to our structure and control the development of AGI, and the for-profit would be tasked with marshaling the resources to achieve this while remaining duty-bound to pursue OpenAI’s core mission. The primacy of the mission above all is encoded in the operating agreement of the for-profit, which every investor and employee is subject to: link to the operating agreement

      https://openai.com/our-structure

      And use “nonprofit” only to steal users’ trust

      Sam Altman is the entrepreneur and investor, and he was dismissed under the leadership of the chief scientist.