• conciselyverbose@kbin.social · 10 months ago

    The paper [PDF], which includes voices from numerous academic institutions and several from OpenAI, makes the case that regulating the hardware these models rely on may be the best way to prevent their misuse.

    Fuck every single one of them.

    No, restricting computer hardware is not acceptable behavior.

      • conciselyverbose@kbin.social · edited · 10 months ago

        Because it’s insane, unhinged fear mongering, not even loosely connected to anything resembling reality. LLMs have nothing in common with intelligence.

        And because the entire premise is an obscene attempt to monopolize hardware that literal lone individuals should have as much access to as they can pay for.

        The only “existential threat” is corporations monopolizing the use of simple tools that anyone should be able to replicate.

        • Spiralvortexisalie@lemmy.world · 10 months ago

          Companies like OpenAI are only engaging in these discussions to pursue regulatory capture. It does look odd: OpenAI’s board got rid of Altman over ethical concerns, he launched a coup to usurp them, and then he started implementing dubious changes such as ending their prohibition on use for warfare. After letting Altman run amok, the people on OpenAI’s payroll (the researchers) now argue that regular consumers’ access to LLMs needs either a remote-control kill switch or pre-approval from a yet-to-be-determined board of “AI Leaders.”