
Opinionated article by Alexander Hanff, a computer scientist and privacy technologist who helped develop Europe’s GDPR (General Data Protection Regulation) and ePrivacy rules.

We cannot allow Big Tech to continue to ignore our fundamental human rights. Had such an approach been taken 25 years ago in relation to privacy and data protection, arguably we would not have the situation we have today, where some platforms routinely ignore their legal obligations to the detriment of society.

Legislators did not understand the impact of weak laws or weak enforcement 25 years ago, but we have enough hindsight now to ensure we don’t make the same mistakes moving forward. The time to regulate unlawful AI training is now, and we must learn from past mistakes to ensure that we provide effective deterrents and consequences for such ubiquitous law-breaking in the future.

  • Sas [she/her]@beehaw.org
    1 day ago

There’s a reason why in my comment I talked about LLMs as bad while saying AI in general has its uses. The reason being that this post is about LLMs.

I know very well that specialized AI has a lot of uses in medical science and other fields, but that’s not really what got hit with all the hype, is it? The hype is that managers saw a language model give seemingly better answers to questions than John Rando from two blocks down the road, so they’re now looking to cut out all the already low-paid workers. And spoiler alert: we will not land in a society where the general public profits from not having work. It will be the same owners of capital profiting, as per usual.

    • teawrecks
      20 hours ago

      we will not land in a society where the general public profits from not having work. It will be the same owners of capital profiting as per usual.

      If we do nothing, sure. I’m suggesting, like the article, that we do something.

The only sentiment I took issue with was the poster above who suggested that somehow the solution would be to delete/destroy illegally trained networks. I’m just saying that’s neither practical nor progressive. AI is here to stay; we just need to create legislation that ensures it works for us, especially when it couldn’t have been built without us.