Hello folks. I want to hear your opinions about the advances in AI and how they make you feel. This is a community about privacy, so I already kind of know that you’re against it, at least when AI is implemented in a way that violates people’s privacy.

I recently attended a work-related event, and the conclusion was that AI will come and change everything in our field. A field which has generally been dominated by human work, although various software has been used for it. Without revealing too much, the event was for people who work with texts. I’m a student, but the event was for people working in the field I plan to work in in the future. The speakers did not talk about privacy concerns (not in detail, at least) or things such as micro work (people who get paid very little to clean illegal content out of AI training data, for example).

You can probably guess that I care about privacy: I’m writing this on Lemmy, for a privacy community. I’m a Linux user (the first distro I used was Ubuntu 10.04) and I transitioned to Linux as my daily driver in November last year. I care about the Open Source community (most of the programs I used on Windows were FOSS). I donate to the programs I use. I use a privacy-respecting search engine, and uBlock and Privacy Badger on Firefox. I use a secure instant messenger and detest Facebook. But that’s where it ends, because I use a stock Android phone. At least I care about these things, though, and I’m eager to learn more. When it comes to privacy, I’m pretty woke, for lack of a better word.

But AI is coming, or rather, it’s already here. Granted, the people who spoke at that event were somewhat biased, as they worked in the AI industry, so even if they weren’t marketing ChatGPT, they were trying to hype up the industry. But apparently, AI can already help so-called knowledge workers. It can help with brainstorming and generating ideas. It can produce translations, it can summarize texts, it can give tips…

The bottom line seems to be that I need to start using AI, because either I will use it and keep my job in the future, or I will not use it and risk being made redundant by AI at some point in time.

But I want to get other perspectives. What are your views on AI? Has it affected your job, and if so, how? I know some people here have said that AI is just a bunch of algorithms, that it’s all hype, and that the bubble will burst eventually. But until it does, it seems it’ll have a pretty big impact on how things work. Can we choose to ignore it?

  • The Doctor@beehaw.org · 11 months ago

    I think it’s interesting that limited AI technology has made it to street level. There was talk of keeping it entirely in-house as a “secret sauce” for competitive advantage (I used to work for one of the companies that was working on large-scale practical LLMs), so when OpenAI started gaining notice it raised an eyebrow.

    Security-wise it’s a pretty big step backward, because the code it hashes together tends to have older vulns in it. It’s not like secure software development practices are commonly employed right now anyway. I’m not sure when that’s going to become a huge problem, but it’s just a matter of time.

    One privacy-compromising problem has already been stumbled over (ChatGPT could be tricked into dumping its memory buffers containing other conversations into a chat session) and there will undoubtedly be more in the future. This also has implications for business uses (because folks are already putting sensitive client information into chats with LLMs, which means it’s going to leak eventually).

    I really hope that entirely self-hosted LLMs become common and easy to deploy. If nothing else, they’re great for analyzing and finding stuff in your personal data that other forms of search aren’t well suited for. Then again, I hoard data so maybe I’m projecting a little here.

    As for my job, I’m of two minds about it. LLMs can already be used for generating boilerplate for scripts, Terraform plans, and things like that (but then again, keeping a code repo of your own boilerplate files is a thing, or at least it used to be). It might be useful for rubber ducking problems (see also, privacy compromise).
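    For what it’s worth, the “code repo of your own boilerplate” approach can stay as low-tech as a folder of templated files plus a tiny fill-in script. A minimal sketch in Python (the template names and fields here are invented for illustration, not anything standard):

    ```python
    from string import Template

    # A minimal personal boilerplate store: named templates with $placeholders.
    # (Hypothetical example -- add your own Terraform stubs, script headers, etc.)
    TEMPLATES = {
        "bash_script": Template(
            "#!/usr/bin/env bash\n"
            "set -euo pipefail\n"
            "\n"
            "# $description\n"
        ),
    }

    def render(name: str, **fields: str) -> str:
        """Fill a stored boilerplate template with project-specific values."""
        return TEMPLATES[name].substitute(fields)

    print(render("bash_script", description="Rotate logs nightly"))
    ```

    No API keys, no data leaving your machine, and it never hallucinates a flag that doesn’t exist.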

    It wouldn’t surprise me if LLMs become a big reason for layoffs, if they’re not already. LLMs don’t have to be paid, don’t have tax overhead, don’t get sick, don’t go BOFH, and don’t unionize. The problem with automating yourself out of a job is that you no longer have a job, after all. So I think it’s essential for mighty nerds to invest the time into learning a trade or two just in case (I definitely am - companies might be shooting themselves in the foot by laying off their sysadmins, but if it means bigger profits for shareholders they’ve demonstrated that they’re more than happy to do so).