Hello folks. I want to hear your opinions about the advances in AI and how they make you feel. This is a community about privacy, so I already kind of know that you’re against it, at least when AI is implemented in such a way that it violates people’s privacy.

I recently attended a work-related event, and the conclusion was that AI will come and change everything in our field, a field which has generally been dominated by human work, although various software has been used for it. Without revealing too much, the event was for people who work with texts. I’m a student, but the event was for people working in the field I plan to work in in the future. The speakers did not talk about privacy concerns (not in detail, at least) or about things such as micro work (people who get paid very little to clean illegal content out of AI training data, for example).

You can probably guess that I care about privacy: I’m writing this on Lemmy, for a privacy community. I’m a Linux user (the first distro I used was Ubuntu 10.04), and I transitioned to Linux as my daily driver in November last year. I care about the Open Source community (most of the programs I used on Windows were FOSS). I donate to the programs I use. I use a privacy-respecting search engine and use uBlock and Privacy Badger on Firefox. I use a secure instant messenger and detest Facebook. But that’s where it ends, because I use a stock Android phone. But at least I care about these things, and I’m eager to learn more. When it comes to privacy, I’m pretty woke, for lack of a better word.

But AI is coming, or rather, it’s already here. Granted, the people who spoke at that event were somewhat biased, as they worked in the AI industry, so even if they weren’t marketing ChatGPT, they were trying to hype up the industry. But apparently, AI can already help so-called knowledge workers. It can help with brainstorming and generating ideas. It can produce translations, it can summarize texts, it can give tips…

The bottom line seems to be that I need to start using AI, because either I will use it and keep my job in the future, or I will not use it and risk being made redundant by AI at some point in time.

But I want to get other perspectives. What are your views on AI? Has it affected your job, and if so, how? I know some people here have said that AI is just a bunch of algorithms, that it’s just hype, and that the bubble will burst eventually. But until it does, it seems it’ll have a pretty big impact on how things work. Can we choose to ignore it?

  • walter_wiggles@lemmy.nz · 26 points · 11 months ago

    I’m waiting for ChatGPT to start slipping product recommendations/mentions into its responses. It’s only a matter of time before ads ruin whatever good is in AI.

    • muntedcrocodile@lemmy.world · 2 points · 11 months ago

      They won’t put that into the API, and I exclusively use FOSS tools that utilise the API. For just regular chat I can recommend BetterChatGPT; GitHub will host an instance for you for free. The API does have some costs, but you get the newer models far cheaper than ChatGPT+.
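
      For anyone curious what calling the API directly looks like, here’s a minimal Python sketch. The endpoint and request shape are OpenAI’s standard chat completions API; the model name and prompt are placeholders, so swap in whatever you actually use:

      ```python
      import os
      import requests

      # Minimal sketch of a direct API call (pay per token, no ChatGPT+
      # subscription). Assumes OPENAI_API_KEY is set in your environment;
      # the model name below is just an example.
      resp = requests.post(
          "https://api.openai.com/v1/chat/completions",
          headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
          json={
              "model": "gpt-4o-mini",
              "messages": [{"role": "user", "content": "Hello!"}],
          },
          timeout=60,
      )
      resp.raise_for_status()
      print(resp.json()["choices"][0]["message"]["content"])
      ```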

  • viking@infosec.pub · 14 points · 11 months ago

    I’m in the medical device field, and user error is the most common patient killer. No matter how many treatment recommendations you put into the UI, Dr. Smartass overrides it all and then you have a casualty. Can’t wait for AI to fix stupid.

    • Deckweiss@lemmy.world · 15 up / 1 down · 11 months ago (edited)

      At the radiology clinic where my dad worked, they had a trial with image recognition trained to detect things in MRI images. The AI would draw a red circle around every suspicious area it detected.

      What they noticed is that the doctors started to look only at the red circles and would miss a lot more of the non-obvious nuances, which resulted in more completely wrong diagnoses and lower diagnostic quality overall.

      So I doubt that it will fix stupid for now. Even if it is implemented as a sanity-check review after the doctor has done his work, they might get sloppier when relying on the AI check to catch their oversights.

      Afaik the best way to improve the quality of a doctor’s work is longer education and more time per patient. Or more rigorous processes where multiple doctors have to give their independent analysis of every patient. But any of that is too expensive for profit-oriented commercial clinics.

      Sadly it is more economically viable to diagnose as quickly as possible, let some patients die due to errors and fight the lawsuits, than to employ twice as many highly skilled doctors.

  • shootwhatsmyname@lemm.ee · 12 points · 11 months ago

    I’m already using AI for coding. It helps me find AND fix bugs much faster, while teaching me exactly what I did wrong and why the solution works. It’s insane.

    I think the only thing that could really stop or slow down AI’s impact on jobs would be some sort of large economic crash, war, or a major supply-chain issue with computing parts. It’s proving to have actual, real-world use cases now in many lines of work. And the sky’s the limit.

    • taladar@sh.itjust.works · 5 up / 1 down · 11 months ago

      I think what might stop it is the end of the free cloud AIs, once the ones running them realize they are losing money that way. AI uses up a ridiculous amount of computing resources for what it does, so unless we manage to optimize it better soon, it might go away again in many areas where it is not really needed, and/or be replaced with more traditional approaches to solving the same problems.

  • BlahajChompies@feddit.de · 11 points · 11 months ago

    I work in university admissions, and the programs require a motivation letter. While I absolutely hate writing cover letters or motivation letters myself, I do see the advantages for admissions (although I absolutely hate the system).

    Mainly, it is a great way to give applicants with weaker grades a shot. A good motivation letter where I get a feeling for who they are will almost always automatically put them higher in my recommendations. However, I am so sick of the same ChatGPT motivation. And it is always the same. Oh, you honed your ability to do this? You’re drawn to it because of that? I have read your letter 50 times before, and I don’t mean the contents. Let’s be real, most people do not have an inspiring story about why they want to study, and that is okay; the program sounding good is a perfectly valid reason. But show me who you are (or who you want me to think you are). I have really developed an adverse reaction to these AI letters. I hate them because I know I’m reading a robot’s “thoughts”. By all means use the tools available to polish, but don’t polish out your personality.

    This will lead to motivation letters being abolished. And while for most people that’s great, and a CV should speak for itself, it will remove a chance to get into a prestigious program for people who are not perfect on paper or who didn’t have the luck to grow up rich.

    • taladar@sh.itjust.works · 9 up / 1 down · 11 months ago

      That whole motivation letter thing honestly sounds more like AI exposing a flaw in the education system and less like a problem with AI in general.

      You might frame it as people who are not perfect getting a chance, but I would frame it as people who are better at words than at exams getting an edge. The genius but socially awkward person, bored to tears by the exams and kept from writing the letter by their anxiety, still won’t get in.

      • phdepressed@sh.itjust.works · 3 up / 2 down · 11 months ago

        Boredom is an excuse; the reality is that no matter where or what you work as, there will be boring things involved at some point, to some degree. We are hundreds of years past the time when nobles would sponsor some eclectic dude to do weird science/art just to say they were that weirdo’s sponsor. You have to be able to work past boredom to function in society.

        A “genius” who can’t even write a letter isn’t meaningful. How can they communicate their ideas and thoughts if they can’t write a letter? If Newton had never published the Principia, would we know him? No, we’d have to wait for the guy who could talk and write.

        • taladar@sh.itjust.works · 2 up / 1 down · 11 months ago

          Counterpoint: most of these social norms, particularly those related to academic institutions, are really not about knowledge or skill at all. They just build up a tolerance for the bullshit we are put through later in life: office politics, following the letter of instructions without thinking for ourselves in order to stroke a customer’s or manager’s ego, and similar things completely unrelated to the actual productivity of companies.

  • nis@feddit.dk · 6 points · 11 months ago

    I think AI will help a lot with the boring stuff and leave the bigger/more interesting/more creative work to us. It will take some time to work all this out, though.

    When I went to school as a kid, the degree I got at university didn’t exist yet. When I finished university, the job I now have didn’t exist yet. The world has always changed, and always will.

  • JASN_DE@lemmy.world · 6 points · 11 months ago

    I simply cannot see how using an AI that is not locally run and properly contained would work with the secrecy requirements in the (wider) engineering fields. There would certainly be situations where it could help, e.g. the mentioned translation work. Sure, you’d still need an actual human to check what the AI produced, but I can see time savings in those areas.

    Many programs used in those fields already use algorithms and rule and filter sets in the daily workflow, so maybe that could be further improved. But overall? No, very unlikely to work.

  • Sims@lemmy.ml · 3 points · 11 months ago

    On the big scale there’s only one main concern for the current system: can people adapt to new knowledge/functions as fast as changes occur? If they cannot, the labor market collapses. It took a while to adapt to cars, but people succeeded, and someone might suggest that we can do the same now. I doubt that.

    We are already in a very accelerated world compared to then, and what’s worse is that the AI boom has just started and will accelerate faster and faster. ALL levels of the entire AI tech stack are accelerating: hardware, algorithms, models, cognitive networks (agents), and a shitload of new papers every single day. All the big tech companies are using current AI on all levels to accelerate development of the next AI, which will develop its successor, and so on.

    Besides that, a lot of other global events will push the system towards a transition to something different.

    Slowness in adopting AI in business is perhaps the only delay workers can hope for, so imho you can only prolong your current job and try to adapt as far as you can, not keep it. The timeline is difficult to predict, though.

    • ToxicWaste@lemm.ee · 1 point · 11 months ago

      While I agree with most of what you said, I think you might be falling into the trap of assuming the curve continues as it has.

      Like most technology, ANNs will follow a sigmoid curve. Turing was already working with the same theories. When I did my education in IT, we had really interesting ANNs working, but only nerds would get excited about them. Now ChatGPT has surprised the rest of the world, and I would assume we are in the steep part of the sigmoid function.

      But the problem is that we can only determine where we were by looking back. There is no way to say whether NOW is just the start, the middle, or towards the end of the curve.
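
      A quick numerical sketch of that point (my own toy example with the standard logistic function, nothing specific to ANNs): on its early stretch, an S-curve is almost indistinguishable from pure exponential growth, so you can’t tell from inside the boom how far away saturation is.

      ```python
      import numpy as np

      def logistic(t):
          # Standard S-curve: slow start, steep middle, saturation at 1.
          return 1.0 / (1.0 + np.exp(-t))

      # Early on, the logistic is numerically almost identical to a pure
      # exponential, so from inside the growth phase you cannot tell how
      # close saturation is.
      t = np.linspace(-6, -2, 5)
      print(np.round(logistic(t), 5))  # [0.00247 0.00669 0.01799 0.04743 0.1192 ]
      print(np.round(np.exp(t), 5))    # [0.00248 0.00674 0.01832 0.04979 0.13534]
      ```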

      What I can say is that right now, LLMs and other implementations of AI are able to replace a trainee in my line of work. They still need a lot of supervision and are a tool which can speed up work. This may lead to other problems: if companies decide not to take on the expensive task of training people and replace them with cheaper AI, at some point we will run out of well-trained veterans.

  • phdepressed@sh.itjust.works · 3 up / 1 down · 11 months ago

    I’m in the life sciences, and AI was recently disallowed for grant writing and papers because of IP concerns. Additionally, the chance of it hallucinating fake papers, while being unable to evaluate the real ones it trawls through, makes it difficult to use at a professional level. ML is very helpful in certain design/prediction/measurement areas, but I’m not worried that these types of AI will steal a job. I am a bit worried that learning via these AIs will cause issues, though.

  • technomad@slrpnk.net · 2 points · 11 months ago

    What I do for work is very niche, so imagining exactly how it will be affected is kind of difficult. There is design work above me, which very well could be affected. I kind of get the impression that the advancements of AI will possibly lock out any kind of lateral moves that I might be able to make…

    Automation would be a bigger concern for what I currently do, but the robots still have a ways to go (I hope).

  • The Doctor@beehaw.org · 2 points · 11 months ago

    I think it’s interesting that limited AI technology has made it to street level. There was talk of keeping it entirely in-house as a “secret sauce” for competitive advantage (I used to work for one of the companies that was working on large-scale practical LLMs), so when OpenAI started gaining notice, it raised an eyebrow.

    Security-wise it’s a pretty big step backward, because the code it hashes together tends to have older vulns in it. It’s not like secure software development practices are commonly employed right now anyway. I’m not sure when that’s going to become a huge problem, but it’s just a matter of time.

    One privacy-compromising problem has already been stumbled over (ChatGPT could be tricked into dumping its memory buffers containing other conversations into a chat session), and there will undoubtedly be more in the future. This also has implications for business uses (because folks are already putting sensitive client information into chats with LLMs, which means it’s going to leak eventually).

    I really hope that entirely self-hosted LLMs become common and easy to deploy. If nothing else, they’re great for analyzing and finding stuff in your personal data that other forms of search aren’t well suited for. Then again, I hoard data so maybe I’m projecting a little here.
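
    To make “self-hosted” concrete, here’s a minimal sketch of that kind of personal-data search, assuming an Ollama server running locally (the model name and file path are just examples):

    ```python
    import requests

    # Minimal sketch: ask a locally hosted model about your own data via
    # Ollama's REST API. Assumes `ollama serve` is running and the model
    # below has been pulled; nothing ever leaves your machine.
    notes = open("journal.txt").read()  # hypothetical personal data file

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3",
            "prompt": f"Here are my notes:\n{notes}\n\nWhere did I say I put the spare key?",
            "stream": False,
        },
        timeout=300,
    )
    resp.raise_for_status()
    print(resp.json()["response"])
    ```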

    As for my job, I’m of two minds about it. LLMs can already be used for generating boilerplate for scripts, Terraform plans, and things like that (but then again, keeping a code repo of your own boilerplate files is a thing, or at least it used to be). It might be useful for rubber ducking problems (see also, privacy compromise).

    It wouldn’t surprise me if LLMs become a big reason for layoffs, if they’re not already. LLMs don’t have to be paid, don’t have tax overhead, don’t get sick, don’t go BOFH, and don’t unionize. The problem with automating yourself out of a job is that you no longer have a job, after all. So I think it’s essential for mighty nerds to invest the time into learning a trade or two just in case (I definitely am - companies might be shooting themselves in the foot by laying off their sysadmins, but if it means bigger profits for shareholders they’ve demonstrated that they’re more than happy to do so).

  • BaumGeist@lemmy.ml · 2 points · 11 months ago

    Should you ignore any given AI? Yes.

    Can you? Also yes. Except the one your employer gets duped by.

    Should you ignore the technological revolution that is Machine Learning Algorithms in general? No, not if you’re willing to use other tech anyway despite its negative impact on privacy.

    Can you? Also probably no, not if you want to eat.

    Has AI affected my job? No, not yet. Well, not directly, although now every vendor uses AI to deal with customer service. If I worked at a larger company in my field, they’d probably include AI somewhere in the process.

    My thoughts on it all: let’s use the correct descriptor, Machine Learning Algorithms, since “AI” is just a marketing term to generate hype. I like MLAs; they’re a neat tool and a cool toy. It’s also possible to own and run one on your own PC in the privacy of your own home. Do that. Run the models, generate content, learn how to use the tool, learn the CS and math theory behind it, understand it, have fun. Be a scientist, learn by doing, get your hands dirty, understand that which you fear. Oftentimes our fears really just boil down to our lack of understanding.

    We’re in a painful growth stage right now. Operators are still testing boundaries, and those of us affected are trying to find ways to reassert those boundaries. Whether it’s enhanced tracking algorithms, harvesting data for training, or stealing intellectual property, it’s all boundary testing. Give it a few years, and there will be more compromise, and it will seem more mundane to see MLAs in the wild. So it’s better to make peace with them now than to be that boomer who still refuses to learn how to use the internet.

    Or if you prefer the privacy-oriented incentives: it’s called “Adversarial Machine Learning” and it’s cool as fuck. Sometimes it’s about figuring out how to craft inputs to exploit an MLA, other times it’s about using your own MLA to fuck with someone else’s.
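
    The classic entry point is the Fast Gradient Sign Method: nudge every input value a tiny step in whichever direction increases the model’s loss most. A minimal PyTorch sketch of the core trick (the model and data are whatever you’re experimenting with):

    ```python
    import torch
    import torch.nn.functional as F

    # Minimal Fast Gradient Sign Method sketch: craft an adversarial copy
    # of input x that a trained classifier is more likely to get wrong.
    def fgsm(model, x, y, eps=0.03):
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)  # how wrong is the model now?
        loss.backward()                      # gradient of loss w.r.t. the input
        # Step each input value by eps in the direction that increases the
        # loss most, then clamp back into a valid pixel range.
        return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
    ```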

    The point is: you don’t learn anything by sitting around pontificating, you learn by engaging with things. If you want to learn about me or the users of c/Privacy, this is a great way. If you want to have your fears validated, this is a great way. If you want to grow as a person, lead your best life, and not be ruled by fear, then the only way is to learn about things you don’t already understand even if—no—especially if it’s things that are used to do evil.

  • lapislazuli (OP) · 1 point · 11 months ago (edited)

    I can’t write much today, but I just want to thank everyone for their input. I know that AI means different things for different professions and different people. In coding, it can be quite helpful. But in a language-based profession, it can be problematic, because it can output fluent and convincing language while getting all the facts wrong. Or it can sound very artistic, but if you look at it more closely, it’s not all that original, or the language might become impoverished, and so on and so forth. In tedious and repetitive jobs, people are perhaps more willing to hand the work over to AI, which is what robots are doing.

    I’ll read your replies more closely tomorrow and reply to each one, if I can. Thanks for the discussion!

  • nyakojiru@lemmy.dbzer0.com · 1 point · 11 months ago

    None. Someone still has to do the stupid and complex manual things in stupid corporate software that runs stupid corporations.

  • Daxtron2@startrek.website · 1 point · 11 months ago

    I have been using LLMs for code generation since the initial release of ChatGPT, and it has massively improved my quality of life at work. More recently I’ve been testing out local 7B and 14B LLMs for code generation, which, while not nearly as good as the API-based ones, are still good enough for basic tasks like line completion.
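
    If anyone wants to try the same, here’s a minimal local-completion sketch with llama-cpp-python; the GGUF path is a placeholder for whatever 7B/14B code model you have downloaded:

    ```python
    from llama_cpp import Llama

    # Minimal local line-completion sketch with llama-cpp-python. The GGUF
    # path is a placeholder; substitute whatever local code model you use.
    llm = Llama(model_path="models/codellama-7b.Q4_K_M.gguf", n_ctx=2048)

    prefix = "def fizzbuzz(n: int) -> str:\n    "
    out = llm(prefix, max_tokens=48, stop=["\n\n"], temperature=0.2)
    print(prefix + out["choices"][0]["text"])
    ```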