There are lots of articles about bad use cases of ChatGPT, even though Google has already provided the same things for decades.

Want to get bad medical advice for the weird pain in your belly? Google can tell you it’s cancer, no problem.

Do you want to know how to make drugs without a lab? Google even gives you links to stores where you can buy the materials for it.

Want some racism/misogyny/other evil content? Google is your ever helpful friend and garbage dump.

What’s the difference, apart from ChatGPT’s inability to link to existing sources?

Edit: Just to clear things up. This post is specifically not about the new use cases that come from AI. Sure, Google cannot automatically generate semi-functional mini programs, and Google will not write an entire fake paper for me. I am specifically talking about the “This will change the world” articles that cover things Google can already do exactly as well as ChatGPT can.

  • ConsciousCode@beehaw.org · 42 points · 1 year ago

    The hype cycle around AI right now is misleading. It isn’t revolutionary because of these niche one-off use cases; it’s revolutionary because it’s one AI that can do anything. The problem is that what it’s most useful for is boring to non-technical people.

    Take the library I wrote to create “semantic functions” from natural-language tasks. One of the examples I keep going back to in order to demonstrate its usefulness is:

    @semantic
    def list_people(text) -> list[str]:
        '''List the people mentioned in the given text.'''
    

    8 months ago, this would’ve been literally impossible. I could approximate it with thousands of lines of code using spaCy and other NLP libraries to do named-entity recognition (NER), maybe a dictionary of known names with fuzzy matching, some heuristics to rule out city names, or more advanced sentence-structure parsing for false positives, but the result would be guaranteed to be worse for significantly more effort. Here, I just tell the AI to do it and it… does. Just like that.

    But you can’t hype up an algorithm that does boring stuff like NLP, so people focus on the danger of AI (which is real, but laymen and the news focus on the wrong things), on how it’s going to take everyone’s jobs (it will, but that’s a problem with our system, which equates having a job with being allowed to live), on how it’s super-intelligent, etc. It’s all the business logic, the things that are hard to program but easy to describe, that will really show off its power.
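    For the curious, a decorator like that can be a surprisingly thin wrapper: it turns the docstring and call arguments into a prompt, sends it to a completion backend, and parses the reply according to the return annotation. This is a minimal sketch under my own assumptions, not the commenter’s actual library — `make_semantic`, the prompt format, and the stub “model” are all illustrative:

    ```python
    from functools import wraps
    import inspect

    def make_semantic(complete):
        """Build a @semantic decorator around `complete`, a prompt -> str
        callable (a real LLM client in practice, a stub for testing)."""
        def semantic(func):
            sig = inspect.signature(func)

            @wraps(func)
            def wrapper(*args, **kwargs):
                bound = sig.bind(*args, **kwargs)
                # The docstring *is* the program: it becomes the task prompt,
                # and the call arguments become the input block.
                prompt = func.__doc__.strip() + "\n\n"
                for name, value in bound.arguments.items():
                    prompt += f"{name}: {value}\n"
                prompt += "\nAnswer with one item per line."
                raw = complete(prompt)
                # Parse the reply per the return annotation
                # (only list[str] is handled in this sketch).
                if sig.return_annotation == list[str]:
                    return [ln.strip() for ln in raw.splitlines() if ln.strip()]
                return raw
            return wrapper
        return semantic

    # Stub "model" so the sketch runs without an API key.
    semantic = make_semantic(lambda prompt: "Alice\nBob")

    @semantic
    def list_people(text) -> list[str]:
        '''List the people mentioned in the given text.'''
    ```

    Injecting `complete` instead of hard-coding a model call is what keeps the sketch testable; in real use it would wrap an actual LLM API.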

    • pain_is_life_is_pain@beehaw.org · 4 points · 1 year ago

      I’m both really excited and worried about the prospect of AI taking over so many jobs that many people will be without work. I wonder how society will deal with that: will everyone get a proper base “salary” just for existing, or will there be huge refugee-like camps for the poor jobless people?

      • Roland@beehaw.org · 9 points · 1 year ago

        Under capitalism, I fear automation will mean people who lose their jobs are left with only worse, often dangerous, options for work that machines could do, while entertainment and the like get flooded with even shittier-quality AI-made crap. I can only pray it will mean everyone’s basic needs being covered, but that requires a huge shift.

        • interolivary@beehaw.org · 7 points · 1 year ago

          Yeah, I just don’t see that happening. The whole “western” world is taking a hard turn to the right, and that’s not going to get better any time soon.

      • BubblyMango@lemmy.wtf · 1 point · 1 year ago

        The problem is that the ones who will benefit from AI taking over are the big companies that create such AIs: Google, Meta, Apple. They will grow exponentially by having AIs work for them 24/7. So it’s not like humanity as a whole will grow; it will just be these companies, and they will slowly become the rulers of humanity.