Feel like we’ve got a lot of tech-savvy people here, so it seems like a good place to ask. Basically, as a dumb guy who reads the news, it seems like everyone who lost their mind (and savings) on crypto just pivoted to AI. On top of that you’ve got all these people invested in AI companies running around with flashlights under their chins like “bro this is so scary how good we made this thing”. Seems like bullshit.
I’ve seen people generating bits of code with it, which seems useful, but idk man. Coming from CNC, I don’t think I’d just send it with some ChatGPT code. Is it all hype? Is there something actually useful under there?
First of all, AI is a buzzword whose meaning has changed a lot since at least the 1950s. So… what do you actually mean? If you mean an LLM like ChatGPT, it’s not AGI, that’s for sure. It is another tool that can be very useful. For coding, it’s great at pre-populating very large blocks of code for you to polish and verify they do what you want. For writing, it’s useful for creating a quick first draft. For fictional game scenes, it’s useful for fleshing out a character quickly, but again you’ll likely want to edit the output some, even for, say, a D&D game.
I think it can replace most first-line chat-based customer service people, especially the ones who already just make stuff up to have something to say to you (we’ve all been there). I could imagine it improving call routing if hooked into speech recognition and generation - the current menus act like you can “say anything” but really only “work” if you’re calling about stuff you could also handle with a simple press-1-2-3 menu. A ChatGPT-based system trained on the company’s procedures and data could probably also replace that first-line call queue, because it can seem to do something more useful with a wider range of issues (rough sketch below). Although companies would still need to get their heads out of their asses somewhat too.
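To make the routing idea concrete, here’s a minimal sketch of what I mean. Everything here is hypothetical: `call_llm` is a stand-in for whatever model API you’d actually use, and the department list and prompt wording are made up for illustration, not a working product.

```python
# Hypothetical LLM-based call routing. `call_llm` is a placeholder for a
# real model API; departments and prompt wording are illustrative only.

DEPARTMENTS = ["billing", "tech_support", "returns", "sales"]

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to your LLM of choice, return its text."""
    raise NotImplementedError("wire up a real model here")

def route_call(transcript: str) -> str:
    """Map a speech-to-text transcript of the caller's opening request
    to a department, or hand off to a human if the model is unsure."""
    prompt = (
        "You route customer calls. Reply with exactly one word from this "
        f"list: {', '.join(DEPARTMENTS)}. If none clearly fits, reply "
        f"'human'.\n\nCaller said: {transcript!r}"
    )
    answer = call_llm(prompt).strip().lower()
    # Safe fallback: anything the model can't confidently bucket goes to
    # a person instead of a dead-end menu.
    return answer if answer in DEPARTMENTS else "human"
```

The key bit is the fallback at the end - the win over press-1-2-3 menus only happens if “the model isn’t sure” routes to a human rather than a loop.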
Where I’ve found it falls down currently is very specific technical questions - the kind you might have asked on a forum and maybe gotten an answer to. I hope it improves, especially as companies start to add their own training data. I could imagine Microsoft usefully replacing the first few tiers of tech support for their products, with the AI escalating to a ticket if it can’t solve the issue. I could imagine that within the next 10 years most tech companies will have purchased AI support bots as a service from some AI company, the same way they currently pay for ticket systems and web hosting. And I think in general it will probably be better for users: for less than the cost of the cheapest outsourced front-line support person (who has near-zero knowledge), you can have the AI provide pretty good chat-based access to a knowledge base that’s growing all the time, and every customer gets that AI with that knowledge base rather than the crapshoot of whether you get the person who’s been there 3 years or 1 day.
I think we are a long way from having AI just write the program, or the CNC code, or even important blog posts. Hallucination has to be fixed without breaking the usefulness of the model (people claim the guardrails on GPT-4 make it stupider), and the thing needs to recursively look at its own output - run it through a “look for bugs” prompt followed by a “fix it” prompt at the very least (roughly the loop sketched below). Right now, it can write code with noticeable bugs; you can tell it to check for bugs and it’ll find them, and then you can ask it to fix those bugs and it’ll at least try. That kind of review needs to be built in and automatic for any serious process - humans check their work, so we need to make the AI check its work too. Then we might also want to integrate multiple different models so “different eyes” see the code and sign off before it’s pushed. And even then, I think we’d need additional hooks, improvements, and test/simulation passes before we “don’t need human domain experts to deploy”. The thing is, it might be something we can solve in a few years with traditional integrations - or it might not be entirely possible with current LLM designs, given the weirdness around guardrails. We just don’t know.
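Here’s roughly what “built in and automatic” looks like, as a minimal sketch under the same assumptions as before: `call_llm` is a placeholder for a real model API, and the prompts and retry budget are made up, not tuned.

```python
# Hypothetical generate -> review -> fix loop. `call_llm` is the same
# kind of placeholder as above; prompts and pass count are illustrative.

MAX_REVIEW_PASSES = 3  # arbitrary budget; a real system would tune this

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to your LLM of choice, return its text."""
    raise NotImplementedError("wire up a real model here")

def write_code_with_self_review(task: str) -> str:
    """Draft code, then automatically run the 'look for bugs' and
    'fix it' prompts instead of relying on a human to ask by hand."""
    code = call_llm(f"Write code for this task:\n{task}")
    for _ in range(MAX_REVIEW_PASSES):
        review = call_llm(f"Look for bugs in this code:\n{code}")
        if "no bugs" in review.lower():  # naive stop condition
            break
        code = call_llm(
            f"Fix these bugs:\n{review}\n\nIn this code:\n{code}"
        )
    return code  # still needs a human domain expert to verify
```

The “different eyes” idea would just mean routing the review prompt to a second, different model before anything gets signed off.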
AI hasn’t really changed meaning since the ’50s. It has always been the field of research into how to make computers perform tasks that were previously limited to humans. The target is always moving because once AI researchers figure out how to solve a task with computers, it’s no longer limited to humans. It gets reduced to “just computation”.
There’s even a Wikipedia page describing this phenomenon: https://en.wikipedia.org/wiki/AI_effect
AGI is the ultimate goal of AI research. That’s the point at which there are no tasks left that only humans can do.
I mean, you’re pointing out the same thing I am - that over time “AI” has referred to very different technologies and capabilities.