It seems most people are on board with the idea that AI will change the world. While I agree it will have some impact, I also think that impact is overinflated by marketing. Operating an AI takes huge computing power, which costs heaps of money and energy. So how are people suggesting that exponential improvement is feasible? I do not get it.
Further, aren’t we supposed to reduce energy usage? Why are we trying to overspend what little is left? I hate how this is taking priority over the environment.
Creating this post mainly to rant. I thought OpenAI firing Sam Altman was a signal for a reality check. It seems they are walking it back and trying to rehire him, though… What a drama.
Some of it is overinflated marketing, but for organizations trying to cut costs it could have a significant effect on a lot of their employees.
AI doesn’t need to be good. It just needs to be cheaper and good enough.
So most people are assuming AI will do all the work of a job. Maybe it will someday, but in my experience today it appears able to do 80% of the work with only 20% human effort put in. So no, it’s not doing 100% of the work, it’s doing 80%, but it does that 80% in seconds for what used to take me hours or days.
That is a huge improvement over no AI use at all.
Improvement for whom?
it appears able to do 80% of the work with only 20% human effort put in. So no, it’s not doing 100% of the work, it’s doing 80%,
I think that’s the calculation most organizations will make. If AI can do 80% of a job, they can fire 80-90% of their employees in that task, and use the remainder as AI wranglers.
That’s a pretty significant workforce reduction, and it means the folks who remain employed spend less of their time doing what they trained for, and more time in an IT/management role.
Yea, I mostly mean the AGI nonsense. There are jobs where AI is helpful - tho imo it’s worth pointing out that not all of the gains come purely from the AI itself.
It’s unsustainable right now because hardware and software are not aligned (yet). Software is currently outpacing hardware, but there are loads of companies working on specialist chips that will address the computing power problem and the energy consumption problem by the sheer factor of optimization benefits.
Plus, software optimizations are also well under way, and models are constantly being fine-tuned to run better and train better with less.
I doubt how good the results could be. I agree that a 10-100x improvement is feasible by optimizing the hardware. But hardware in general needs to improve, and the speed of light is an impenetrable barrier standing in the way.
And more complete AI systems would require hundreds of thousands of times the computing power. Really, this has the same issue as Bitcoin.
I think the specialized hardware for this task will be better than you expect. It’s like using a sledgehammer to carve something. Pretty soon the computer will be handed a chisel, and it will be able to do its job much more easily.
I doubt it, since GPUs were already not a bad tool for this job. The generality of GPGPU helped a lot here.
I’d argue this isn’t unpopular to anyone who knows that “AI” is just pattern matching, marketed to people who don’t understand tech.
People should actively be skeptical.
Classic Gartner hype cycle:
We’re in the Peak of Inflated Expectations phase.
https://upload.wikimedia.org/wikipedia/commons/thumb/9/94/Gartner_Hype_Cycle.svg/1200px-Gartner_Hype_Cycle.svg.png
Meanwhile…
https://www.theregister.com/2023/10/11/github_ai_copilot_microsoft/
[…] while Microsoft charges $10 a month for the service, the software giant is losing $20 a month per user on average and heavier users are costing the company as much as $80 […]
Mmm hmmm.
This could be one form of “course correction”; few people are going to care to participate if they’re forced to pay what it actually costs.
I suspect this is all part of the long term plan; provide the service at a reduced fee so people gain reliance on the tech, then increase the cost over time. We see this happen everywhere.
The “current gen AI” is the key here. How sustainable it is depends on how quickly it can grow and improve. Technology is growing much faster than in the past. I remember getting a dictation program in 1998. I had to spend 2 hours talking to it so it could learn my voice. Even after all that, it still only had about a 25% success rate in properly transcribing my speech. In 2015 I bought my first smartwatch. The first voice transcription I made from it was 100% correct, with absolutely no learning of my voice at all.
I believe that the LLM will quickly give way to a different type of AI. There may be several different approaches to AI before something really takes hold and changes the game.
Operating an AI takes huge computing power.
For now. There are already plans to accelerate some specific machine learning workloads on next generations of low-power mobile chips. Think ChatGPT on a smartphone.
For other use-cases, you don’t even need to wait. Google Coral can do object recognition on your security camera feed using a minuscule amount of power compared to a GPU.
This is definitely true, but keep in mind that there is a limit to how far you can optimize a chip. Eventually we could have everything running on ASICs, but electronics do have a maximum speed that we may not be far from reaching.