I think AI can take far fewer jobs than people will try to replace with it; that’s kind of the issue
High-skilled jobs will just start using AI as a tool to automate routine work (or have already started, in some cases). The most efficient use of the AIs we have now is to pair them with a human anyway
The worry is about how much damage is likely to be done by people in decision-making positions who think they can save money by cutting more paid positions.
Companies will save so much money once they decide to replace their CEOs with AIs…
Tbf most could do it for cheaper with a dartboard and some post-its
I never understood this? How could the CEO be replaced? Who would be controlling the AI? Wouldn’t that person just be the new CEO? I have so many questions…
The shareholders would do a ‘Twitch Plays’ on the AI
Look, I already got the algorithm written right here!
The problem with humans reviewing AI output is that humans are pretty shit at QA. Our brains are literally built to ignore small mistakes. Digging through the output of an AI that’s right 95% of the time is nightmare fuel for human brains. If your task needs more accuracy, it’s probably better to just have the human do it all, rather than try to review it.
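A back-of-the-envelope sketch of why the review step doesn’t buy as much as it sounds like it should, with made-up numbers apart from the 95% figure above (the 60% catch rate is purely an assumption for illustration):

```typescript
// Assumptions for illustration only: 10,000 items processed, the "right 95%
// of the time" figure from above, and a reviewer who catches 60% of the
// machine's mistakes.
const items = 10_000;
const aiErrorRate = 0.05;     // AI is right 95% of the time
const humanCatchRate = 0.6;   // assumed, not a measured figure

const aiErrors = items * aiErrorRate;            // 500 mistakes
const escaped = aiErrors * (1 - humanCatchRate); // 200 still slip through

console.log(`${aiErrors} AI mistakes, ${escaped} escape human review`);
```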
Then each QA human will be paired with a second AI that will catch those mistakes the human ignores. And another human will be hired to watch that AI and that human will get an AI assistant to catch their mistakes.
Eventually they’ll need a rule that you can only communicate with the human/AI directly above you or below you in the chain to avoid meetings with entire countries of people.
Should note that a lot of the Microsoft Recall project revolves around continuously capturing human interactions on the computer in real time, with the hope of training a GPT-5 model that can do basic office tasks automagically.
Will it work? To some degree, maybe. It’ll definitely spit out some convincing-looking gibberish.
But the promise is to increasingly automate away office and professional labor.
“Take this code and give me Jest tests with 100% coverage. Don’t describe, don’t scaffold, full output.”
Saves me hours.
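For anyone curious, a minimal sketch of roughly the shape of output that prompt gives back, using a hypothetical sum() helper inlined so the example runs on its own under Jest:

```typescript
// sum.test.ts: the kind of test file such a prompt produces.
// sum() is a hypothetical helper, defined inline so the example is self-contained.
function sum(a: number, b: number): number {
  return a + b;
}

describe('sum', () => {
  it('adds two positive numbers', () => {
    expect(sum(2, 3)).toBe(5);
  });

  it('handles negative numbers', () => {
    expect(sum(-2, -3)).toBe(-5);
  });

  it('treats zero as the identity', () => {
    expect(sum(7, 0)).toBe(7);
  });
});
```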
Oh, don’t worry, the errors you see will go away quickly, assuming they aren’t a feature.
Basically it is going the following way:
AI isn’t magic, no matter how much techbros try to humanize the technology because NeuRAl nEtWOrKs.