It will, and it is already helping humanity in different fields.
We need to separate PR speech from reality. AI is already being used in pharmaceuticals, aviation, tracking (of the air, the ground, the rain…), production… and there is no way to argue these are not helping humanity in their own way.
AI will not solve the listed issues on its own. AI as a concept is a tool that will help, but it will always come down to how well it's used and with what other tools.
Also, saying AI will ruin humanity's existence or bring "disempowerment" of the species is a completely unfounded view that has no way of happening, simply because doing so is not profitable.
The economic incentives to churn out the next powerful beast as quickly as possible are obvious.
Making it safe costs extra, so that’s gonna be a neglected concern for the same reason.
We also notice that the resulting AIs are studied after they are released, sometimes with surprising emergent capabilities.
So you would be right if we approached the topic with a rational, top-down view, but we don't.