There’s a vocal group of people who seem to think that LLMs can achieve consciousness, despite that being impossible given how LLMs fundamentally work. They have largely been duped by advanced LLMs’ ability to sound convincing (as well as by a certain conman executive officer). These people often also seem to believe that by dedicating more and more resources to running these models, they will achieve actual general intelligence, and that an AGI can save the world, relieving them of the responsibility to attempt to fix anything.
That’s my point. AGI isn’t going to save us, and LLMs (by themselves), regardless of how much energy is pumped into them, will never achieve actual intelligence.
An ICE vehicle left in a garage with petrol in it will develop significant issues after a time. The fuel will oxidize and turn to varnish, ruining the fuel pump and valves. Repair can be quite expensive, depending on how thoroughly gummed up things get.