Longtermism poses a real threat to humanity
https://www.newstatesman.com/ideas/2023/08/longtermism-threat-humanity
“AI researchers such as Timnit Gebru affirm that longtermism is everywhere in Silicon Valley. The current race to create advanced AI by companies like OpenAI and DeepMind is driven in part by the longtermist ideology. Longtermists believe that if we create a “friendly” AI, it will solve all our problems and usher in a utopia, but if the AI is “misaligned”, it will destroy humanity…”
The “rush” to produce AI is a problem with capitalism, not longtermism. There is a rush to create the first generative AI not because it will benefit society but because it will make buttloads of money.
This. I came here to say exactly this.
I’m inclined to agree. Even without having read MacAskill’s book (though I’m interested to read it now), the way we’re approaching AI right now does seem more like a short-term gold rush, wrapped up in friendly-sounding “this will lead to utopia in the future” to justify “we’re going to make a lot of money right now”.
And here I thought short-termism was bad, where companies focus only on quarterly financial returns.
Longtermism is a cardboard halo. A thin excuse to act in complete self-interest while pretending it is good for humanity.
The further into the future we try to think, the more different factors and uncertainty dominate. This leaves you room to put in any argument you feel like, to make any prediction you feel like. So you pick something vaguely romantic or appealing to some relatively popular opinion, and hey you’re golden.
I am approached by a beggar. What do I - the longtermist - do?
I feel like being kind today. My longtermist argument is that every bit of happiness and relief today carries compound interest into the future, and by giving this person some money today, they are content and don’t have to resort to thievery, which in turn lets another person have a safe day and the mental energy to do a lot of good tomorrow. The goodness becomes bigger every step, over time. I give them $100. It’s pretty obvious, really.
They smell and I don’t want to deal with that right now. My longtermist argument is that helping out beggars actually just perpetuates a problem in society. If people can’t function in society without random help, it’s just a ticking bomb of a humanitarian disaster. Giving them money just postpones the time until the crisis is too big to ignore, and allows it to grow further. No, this is a problem that society needs to handle right now, and by giving money to this person I’m just helping the problem stay hidden. I ignore them and walk on by. It’s pretty obvious, really.
My wife left me and I want other people to hurt like I do. My longtermist argument is that unfortunately, these people are rejects of society and I can’t fix that. But we can prevent them from harassing productive citizens who work hard to create a better future. If having fewer beggars make commuters sad gives a 1% improvement in productivity, that’s a huge compound improvement in a few hundred years. So I kick him in the leg, yell at him, and call the police on him and say he tried to assault me. It’s a bit cold-hearted, but it’s obviously good long term.
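The satirical arguments above both lean on compound growth. For what it’s worth, the arithmetic does check out on its own terms (the 1%-per-year figure is the commenter’s hypothetical, and the function name here is mine):

```python
def compound(rate: float, years: int) -> float:
    """Total growth factor after compounding `rate` once per year for `years` years."""
    return (1 + rate) ** years

# A hypothetical 1% annual productivity gain, compounded over 300 years,
# multiplies output roughly twentyfold:
print(round(compound(0.01, 300), 1))  # ~19.8
```

Which is exactly the rhetorical trick being skewered: almost any tiny present-day effect, positive or negative, can be compounded into an enormous future number, so the same math “justifies” all three choices.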
That’s not much different from any other belief system, as far as I can tell. Shitty people will justify their actions no matter what. Look at all the atrocities that have been performed in the name of Christianity, Islam, or even plain nationalism. It does not constitute a threat to humanity, at least IMO.
Completely agree.
It’s just a popular quasi-religion for rich people to keep doing what they do while coming off as megabrain angels.
Why? The author explains what longtermism is, at least their perspective of it, but then makes the jump to ‘in conclusion, it’s bad for humanity’ without ever quite touching on WHY it’s bad (having a few key followers with questionable ethics is insufficient; you can say that about ANY major belief).
Probably because it ignores issues that are relevant right now in favor of some theoretical distant future which will probably never pan out.
How so? To plan for the future requires that you survive the present. I doubt anybody is saying ‘screw global warming, I’ll be fine in a cpu’.
I doubt anybody is saying ‘screw global warming, I’ll be fine in a cpu’.
You’d be surprised what the tech billionaires are saying right now. They are definitely not tackling the problems of today, but are creating new ones by the minute.
A major problem with longtermism is that it presumes to speak for future people who are entirely theoretical, whose needs are entirely impossible to accurately predict. It also deprioritizes immediate problems.
So Elon Musk is associated with longtermism (self-proclaimed). He might consider that interplanetary travel is in the best interest of mankind’s future (reasonable). As a longtermist he would then feel a moral responsibility to advance interplanetary travel technology. So far, so good.
But the catch is that he might feel that the moral responsibility to advance space travel by funding his rocket company is far more important than his moral responsibility to safeguard the well-being of his employees by not overworking them.
I mean, after all, yeah, it might ruin the personal lives of a hundred, two hundred, even a thousand people, but what’s that compared to the benefit advancing this technology will bring to all mankind? There are going to be billions of people benefiting from this in the future!
But that’s not really true. Because we can’t be certain that those billions of people will even exist let alone benefit. But the people suffering at his rocket company absolutely do exist and their suffering is not theoretical.
The greatest criticism of this line of thought is that it gives people, or at the moment, billionaires permission to do whatever the fuck they want.
Sure flying on a private jet is ruinous to the environment but I need to do it so I can manage my company which will create an AI that will make everything better…
That’s a fair criticism. But how is that a threat to humanity?
Because it gives powerful people permission to do whatever they want, everyone else be damned.
Both of the major longtermist philosophers casually dismiss climate change in their books, for example (I have Toby Ord’s book, which is apparently basically the same as William MacAskill’s book but earlier and better, supposedly). As if it’s something that can just be solved by technology in the near future. But what if it isn’t?
What if we don’t come up with fusion power or something, and solving climate change turns out to require actual sacrifices that had to be made 50 years before we figure out fusion isn’t going to work? What if the biosphere actually collapses and we can’t stop it? That’s a solid threat to humanity.
No, it gives them a justification to do so. But is that actually any different from any other belief system? Powerful assholes have always justified their actions using whatever was convenient, be it religion or otherwise. What makes longtermism worse, to the extent it’s a threat to humanity when everything else isn’t?
Along the lines of @AnonStoleMyPants – the trouble with longtermism and effective altruism generally is that, unlike more established religion, it’s become en vogue specifically amongst the billionaire class, specifically because it’s essentially just a permission structure for them to hoard stacks of cash and prioritize the hypothetical needs of their preferred utopian vision of the future over the actual needs of the present. Religions tend to have a mechanism (tithing, zakat, mitzvah, dana, etc.) for redistributing wealth from the well-off members of the faith towards the needy in an immediate way. Said mechanism may often be suborned by the religious elite or unenforced by some sects, but at least it’s there.
Unlike those religions, effective altruism specifically encourages wealthy people to keep their wealth to themselves, so that they can use their billionaire galaxy brains to more effectively direct that capital towards long-term good. If, as they see it, Mars colonies tomorrow will help more people than healthcare or UBI or solar farms will today, then they have not just a desire, but a moral obligation to spend their money designing Mars rockets instead of paying more taxes or building green infrastructure. And if having a longtermist in charge of said Mars colony will more effectively safeguard the future of those colonists, then by golly, they have a moral obligation to become the autocratic monarch of Mars! All the dirty poors desperate for help today aren’t worth the resources relative to the net good possible by securing that utopian future they imagine.
effective altruism specifically encourages wealthy people to keep their wealth to themselves, so that they can use their billionaire galaxy brains to more effectively direct that capital towards long-term good.
And how is that a bad thing? The alternative is to spend money on stuff that doesn’t work or even is actively harmful. The argument here is literally use less brain, do more pointless feelgood measures.
If, as they see it, Mars colonies tomorrow will help more people than healthcare or UBI or solar farms will today
Are we forgetting that Musk has an electric car company and used to have a solar company (since absorbed into Tesla)? He doesn’t just want to go to Mars, he does a lot of other stuff as well. Also, why should billionaires be responsible for UBI and healthcare? If Musk spent all his money on healthcare, you’d have healthcare for about three months before he went bankrupt. That kind of stuff is the government’s job.
Don’t think so personally. The only reason might be that tech billionaires probably think it is more “their thing” than religion or whatever. Hence, quite bad.
The greatest criticism of this line of thought is that it gives people, or at the moment, billionaires permission to do whatever the fuck they want.
Rich people have been doing whatever the fuck they want for thousands of years. Musk at least tries to build a cool big spaceship while doing so. I don’t really see the problem with that.
How is giving rich people a reason to do good and having some long term vision a bad thing? We didn’t get climate change because people were looking too far ahead, we got it because it was cheap energy and people made a lot of money with it.
This whole article reeks of short term thinking. Do whatever feels good in the moment, don’t care about the consequences.
You need to listen to Tech Won’t Save Us. Paris Marx interviews several guests who describe in detail the issues with longtermism.
The comparison to “ANY major belief” is wildly flawed and I see you keep doing that in every single response in this thread.
Ok, why IS the comparison wildly flawed?
Been reading too much Asimov, I see.
They fr read Foundation and missed the fact that Hari Seldon’s predictions fell apart very early on, and he had the benefit of magical foresight.
These people read the CliffsNotes and take everything at face value. Reasoning about the material doesn’t get you clicks.
Timnit is just roleplaying science fiction. LLMs are so far away from that it’s not even conceivable right now. We haven’t even figured out accuracy checking in LLMs.
This article makes me think of two great, classic anime series: Ghost in the Shell and Serial Experiments Lain.
I’ve never heard of this. How is it linked to transhumanism? Is it a re-branding? A fork? An attempt to propose a moral stance for transhumanism? Or are they two rival theories for thinking about the future?
(I’m not a transhumanist)