Lots of discussion on the orange site post about this today.
(I mentioned this in the other sneerclub thread on the topic but reposted it here since this seems to be the more active discussion zone.)
I should probably mention that this person went on to write other comments in the same thread, revealing that they’re still heavily influenced by Bay Area rationalism (or what one other commenter brilliantly called “ritual multiplication”).
The story has now hopped to the orange site. I was expecting a shit-show, but there have been a few insightful comments from critics of the rationalists. This one from “rachofsunshine” for instance:
[Former member of that world, roommates with one of Ziz’s friends for a while, so I feel reasonably qualified to speak on this.]
The problem with rationalists/EA as a group has never been the rationality, but the people practicing it and the cultural norms they endorse as a community.
As relevant here:
While following logical threads to their conclusions is a useful exercise, each logical step often involves some degree of rounding or unknown-unknowns. A -> B and B -> C means A -> C in a formal sense, but A -almostcertainly-> B and B -almostcertainly-> C does not mean A -almostcertainly-> C. Rationalists, by tending to overly formalist approaches, tend to lose the thread of the messiness of the real world and follow these lossy implications as though they are lossless. That leads to…
Precision errors in utility calculations that are numerically-unstable. Any small chance of harm times infinity equals infinity. This framing shows up a lot in the context of AI risk, but it works in other settings too: infinity times a speck of dust in your eye >>> 1 times murder, so murder is “justified” to prevent a speck of dust in the eye of eternity. When the thing you’re trying to create is infinitely good or the thing you’re trying to prevent is infinitely bad, anything is justified to bring it about/prevent it respectively.
Its leadership - or some of it, anyway - is extremely egotistical and borderline cult-like to begin with. I think even people who like e.g. Eliezer would agree that he is not a humble man by any stretch of the imagination (the guy makes Neil deGrasse Tyson look like a monk). They have, in the past, responded to criticism with statements to the effect of “anyone who would criticize us for any reason is a bad person who is lying to cause us harm”. That kind of framing can’t help but get culty.
The nature of being a “freethinker” is that you’re at the mercy of your own neural circuitry. If there is a feedback loop in your brain, you’ll get stuck in it, because there’s no external “drag” or forcing functions to pull you back to reality. That can lead you to be a genius who sees what others cannot. It can also lead you into schizophrenia really easily. So you’ve got a culty environment that is particularly susceptible to internally-consistent madness, and finally:
It’s a bunch of very weird people who have nowhere else they feel at home. I totally get this. I’d never felt like I was in a room with people so like me, and ripping myself away from that world was not easy. (There’s some folks down the thread wondering why trans people are overrepresented in this particular group: well, take your standard weird nerd, and then make two-thirds of the world hate your guts more than anything else, you might be pretty vulnerable to whoever will give you the time of day, too.)
TLDR: isolation, very strong in-group defenses, logical “doctrine” that is formally valid and leaks in hard-to-notice ways, apocalyptic utility-scale, and being a very appealing environment for the kind of person who goes super nuts -> pretty much perfect conditions for a cult. Or multiple cults, really. Ziz’s group is only one of several.
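To make the quoted point about lossy implications concrete, here is a minimal sketch (the per-step confidence, the probabilities, and the stakes are made-up numbers for illustration, not anyone's actual argument) showing how chaining "almost certainly" steps erodes overall confidence, and how an astronomically large stake swamps an expected-value calculation:

    # Illustration only: all numbers are assumed for the sake of example.

    # 1. Chained "almost certainly" implications: each step is 95% solid,
    #    but twenty of them in a row are far from certain.
    step_confidence = 0.95
    steps = 20
    overall = step_confidence ** steps
    print(f"Confidence after {steps} steps: {overall:.2f}")  # ~0.36

    # 2. Unbounded stakes swamp the calculation: a tiny probability times an
    #    enormous (dis)utility dominates any ordinary, certain concern.
    p_catastrophe = 1e-9      # assumed tiny probability
    utility_at_stake = 1e15   # assumed astronomically large stake
    ordinary_harm = 1.0       # one concrete, certain harm
    expected_value = p_catastrophe * utility_at_stake
    print(expected_value > ordinary_harm)  # True: the near-infinite term wins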
As someone who also went to university in the late 80s and early 90s, I didn’t share his experiences. This reads like one of those silly shaggy-dog stories where everyone says sarcastically afterwards: “yeah that happened”.
Damn. I thought I was cynical, but nowhere near as cynical as OpenAI is, apparently.
One thing to keep in mind about Ptacek is that he will die on the stupidest of hills. Back when Y Combinator president Garry Tan tweeted that members of the San Francisco board of supervisors should be killed, Ptacek defended him so doggedly that even the mouth-breathers on HN turned on him.
Same. I’m not being critical of lab-grown meat. I think it’s a great idea.
But the pattern of things he’s got an opinion on suggests a familiarity with rationalist/EA/accelerationist/TPOT ideas.
Do you have a link? I’m interested. (Also, I see you posted something similar a couple hours before I did. Sorry I missed that!)
So it turns out the healthcare assassin has some… boutique… views. (Yeah, I know, shocker.) Things he seems to be into:
How soon until someone finds his LessWrong profile?
As anyone who’s been paying attention already knows, LLMs are merely mimics that provide the “illusion of understanding”.
As a longtime listener to Tech Won’t Save Us, I was pleasantly surprised by my phone’s notification about this week’s episode. David was charming and interesting in equal measure. I mostly knew Jack Dorsey as the absentee CEO of Twitter who let the site stagnate under his watch, but there were a lot of little details about his moderation-phobia and fash-adjacency that I wasn’t aware of.
By the way, I highly recommend the podcast to the TechTakes crowd. They cover many of the same topics from a similar perspective.
For me it gives off huge Dr. Evil vibes.
If you ever get tired of searching for pics, you could always go the lazy route and fall back on AI-generated images. But then you’d have to accept the reality that in a few years your posts would have the analog of a geocities webring stamped on them.
Please touch grass.
The next AI winter can’t come too soon. They’re spinning up coal-fired power plants to supply the energy required to build these LLMs.
I’ve been using DigitalOcean for years as a personal VPS box, and I’ve had no complaints. Not sure how well they’d scale (in terms of cost) for a site like this.
Anthropic’s Claude confidently and incorrectly diagnoses brain cancer based on an MRI.
Strange man posts strange thing.
This linked interview of Brian Merchant by Adam Conover is great. I highly recommend watching the whole thing.
For example, here is Adam, describing the actual reasons why striking writers were concerned about AI, followed by Brian explaining how Sam Altman et al. hype up the existential risk they themselves claim to be creating, just so they can sell themselves as the solution. Lots of really edifying stuff in this interview.
The Pioneer Fund (now the Human Diversity Foundation) has been funding this bullshit for years, Yud.