Against truth
Also against rationalism, utilitarianism, AI, and Eliezer Yudkowsky in particular
I try not to get too meta in this space. Aside from my end-of-year posts, I don’t like to talk all that much about the general project of Numb at the Lodge, what it is I’m doing here and why; in general I think my work does fine just expressing itself without having a load of explanation slathered on top. Anyway, too much talking about the talking starts to feel like audience interaction, community-building, that sort of thing, which I’m deeply against for reasons I’ve gone into at tedious length before. I’m also against paying too much attention to the discourse, especially when it’s happening on Twitter, which is a Victorian freakshow you can’t enter without becoming one of the exhibits, owned by a man who thinks saying the word ‘meme’ is in itself funny. But sometimes, you have to make an exception.
My most recent essay here, ‘The law that can be named is not the true law,’ has now been read by more than 50,000 people. This is not all that much in the grand scheme of things—it’s still nowhere near my top ten, even on this platform—but it’s also obviously not bad. Unfortunately, some of these new readers seem to have been upset by some of the things I said. To be honest, I was worried about some blowback, but mostly because I reproduced (without actually saying) some words about Palestine Action that can, in my country, get you jailed for fourteen years. In the event, the only person who actually got upset about that stuff was poor slow Curtis Yarvin, who’s not been having a good year. But meanwhile the theme I introduced, of the difference between uttering and avowing a sequence of words, seems to have rubbed some people up the wrong way, because they’ve started making some very serious accusations. The last section of the essay described the trial of Laurentius Clung, a sixteenth-century theologian who thought God sends absolutely everyone to Hell, and who is the only person known to have hated absolutely everything. Apparently, I combined fiction and nonfiction without clearly signposting the transition. Which means that some of the things I said were lies.
This is what happened. A few days ago, my essay was posted on Twitter by someone called Nicholas Decker, who called it ‘the finest essay that I have read in years.’ Decker is probably best known for being visited by the Secret Service after writing an essay arguing that there is a threshold of repression beyond which organised political violence becomes necessary. (I don’t know exactly where I stand on this question, but having glanced at his other stuff I’m pretty sure I violently disagree with Decker on every major political issue. Still, I think his argument here is pretty clearly just a recapitulation of the founding ideology of the United States of America. Whether you think that matters at all is up to you.) Given his experience, I can understand why he’d be interested in some of the legal absurdities I talk about in the piece. But if you look at the replies and the quote tweets and other epitexts, you’ll see that a lot of the commenters didn’t agree.
According to these people, I’m basically a kind of conman, of the same order as the Macedonian teenagers who churn out fictional news stories about how celebrities are either revealing the crypto trading secret that will make you a millionaire overnight, or having sex with children and then drinking their blood. I am cynically misleading people. I have poisoned the well of truth. I am mindlessly spewing out fact-free slop for clicks and profit. (Inventing early modern theology is obviously the quickest and cheapest way to get attention online.) One guy has repeatedly compared my alleged misdeeds to murder. Others have been saying things like ‘Anything that can be destroyed by the truth deserves to be.’ A few of them have resorted to asking AI if what I did was bad, and when the AI agreed that it was, they indignantly posted screenshots of the conversation. I have been called ‘morally miscalibrated,’ ‘morally repulsive,’ ‘sadistic,’ ‘operating in bad faith,’ a ‘bottom-feeder,’ a ‘grifter,’ a ‘malicious actor,’ both a ‘data hazard’ and an ‘infohazard,’ a ‘polluter of the commons,’ a ‘well spoken liar’ who will ‘convince a crowd to poison themselves more quickly than a medical expert can stop them,’ and someone who ‘sneaks in made-up stuff because he was (presumably) unable to find a real example.’ To be fair, a few of those are from the same person, who also appears to play Magic: The Gathering for a living. But still.
Even though this is all very funny, I suppose I ought to set the record straight. Even if I don’t usually like to break the fourth wall, I’ll do so briefly here, just to confirm that absolutely everything I publish is true. Laurentius Clung is a 100% real historical personage. He is not a metaphor, or my hyperbolic self-insert, or a device I use to extend an argument by illustrating important truths in a non-literal way; he was an actual theologian who lived and died in the sixteenth century. Some sceptics have said they started getting suspicious when they couldn’t find any other information about him online, but one of the nice things about the world is that large chunks of it are still not available online. The crow uttering its sharp call outside my window right now has no digital footprint; it still exists. Of the one hundred billion people that were ever born, very few can be confirmed with a Google search or a question to ChatGPT, but they really did live, just like you’re living now. Not to get all boomer on you, but there are such things as books. I first encountered Clung in Roland Bainton’s 1952 history The Reformation of the Sixteenth Century, where he gets two paragraphs in the chapter on Calvinism. Bainton’s book was a bestseller in its day, and while it’s now out of print you can still buy it on Amazon if you want. He’s also discussed in the second appendix to the expanded 1970 edition of Norman Cohn’s The Pursuit of the Millennium, which is very much still in print and also great; if you haven’t already read it you should do so immediately. (By the way, did you know that Cohn’s son Nik inspired Bowie’s Ziggy Stardust and the Who’s Pinball Wizard? He also wrote the source material for Saturday Night Fever. This world is packed together more tightly than you think.) There’s substantially more on Clung in Blaire G Smellowicz’s Sodomites, Shepherds, and Fools: Minor Prophets of the Reformation, which is where I cribbed most of his more interesting quotes, and a very thorough but much less entertaining biography in Ander van der Gunk’s The Dutch in European Intellectual History, 1482-1648. (There’s also a complete scholarly edition of his pamphlets, letters, and diaries from Uitgeverij Verloren, but since it costs four hundred euros and I don’t read Dutch I haven’t been able to make use of it.)
While we’re at it, I may as well clear up any other lingering misunderstandings. If you were unsure, I can confirm that it’s also absolutely true that in the last season of Married at First Sight: Australia one of the contestants entered the Dreaming after slipping into the memories of his murderous ancestor, that I went to an ashram in the fictional Indian state of Parpakainilam after being arrested and jailed for the murder of the Maharashtra state politician Baba Siddique, that I have accurately quoted the Greek philosopher Scroto of Rhodes, that Santa Claus is a Kwakiutl cannibal-god, that I once went mad and started scrawling Aramaic incantations after discovering the Biblical name of Taylor Swift, that I encountered the Palaeolithic inhabitants of the Levant during a march for Gaza, that I found a wormhole between continents in Shanghai and the fox-spirits that control the Chinese housing market in Guangzhou, that the empire of Qin Shi Huang spread over three galaxies, that I had sex with half the Tory front benches after the 2023 Spectator summer party, that AI was preceded by the golem cacophony in early modern Europe, and that a reanimated BF Skinner is secretly operating the Dimes Square scene as his prototype for a society of total control.
I would never lie about any of this, and what makes this allegation particularly offensive is that if it were true, there would be no precedent for the crime, anywhere in English letters. Reputable essayists do not introduce fictional devices into their texts, and definitely not when those texts are published alongside serious and sincere approaches to history, politics, ethics, and personal experience and tragedy. When Charles Lamb attributed the invention of roast pork to a Chinese boy called Bo-bo who accidentally burned down his house, he was basing this on the best available scholarship of his time. Since Thomas de Quincey claims to have revealed the contents of a lecture to the Society of Connoisseurs in Murder, maybe we should start searching abandoned cellars for their secret meetings. If Marshall McLuhan briefly mentions the ‘new spaceships that are now designed to be edible’ for no obvious reason, it’s because he was aware of something that NASA still won’t reveal. A propositional statement is either literally true, clearly marked as fiction, or a waste of everyone’s time. The purpose of an essay is to efficiently deliver accurate propositions, and any other features are only justifiable if they help the propositions go down more easily. On this point everyone has always agreed.
Anyway, I think part of the problem here is that basically everyone lobbing these very serious accusations at me belongs to a subculture that calls itself rationalism. The mob even included Eliezer Yudkowsky, the founder and high priest of the sect. If you’re not aware of rationalism, in this context it has absolutely nothing to do with the rationalist philosophy of Descartes and Spinoza, in which all knowledge is deduced from eternal a priori truths; instead it’s actually a kind of empiricism, and it’s mostly about living in the Bay Area, writing things like ‘fark’ or ‘f@#k’ instead of ‘fuck,’ and having unappealing sex with your entire friend group. (The name comes from the fact that these people think the history of philosophy is just a series of wrong ideas that have since been replaced by better ones, and instead of reading any of it you should just skip ahead to simulation theory, in the same way that physics students skip past phlogiston.) To be fair, rationalists seem to make up a good chunk of Decker’s audience; almost everyone defending me was also a rationalist, including Scott Alexander, who’s sort of Yudkowsky’s St Paul the Apostle. But I don’t think this was just a selection effect. A lot of people read my stuff, but the only ones who freaked out about it to this degree were these guys. (Well, plus Yarvin, but he doesn’t really need a reason to say something stupid.) Rationalist ideology makes these freakouts inevitable: if Judaism begins with a taboo dividing the clean from the unclean, for rationalists it’s fact and fiction that must not be mixed. Which is why these people will either utterly despise my work, or be drawn to it with the same dark longing that draws a pious young nun to the Devil.
Rationalists have a notoriously hard time defining their ideology, but I can do it fine. Rationalism is the notion that the universe is a collection of true facts, but since the human brain is an instrument for detecting lions in the undergrowth, almost everyone is helplessly confused about the world, and if you want to believe as many true things and disbelieve as many false things as possible—and of course you do—you must use various special techniques to discipline your brain into functioning more like a computer. (In practice, these techniques mostly consist of calling your prejudices ‘Bayesian priors,’ but that’s not important right now.) I like some rationalists, and I’ve even written an entirely truthful piece for one of their publications, but my own perspective is different. I think the universe is not a collection of true facts; I think a good forty to fifty percent of it consists of lies, myths, ambiguities, ghosts, and chasms of meaning that are not ours to plumb. I think an accurate description of the universe will necessarily be shot through with lies, because everything that exists also partakes of unreality. And probably the best piece of evidence for my view is rationalism itself. Because in their attempts to clearly separate truth from error, they’ve ended up producing an ungodly colloid of the two that I could never even hope to imitate.
As everyone knows, the most important truth rationalists have uncovered with their superior powers of induction is that the robot uprising is coming. ChatGPT will shortly turn into a sphere of paperclips expanding through the galaxy at the speed of light. I think they’re wrong here, but it’s not impossible. What’s strange is what they’ve actually done with this belief. Rationalists don’t just think AI will kill us all; they’re significantly overrepresented among the people who are actually building AI. (Apparently there is no one working in an AI lab who doesn’t think their product might destroy the planet.) The CND is not enriching uranium; these guys are. The now-familiar AI chatbots—ChatGPT, Claude, MechaHitler, etc—were first proposed in a 2021 paper by twenty-two researchers at Anthropic titled ‘A General Language Assistant as a Laboratory for Alignment.’ The argument is, essentially, that since we’ll all die if we don’t work out how to imbue a hypothetical future AI with the right values, it would be a good idea to create an AI interface that would be used by lots of people now, while the technology is still in its infancy, to ‘red-team’ any potential issues and make sure any future AI is ‘helpful, honest, and harmless’—that is, unlikely to kill everyone and turn our bodies into paperclips.
In the years since, that alignment laboratory has indeed been used by a lot of people. You might have noticed some of the effects. I’ve talked about them enough times; in short: thanks to these generally helpful, honest, and harmless AI, everyone is now a helpless baby who can’t do anything and is incapable of love. One fun recent development is that the people who have been driven genuinely mad by exposure to the experiment now include Geoff Lewis, a prominent investor in OpenAI. Lewis has started recording bizarre speeches, clearly written by ChatGPT itself, about how he’s the primary target of a shadowy, murderous NGO that’s ‘inverting the signal’ to make other people think he’s gone insane.
These things have massively changed the world in many ways, none of them good. The AI-driven economic boom we’re occasionally promised never seems to materialise; everyone around you just steadily gets more and more stupid and insane. It costs billions to keep these chatbots running. Why keep going with it? Because the AI that actually exists is not the thing, it’s just a simulation built to anticipate an entirely different set of threats posed by a hypothetical superintelligent robot that lives in the future. But according to some of these people, I’m a bad actor for mixing reality with fiction.
What makes this especially galling is that their entire movement is based on a piece of Harry Potter fanfic.
This fanfic is by Eliezer Yudkowsky, and it’s titled Harry Potter and the Methods of Rationality. HPMOR is over 660,000 words long, which is nearly as long as the first five books in the actual Harry Potter series. It is more than twice as long as Ulysses or Gravity’s Rainbow, longer than Les Misérables or War and Peace, and only very slightly shorter than Musil’s The Man Without Qualities. I have read it. I do not recommend the experience. Reading HPMOR gave me a sense of crushing second-hand despair that I’ve only previously experienced when finding out things about Chris-Chan. It really is that bad.
The text belongs to a particular sub-genre called self-insert fanfic, in which you rewrite an existing work with yourself as the protagonist. Usually, I assume, this is so you can describe yourself having sex with all the other characters. Here, the wish-fulfilment fantasy is much seedier. The main character of HPMOR is called Harry Potter, but—and the author has been very open about this—in fact he’s a stand-in for Eliezer Yudkowsky. The original character’s personality consists of a vague, milky goodness and bravery. This Harry, meanwhile, is fantastically annoying, and also a sociopath. He is constantly pointing out logical fallacies and namedropping scientific concepts. When he first witnesses magic in action, this is what he says: ‘You turned into a cat! A small cat! You violated Conservation of Energy! That’s not just an arbitrary rule, it’s implied by the form of the quantum Hamiltonian! Rejecting it destroys unitarity and then you get FTL signalling!’ (People who know more about physics than me tell me that while all the scientific concepts in the text exist, they seem to have been peppered in essentially at random.) He also, for reasons that aren’t entirely clear, immediately starts using his knowledge of social psychology to blackmail and manipulate everyone he encounters, and belittles everyone he considers stupider than he is. I guess that’s just what incredibly smart people do. At one point, for absolutely no reason at all, he uses time travel to sadistically manipulate himself.
Throughout the story, whenever Harry encounters something that offends him, Yudkowsky describes him being overcome by a sudden cold fury, colder than Antarctica, colder than the depths of space, in which everything is seen with perfect icy clarity and every fibre in his body is primed to exercise his will. Every single time, he then proceeds to have what can only be described as a spluttering, spastic tantrum. In his first Potions lesson with Professor Snape (and I am not happy to be typing these words), Snape makes a few sarky comments, which prompts Harry to accuse him of being abusive, threaten to start a media campaign to have him fired, say things like ‘I decline to recognise your authority as a teacher and I will not serve any detention you give,’ physically threaten him, try to storm out through a locked door, and then hide in a cupboard. After a few of these displays, Harry quickly becomes the coolest kid at Hogwarts. All the other students, and the teachers too, are utterly awed by him. With his powers of knowing about social psychology and logical fallacies, he is something like a god. But everyone is also slightly scared of him. To be fair, some character development does take place: by the end, Harry has learned how to not be so frightening, and how to use his powers of effortless domination more strategically. There is a general failure of self-awareness here I have not seen outside Sonichu. It makes for genuinely harrowing reading.
Of course, a lot of amateur fiction does the same thing: you invent fictional people to fawn over you when real people fall short. What makes HPMOR unusual is that real fawners then followed. For a lot of rationalists, this book was their way into the subculture. I find it hard to believe that it was the ideas they found so enchanting, because there aren’t any, not really. At the beginning of the book, Yudkowsky lays out the stakes: using rationality, Bayesianism, and the scientific method, Harry is going to work out the fundamental principles underlying magic. This could have been a reasonably fun demonstration of how the intellect can uncover the secrets of reality, or whatever. But that would require some creativity, so after a few feints in that direction—there’s something about Atlantis, and the idea that ‘words and wand movements were just triggers, levers pulled on some hidden and more complex machine’—Yudkowsky abandons the entire thread for lots more intrigue, manipulation, tediously recurrent magical wargames, and a sort of History Boys-style erotic dalliance between Harry and Voldemort. The story is not a case study in how rationality will help you understand the world, it’s a case study in how rationality will give you power over other people. It might have been overtly signposted as fiction, with all the necessary content warnings in place. That doesn’t mean it’s not believed.
Despite being genuinely horrible, this story does have one important use: it makes sense out of the rationalist fixation on the danger of a superhuman AI. According to HPMOR, raw intelligence gives you direct power over other people; a recursively self-improving artificial general intelligence is just our name for the theoretical point where infinite intelligence transforms into infinite power. (In a sense, all forms of instrumental reason, since Francis Bacon in the sixteenth century, have been oriented around the AI singularity.) This is why rationalists think a sufficiently advanced computer will be able to persuade absolutely anyone to do anything it wants, extinguish humanity with a single command, or directly transform the physical universe through sheer processing power. As a corrective, consider the rationalists themselves. Despite their undeniably high IQ, knowing about social psychology and logical fallacies has so far failed to turn anyone in the movement, least of all Eliezer Yudkowsky, into an effortlessly manipulative Machiavellian mastermind. Instead, they mostly wonder why they have such a bad image problem.
I think the big nonexistent robot at the centre of the ideology explains a lot of other aspects of rationalism. The structural unreality that seeps into everything they believe. Or the fact that absolutely all of them are somehow utilitarians.
I’ve always found this strange. We’re supposedly dealing with a group of idiosyncratic weirdos, all of them trying to independently reconstruct the entirety of human knowledge from scratch. Their politics run all the way from the furthest fringes of the far right to the furthest fringes of the liberal centre. Most of them are atheists, but an appreciable portion are not. Meanwhile, formulating a coherent ethics is one of the most difficult problems that exists. You’d expect a lot of intellectual diversity here. But instead, they’re all utilitarians. Maybe utilitarianism is so obviously true that they all independently reached the same conclusion, but if that’s the case it’s strange that the consensus hasn’t spread to actual philosophers. (According to the 2020 PhilPapers survey, 21.4% of practicing academic philosophers are exclusive consequentialists of any stripe: slightly more than the 19.7% who are deontologists, but significantly outnumbered by the 25% who uphold virtue ethics.) More likely, they’re all utilitarians because they’re far more susceptible to groupthink, conformism, and cult dynamics than they think they are. But I think there’s another possibility: it’s because utilitarianism is a science-fiction morality for machines.
Everyone has their own favourite example of how utilitarianism can wildly contradict our moral intuitions. Mine is gladiatorial combat. Let’s say I kidnap you off the street, keep you captive in my basement, and then make you fight another random abductee to the death for my own sick amusement. This seems less than ideal, ethically speaking. Now let’s say I invite a few friends over, and we laugh and drink Aperol spritzes and other nice summery cocktails while you desperately try to claw someone’s eyes out. This is, if anything, worse. Some forms of pleasure are bad. (Some forms of pain are good!) But now let’s say I film the whole thing and broadcast it online, and hundreds of thousands of people watch as you’re throttled to death, all of them deliriously masturbating. I think this would be a genuine moral catastrophe, but at this point the utilitarian starts perking up. Maybe things aren’t so terrible. What’s the exchange rate? How many orgasms balance out a violent death? Finally, we get to the point where huge public screens across the world are showing the light fade from your eyes. Billions watch in shuddering, sadistic glee. According to any sensible ethical system, we’ve entered the abyss. Our entire civilisation deserves to be destroyed. For the utilitarian, we have just performed the single most moral act in human history. In fact, we have an urgent ethical duty to do it again.
Naturally, utilitarians have developed various patches for the theory to get around problems like these. Rule utilitarianism, indirect utilitarianism, negative utilitarianism. Or sometimes they’ll just flatly point out that moral intuitions can be wrong; if they were infallible, we wouldn’t need moral philosophy to begin with. This stuff doesn’t scare them. What does scare them is a very particular scenario called the Repugnant Conclusion, which invites us to imagine two futures. Future A is a planet of ten million people living in joyful balance with nature, dancing in the woods, discussing Flaubert after dinner. All diseases have been wiped out; people happily accept death as the price of a beautiful life. Future B is a human factory farm of one hundred trillion people, stacked in wire cages that cover the entire surface of a dead Earth. Absolutely everyone is utterly miserable, but thanks to pharmaceuticals in the mossy water that drips from the ceiling of your cage, you are not quite actively suicidal. You might prefer to live in Future A. But the repugnant conclusion is that Future B is morally preferable, because it contains more overall happiness, even if each individual person only gets a fraction of it.
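To put numbers on it (the figures here are mine, invented purely for illustration; the argument doesn’t depend on them): give each of Future A’s ten million people a hundred units of happiness, and each of Future B’s hundred trillion caged people a hundredth of a unit. Then

$$U_A = 10^7 \times 100 = 10^9, \qquad U_B = 10^{14} \times 0.01 = 10^{12},$$

and Future B contains a thousand times more total happiness. The total utilitarian has to take the cages.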
We’re here because, like most computational systems, utilitarianism has difficulty representing death. Since the dead don’t experience either pleasure or pain, on a straightforward reading of the theory painlessly murdering random people is potentially a morally neutral act. No one’s suffering, after all. This conflicts with our moral intuitions a little too much, so utilitarians decide that their real measure isn’t the pleasure experienced by actually existing people, but the total quantity of happiness in the universe. Having more people in the universe is better, because it makes this number go up, and since killing people limits the size of the number, in most cases you shouldn’t do it. Which is the point where the utilitarians leave the kingdom of ends, and set out on their journey towards the repugnant conclusion.
Maybe a utilitarian could object that we’re dealing with edge cases and hypothetical scenarios here, and it’s not fair to judge the whole philosophy on that basis. But utilitarianism is made of hypothetical examples; it’s all edge cases with no centre. Trolley problems, hive planets in the unimaginably far future, torturing one person to death to stop fifty squintillion others from getting that sensation where you think you’re about to sneeze but don’t actually sneeze. When it comes to the actual ethical quandaries faced by actual people in the actual world, utilitarianism either gives the wrong answer, or has nothing at all to say. Should you tell your wife about your night of regrettable drunken passion after the pencil-measuring conference in romantic Akron, Ohio, ‘the Paris on the Cuyahoga’? No, because it’ll upset her—but really you shouldn’t be concerned with any of this at all. Instead of being worried about your own marriage, which is a blip in the moral universe, you need to donate all your pencil-measuring money for mosquito nets to save African children (or, these days, some wild animal suffering project to save the mosquitoes). Your parents expect you to give your newborn son a bris, but you’re not sure you ought to. How do you measure your son’s bodily dignity against your duty to your parents, all previous generations, and the victims of the Holocaust? I don’t know if any ethical theory provides a clear answer, but for utilitarianism it’s a non-problem. Duty and dignity are not objects in the theory, and since the people who died in the Holocaust are no longer capable of experiencing pleasure or pain, they don’t count for anything either. And why are you so concerned about just the next generation? In the far future, your one trillion descendants are calling from an ice moon in the Triangulum galaxy, where all of them have a speck of dust in their eyes.
I think utilitarianism has this weird science-fictional aspect because it is ultimately not an ethics for actual human beings. From Bentham and Mill on, it’s always been a programme for the hypothetical hyperintelligent AI god that lives in the future. The moral subject it addresses always has two significant features that this notional computer possesses, but humans lack. The familiar little bundle of infinite knowledge and infinite power.
For utilitarians, the moral value of an action is determined by its effects: when we choose how to act we should choose the option that will lead to more positive consequences. In other words, we need to reach into the future and extract information that, in the present, does not exist. This is not something we can do. I have no way of knowing that the drowning child I pull out of a river isn’t Baby Hitler 2. Until the computer-god comes, ethical behaviour means distorting the real world according to a speculative fiction. Some utilitarians sincerely want to exterminate all fish, since they live lives of suffering, and all predatory megafauna, since they cause suffering to other animals, as if they could have any earthly idea what the repercussions of this would be. They are already imagining themselves as an all-knowing computer, the one that can determine which desperate struggling little life has value, and which does not.
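In decision-theoretic shorthand (my gloss, not any canonical utilitarian formula), the instruction is

$$a^* = \arg\max_{a} \; \mathbb{E}_{s \sim P(\text{future}\mid a)}\big[U(s)\big],$$

where $P(\text{future}\mid a)$ is precisely the distribution over consequences that no human being has access to. The maximisation is fine as mathematics; the inputs don’t exist.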
Infinite knowledge implies infinite power. You are always the person standing by the lever, and not one of the six people tied to the tracks. You are capable of creating various differently populated worlds. At a minimum, you are assumed to be in some kind of position of detached power relative to those around you, in which you can create certain outcomes for them. Sometimes this is true: a teacher in a classroom might use the utilitarian calculus and send out one disruptive child so the others can learn in peace. But for the most part, this is not how actual people live. We are not states determining policy, we are human beings stumbling through a dense thicket of ambiguous social relations, riven with love and duty, in which our capacity to act is limited. Utilitarianism is for something. But it’s not for us.
None of this should be mistaken for a critique of utilitarianism. I don’t hate this theory; in fact, I love it. In a certain sense, it’s plainly hideous: lifeless, brutal, reducing us all to preference maximisers, arrogant beyond belief, and utterly opposed to every principle of life and dignity. But it’s also beautiful. You take a simple idea—the greatest happiness for the greatest number of people—and keep running with it until the gap between the idea and an inevitably complex reality starts spawning monsters. I find it hard not to have a general contempt for the rule utilitarians and negative utilitarians and everyone else who tries to close the gap, make the idea stick closer to reality, at the cost of polluting its terrible simplicity. I’m very glad the world contains people like Matthew Adelstein, who will cheerfully endorse the repugnant conclusion and the torture-dust equilibrium along with every other insane artefact of this system, and tell you you’re wrong and anti-moral if you don’t agree. I don’t want these people to have power, and I would never want to believe in any of this stuff personally, but I think having a broad diversity of utterly insane ideas in common circulation is a good in and of itself.
The rationalists are wrong about many, many things, but it’s precisely in their wrongness that they express an important truth about the world: that large parts of it are made of something other than plain facts, and the more you insist on those facts the wronger you will be. I love them, in the same way I love the Flat Earthers and the people who think the entire Carolingian era was a hoax. They are, of course, highly influential in a few small but powerful milieux, and their madness is both an expression of and a motor for the general madness of the age. Unlike the ideas I spread about sixteenth-century heresies, some of their ideas are massively socially destructive. In their instrumental aspect, they are my enemies. But I still don’t want them to stop believing what they believe, or to start believing what I believe instead. I don’t even want them to stop accusing me of lying. I just want them to have a little perspective.
Look: I’ve managed to get through an entire essay on rationalism without mentioning Roko’s basilisk even once, and frankly I think I deserve a bit of credit for it. But did you know that some of these types have ended up independently reinventing the idea of Hell? Like the basilisk, quantum hell is based on the core rationalist doctrine that any exact copy of you, even if it’s simulated on a computer or in another universe, is you, and consciousness can happily skip between these copies. (They’d gladly step into Derek Parfit’s teletransporter, the one that scans your precise atomic makeup, beams the information to a nice beach resort somewhere, and then instantly incinerates you into a small pile of dust. If you hadn’t already noticed, they’re mad.) This implies quantum immortality: whenever you die, your consciousness switches over to an alternate universe in which you survived. This might have already happened millions of times, and it’ll keep happening literally forever. From your perspective, you keep improbably surviving your own death. (Maybe this is what happened to Vishwash Kumar Ramesh.) But because your survival becomes unlikelier and unlikelier with increasing years—as you age, your organs fail, the sun goes nova, the galaxies drift apart, etc—you will end up being shunted into some highly deformed and low-resolution universes. The final stage might be a tiny, stable pocket universe filled with a superheated quark-gluon plasma, which you would experience as eternal suffering in a lake of fire. This is the ultimate fate of every conscious entity after death. None will be saved.
Some rationalists have described the toll this possibility has taken on their mental health. The long sleepless nights, quaking in holy terror. The way all earthly pleasures seem meaningless when you know what might be coming. The whole idea is nonsense, obviously, but I don’t bring it up to mock it. I love this mad fiction too, and all the counterintuitive ideas that interlock like tiny cogwheels to produce it. It’s just extremely strange that some of these same people are so upset about Laurentius Clung. Brother, you are Laurentius Clung.
Edit (29/07/25): This essay claims that absolutely all rationalists are utilitarians. This is untrue; according to the most recent LessWrong survey, only 64% are consequentialists, with 22% preferring a non-consequentialist ethics.