44 Comments
Aug 23, 2022·edited Aug 23, 2022Liked by Erik Hoel

The fundamental error in utilitarianism, and in EA it seems from your description of it, is that it conflates suffering with evil. Suffering is not evil. Suffering is an inherent feature of life. Suffering is information. Without suffering we would all die very quickly, probably by forgetting to eat.

Causing suffering is evil, because it is cruelty.

Ignoring preventable suffering is evil because it is indifference.

But setting yourself up to run the world and dictate how everyone else should live because you believe that you have the calculus to mathematically minimize suffering is also evil because it is tyranny.

Holocausts are evil because they are cruel. Stubbed toes are not evil because they are information. (Put your shoes on!)

Aug 23, 2022Liked by Erik Hoel

The diluted form of utilitarianism that makes the most sense to me, and which does still feel compatible with the general EA ethos, is one in which you don’t feel constrained by the results of the utilitarian calculus, but you should actually make the effort to do the math before deciding.

For example, before choosing where to give money to charity, I think it's very much worthwhile to try and do some kind of calculation to compare them. This forces you to actually consider all the factors and identify your unknowns. But your decision should still be based, in the end, on what seems like the best choice overall, even if another option has a higher expected value on paper.

This isn’t a formal philosophical way of looking at things, but it seems like it avoids the failure modes where you don’t do the math and end up donating to local causes that don’t need as much help as international ones AND the failure modes where you would trade massive numbers of stubbed toes for the Holocaust.
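The "do the math before deciding" step described above can be sketched in a few lines. Everything here is invented purely for illustration: the charity names, costs, probabilities, and benefit units are hypothetical placeholders, not real estimates.

```python
# Toy expected-value comparison between two hypothetical charities.
# All figures are made up; real numbers would come from published
# cost-effectiveness analyses.

charities = {
    # name: (cost per intervention in $, estimated probability the
    #        intervention succeeds, benefit per success in arbitrary units)
    "local_food_bank": (50.0, 0.95, 10.0),
    "intl_bednets":    (5.0,  0.60, 100.0),
}

def expected_value_per_dollar(cost, p_success, benefit):
    """Expected benefit units bought by one donated dollar."""
    return p_success * benefit / cost

for name, (cost, p, benefit) in charities.items():
    print(f"{name}: {expected_value_per_dollar(cost, p, benefit):.2f} units/$")
```

The point of the exercise, as the comment says, is not that the larger number wins automatically; it is that writing the numbers down forces you to state your unknowns before you decide.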


Coincidentally, Scott A also reviewed W. MacAskill's book today. Having read both reviews one after the other, I have more than a headful of philosophising about morality. In fact that particular behaviour, more than any other, is deserving of the expression "pouring from the empty into the void", and perhaps that's my personal deep-rooted objection to EA, Utilitarianism, and to some extent the wider rationalist movement. It's a whole lot of head noise utterly divorced from human scale, interpersonal considerations, and any kind of emotional intelligence. It is unsurprising that one of its outputs is repugnant conclusions.

Oh, I very much enjoyed your review/follow-up. I preferred it to Scott's, which is about as much praise as I offer in an average week :)

Aug 23, 2022·edited Aug 23, 2022Liked by Erik Hoel

One of the biggest weaknesses, in my view, of Utilitarianism is the fact that given some assumptions about the "moral value" of a set of actions/events, anything is defensible. The fact that some of your interlocutors will insist on adding "epicycles" to reason away the various repugnant conclusions of a line of utilitarian thought betrays this underlying fact. Give me a vector of utils long enough, and I shall move the world.

That said, I think that there are useful tools which can come out of utilitarianism. In the finance world, the Black-Scholes model, while certainly "incorrect" in its underlying assumptions and predictions, is still an immensely useful model with which to understand the risks associated with optionality. Take the model to its extremes, however, and it quickly breaks down. I think of Utilitarianism in a similar vein. Its key assumptions are certainly wrong, but it is still a rather useful tool to have when examining the possible morality of different courses of action. You just need to understand that, as a simplistic model of a much messier reality, it cannot be taken as truth. The map is not the territory etc etc.
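The Black-Scholes analogy can be made concrete. The standard formula for a European call is short enough to sketch, and feeding it an absurd volatility shows the "breaks down at the extremes" point: the model then prices every call at essentially the full stock price regardless of strike, a technically consistent but uninformative answer.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call.

    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: annualized volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# A sane at-the-money call: a modest premium.
print(bs_call(100, 100, 1.0, 0.0, 0.2))

# An extreme input (5000% volatility): the "price" approaches the
# entire stock value, and the model stops telling you anything useful.
print(bs_call(100, 100, 1.0, 0.0, 50.0))
```

This mirrors the commenter's point about utilitarianism: a model can be useful in its normal operating range while producing degenerate answers when pushed to its limits.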

Aug 23, 2022Liked by Erik Hoel

Something that came up in a lab meeting here today: exactly what is *really* going on, as a psychological matter, in the Utilitarianism of EA? (You gesture at an interest in this question in your portrait of Lovecraftian Cape Cod.)

One thing in play is the role of explanatory values in moral reasoning -- meaning, what it is about moral explanations that makes one more satisfying than another. Virtue ethics (which you gesture at in your polar bear poem) is a very distinct mode from EA but it really fails to get traction with people these days, and it’s often dismissed as “aesthetics” (a Gen Z term of art I don’t quite grok).

Utility theory has the advantage of producing highly co-explanatory moral justifications. “X is moral for the same reason Y is moral”, even when X and Y are in very different domains, timescales, and contexts.

I don’t know what is behind the shift towards increasingly extreme versions of utility theory (MacAskill saying the RC is basically right is insane; Parfit is rolling in his grave!), but it might lie in the structural features of the underlying moral cognition.

I see EA as having a really profound attraction for young people. Mainstream media is well behind the curve on this -- I started getting EA-interested students five years ago. It’s quite impressive.


Thanks for both essays, good sir. I spent an afternoon remembering why I never was satisfied by utilitarian arguments in my philosophy classes. Your mention of California being a utopian land that breeds these ideas particularly resonated. There’s something particularly nefarious about the continued American obsession with Manifest Destiny and its more contemporary technocratic/consumer-cultural form, which one could easily argue has been the root cause of the very same problems wealthy EA folks out in California think they have a duty to fix. Colonialism takes on many forms, indeed.


Thanks for writing this, and your previous EA piece. I have long felt that many of the EA arguments, especially those comparing dust motes and murders, were obviously wrong. The unfortunate thing about the obvious is that it’s much harder to articulate a Why.


One of the more troubling and little-noted features of these grand moralizing theories is the unspoken subject addressed by the so-called moral obligations.

Throughout these writings, there is a persistent "we" appearing in many normative sentences of the form:

"We ought to..."

Fill out the ellipsis with your preferred act, event, or state of affairs.

Whenever I see this in a moral argument, I find myself opting for the Tonto response.

"Who is 'we', kemosabe?"

That's not intended as a flippant rejoinder. Whether it be traditional Utilitarianism, its updated cousins based in rules or alternative states besides pleasure/suffering, or EA itself (whatever that is after the endless gerrymandering), the "we" is persistent.

Who or what is this entity?

It's never made explicit, which I believe is by design.

Depending on your moral imperative, it can be you (the individual human being with agency), or it can be a nebulous collective agent, such as an ethnic group, a nation state, or one of those trans-national organizations that do much of the heavy lifting these days.

Nice as it is to speak for all and everyone from that View From Nowhere, it's not going to convince everyone as a *moral point of view*, and for reasons raised in this article.

There is a real and important difference between having a map of the national park and making the 3-day hike through it. But few of these abstract moral [sic] theories seem to notice or care about that omission.

If the humans hiking through the forest in real time enter into the equation at all, it's only in the form of proxy states like "pleasure" or "suffering". A real and healthy human person is not moved by a single source of moral reasons or considerations, not least because a real and healthy human person is more than a single dimension of psychological content.

MacIntyre never wrote about EA directly, but his attacks on the figures of the Manager and the Therapist are just as appropriate. EA is management, not morality.

Aug 24, 2022Liked by Erik Hoel

Galileo's Paradox and ignorance of set theory: why 'longtermism' is incompatible with 'Mound o' Dirt' Utilitarianism.

Infinite Sets are a source of interesting paradoxes. "What is larger," wondered Galileo Galilei in _Two New Sciences_, published in 1638, "the set of all positive numbers (1,2,3,4 ...) or the set of all positive squares (1,4,9,16 ...)?"

For some people the answer is obvious. The set of all squares is contained in the set of all numbers, therefore the set of all numbers must be larger. But others reason that because every number is the root of some square, the set of all numbers is equal to the set of all squares. Galileo concluded that the totality of all numbers is infinite, that the number of squares is infinite, and that the number of their roots is infinite; neither is the number of squares less than the totality of all the numbers, nor the latter greater than the former; and finally the attributes "equal," "greater," and "less" are not applicable to infinite, but only to finite, quantities.

See 'Galileo's Paradox' https://en.wikipedia.org/wiki/Galileo%27s_paradox .
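Galileo's pairing can be demonstrated directly. This small sketch checks that n ↦ n² maps each positive integer to a unique perfect square and back again, which is exactly the bijection argument: neither set "outnumbers" the other, even though the squares are a proper subset of the integers.

```python
from math import isqrt

def to_square(n):
    """Pair the positive integer n with its square."""
    return n * n

def from_square(s):
    """Recover the unique positive integer whose square is s."""
    root = isqrt(s)
    assert root * root == s, "not a perfect square"
    return root

# Every n maps to a unique square and back -- no integer is left over,
# and no perfect square is left unpaired.
for n in range(1, 1000):
    assert from_square(to_square(n)) == n
```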

Which brings us to the utilitarian's favourite hobby horse -- trolley problems. Each person tied to the tracks represents an infinite set of possibilities of things that that person could do, in the future, if the train doesn't run over them. Because I am a long termist, I care about depriving the future of all these possibilities.

The people who think that killing 5 people is 5 times as bad as killing only 1 are reasoning in the same way as the people who believe that the set of all numbers must be larger than the set of all squares. They are flipping the train switches because the results align with their ignorance of the properties of infinite sets. A rudimentary course in set theory will remedy this problem. (And, as an added bonus ... set theory is fun.)

'Mound o' Dirt' Utilitarianism is thus dead in the water. Infinite sets are not fungible, and neither is moral worth.

Aug 24, 2022Liked by Erik Hoel

I love your articles, Erik.

Completely agree with your disgust toward utilitarianism.

Since when did it become thinkable to harm anyone for the "greater good"?

As Steve Pinker points out, the greatest atrocities in history were claimed to be for the "greater good."

Aug 23, 2022Liked by Erik Hoel

I can’t thank you enough for writing these kinds of pieces. I’m not sure anyone else exists in this same kind of niche: simultaneously aware and up-to-date on the kind of technological futurism and Kurzweilian philosophy the likes of SA, Hanson, Eliezer, and others traffic in, while being uniquely skeptical as to their effects.

I fully believe that, even though consciousness will not be a “solved problem” for AI, some level of AGI will nonetheless exist that has incredible universal ability and inevitably, incredible capacity to deviate from its programmers’ intentions. Rather than accept that and become a booster for its creation, the idea fills me with a kind of dread that I can’t shake.

Thank you for intelligently vocalizing some of these concerns in novel and interesting ways. Also: The Revelations was fantastic.


An additional counter to the serial killer surgeon "That all should go into the equation" response: even though a utilitarian may agree that the fear over this hypothetical surgeon should be part of the equation, they are also actively working to persuade people NOT to fear it in the first place. I'm not sure it's really a defense of the idea to say "don't worry, if people still hate the idea we'll factor that in."

In fact... is that even internally consistent? We're talking about almost infinite advances in utils, people! Surely pushing through our short-term disgust with such things is well worth the upside in the long run. If anything, the cultural fear/pushback could be used as evidence that we must push even harder.

That, for me, is a big part of the problem: I have an inherent distrust of any idea that can use resistance and/or failure states as reasons to do even more, rather than reasons to do less (e.g.: "we're spending so much money on this social program and the results are poor, clearly that just means we need to spend even more").

Sep 3, 2022Liked by Erik Hoel

Congratulations! Not just me, but many others voted your review as the best https://astralcodexten.substack.com/p/your-book-review-the-dawn-of-everything?utm_source=substack&utm_medium=email and I am very glad to see you as the author and winner. Very well done! Outstanding text! Standing ovations, indeed.


I'm wondering what you think of GiveWell? I like GiveWell's approach to charitable giving, and at least until recently, I thought it was the most practical manifestation of effective altruism. It was disconcerting to see people talking about a "bednet phase" in the New Yorker review.

It seems like recommending charities based on some rough but rigorous investigation into cost-effectiveness should be possible?


Love the coda! I would agree that the environment we live in significantly shapes our worldview, so living in a place of perfect weather and abundant sunshine would give you a more optimistic outlook. Makes sense, then, why Californians are the way they are.


Thoughtful piece about this book and the criticism is very fair, even though the main point and thesis of the book that we should be acting with a longterm view is 100% correct. We don't need utilitarianism at all to come to this conclusion and belief. Utilitarianism like any single philosophy has good and bad aspects to it. You have keenly identified so much of the negative here. Utilitarianism is cold, inhuman, detached, and "rational" and does not account for the lived human experience. As you point out it leads to horrifying conclusions when it is unfolded step-by-step according to its own tenets.

Solving the metacrises of the 21st century is going to require an approach that is integral and accounts for the full lived human experience, internal and external, as well as all of the best science and philosophy applied together.

One of our greatest failures as a species would be to not learn from the modern era, in which pure science and pure reason were elevated as the only values. The modern era has given us so much valuable knowledge and many tools, but it must morally, ethically, and spiritually be integrated at the individual and societal level to be of use in advancing our species. We know enough now to know we can't rely on any single religion, discipline, or philosophy to solve our problems. To pretend otherwise would be naive.

Thanks for your excellent article.
