159 Comments
Aug 15, 2022 · Liked by Erik Hoel

Seems to me that the standard approach to criticizing any philosophy - personal, political, economic,... - is to use exceptions to negate the rule (except for when your own philosophy is in the firing line).

The hubristic notion that humans can develop (or is it identify?) a rule that can cover all situations is the source of so much animosity AND idiocy. Humans are flawed and limited, so anything that results from their efforts will be flawed and limited.

Why can't we just accept that and "pursue perfection" instead of "demand perfection"? Perfection can never be attained, but the pursuit of it is a worthwhile activity.


This is my favorite piece of your writing to date. I share your perception of utilitarianism, and I thought this was a great, timeless meta-summary of the ways it goes crazy but also why EA is good in the short term. Much of my conflict with EA comes from its self-professed criteria that, for a problem to be suitable for EA, it must be important, neglected, and tractable. Neglectedness seems like a cop-out from contributing to popular causes that are still incredibly important, and tractability is so subjective that it could justify whatever you want (e.g., AI safety doesn't seem tractable because we don't actually know if or how to generate machine consciousness, but math PhDs swear it's tractable and thus we have Yud's AI death cult. Meanwhile, permitting reform seems really important, but math PhDs say "not tractable" because you'd have to build a political coalition stronger than special interests, and if it's not solvable with a blog post or LaTeX, it's impossible.)

Aug 15, 2022 · edited Aug 15, 2022 · Liked by Erik Hoel

The flat rejection and denial of qualitative properties is the hallmark of modern moral philosophies. Rawls is as guilty of this as any utilitarian or consequentialist.

Without qualitative properties, the issue of "morality" turns into a tidy package of easily quantifiable goods and goals which are, of course, open to the rationalizations and calculations preferred by the so-called "rationalists".

The only thing they lose in the process is any real sense of goodness and its opposite. There are no intrinsically good/better goals or ends. Every goal and act can be assessed with the same flattened amoral logic. Which, when you think about it, is a damn strange way to get to a *moral* philosophy.

When an effective altruist uses the word "good" and relative evaluative concepts like "better", understand that they use them as placeholder terms which are meaningless. They have no referents.

What is a "good state of affairs"? A "good event"? A "good outcome"? Any answer that isn't uselessly vague has to get into precisely which things are good and for what reason. (That way of reasoning hasn't gone very far since G.E. Moore.)

So the EA folks *stipulate* that this or that state of affairs is good or better or whatever. But that's all sleight of hand to change the subject away from morality while hoping nobody notices.

Behind that evaluation is a bare description of the facts. And if I happen not to care about the facts they care about, what of it? It rapidly decays into the usual relativist hash of arguing over "which good and whose good."

It's an empty, bankrupt form of ethics that pleases the ruling classes and justifies their projects of managing society at scale. Otherwise I give them little attention.

BTW - as I never tire of mentioning, Philippa Foot first introduced the Trolley Problem in her paper "The Problem of Abortion and the Doctrine of the Double Effect" [edit: mentioned the wrong paper] as an example to show how our moral intuitions differ according to what is done deliberately, or what is allowed to happen, even if the outcomes are identical.

Her point was not that one ought to do this or that in the trolley case. She was illustrating that the intentions and motives of the moral agent are highly relevant to how we judge the moral worth of actions, and that the consequentialist's logic cannot get to grips with this.

Effective altruists screech about "good!" and "better!" without the faintest glimpse of these problems. This is why they are awful.


Quite persuasive but it seems ‘Why I Am Not a Utilitarian’ would have been a more appropriate title.

Aug 15, 2022 · Liked by Erik Hoel

Wow. That's what is called a hit piece, I guess?! As for me: I am smashed. - And you should get that prize! - For crazy personal reasons, it had the funny side effect of working for me as an apologia for being a Christian: it is all right, actually, just keep it LOCAL! - Which is exactly what Jesus said, lol: "Thou shalt love thy neighbour as thyself." (Matthew 22:39) Thy *neighbour*. That could be paying for the medical treatment of a baby in Luzon (if you happen to be there or are married to his mother's sister). Or adopting a stray dog in Turkey (source: "Saving Lucy" https://www.amazon.com/Saving-Lucy-girl-bike-street/dp/193771585X - warning: strong stuff!), if she crosses your path and touches your soul.

It may be Newtonian morality/theology, but on that level it shall work well enough. There is no Einstein of Morality, as yet - or is there?

Properly "diluted poison" might not even be a bad thing: Think broccoli. Or Paracelsus: “All things are poison and nothing is without poison; only the dose makes a thing not a poison.” - A faith that wants to cover all - yep, that stinks of poison. Think of Sharia, think Popes, think Maos. But see Mark 12:17:

Then Jesus said to them, “Give to Caesar the things that are Caesar’s, and give to God the things that are God’s.” The men were amazed at what Jesus said.

Jordan Peterson commented: "There you are: Secularism two-thousand years ago. In two short sentences. A miracle."

p.s.: Here is hoping E. Yudkowsky will not run into trouble being called a "pro-lifer".


I find that this is another instance where we've put lipstick on a pig, but maybe we've added a little blush and some eyeliner. While I agree with your conclusion about how effective altruism can proliferate relative to all other options (dilution), it's still a pig. Most people who do not adhere to a truth outside themselves are interested in one thing, and one thing only -- maximizing self-preservation. They choose philosophical models that best optimize their chances for long life, health, and wealth, whatever the cost to everyone else. They'll tell you it's because their philosophy benefits humanity the most, but when you dig deep it's only because they want to land in the majority camp, where self-preservation is optimized. Nobody wants to be the organ donor. They all want to be the organ recipient.

Aug 15, 2022 · edited Aug 15, 2022 · Liked by Erik Hoel

This post effectively captures why I went from being strongly anti-EA in the mid-2010s to being sympathetic to (though still critical of) the movement. Back then, the movement really did seem like a bunch of utilitarian fanatics, but now it has, as you say, diluted the poison. I now feel I can be sympathetic to many of EA’s goals, and even contribute to them if I ever have the opportunity, without being a utilitarian.

I do have concerns about the movement’s ongoing shift to longtermism though. While focusing more philanthropic attention on pandemic prevention, for example, is good, I’m uneasy about the increasingly strong emphasis on apocalyptic AI scenarios.

Taking the outside view, I see a bunch of analytic philosophers and tech people (and those with similar dispositions and social circles) convincing themselves that interesting intellectual work on AI, which they are uniquely qualified to do, is by far the most important thing to work on. When I read EA discourse online, I can just tell that EAs find AI more fun to talk about than anything else. I also see worrying signs of groupthink about this topic, such as the fact that belief in short-term timelines for the emergence of superintelligent AI seems to be becoming a strong signal of in-group membership. Ajeya Cotra, one of the leading EA thinkers on AI timelines, straight up admitted that she is biased against people who argue for longer AI timelines and doesn’t find their arguments as salient [1]. I give her credit for admitting as much, but it’s a worrying sign.

To be clear, I think AI safety in a broad sense is a real issue that is neglected by wider society. What I’m specifically worried about in EA are the near-term (in the next 25 years, say), “hard takeoff”, superintelligent AI apocalypse scenarios that seem to be the emerging EA consensus.

1. https://www.alignmentforum.org/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines

Aug 15, 2022 · Liked by Erik Hoel

I'm not a utilitarian, and harvesting 1 person's organs to save 5 still seems obviously correct to me. Why is a healthy person more deserving of life than a sickly person? I find it abhorrent to treat different people's lives as though they are of different intrinsic value.

I'm sure you disagree. My point is that different people have different moral intuitions, and simply dismissing anyone who has different values as "warped by dogma" is not likely to convince anyone who doesn't already agree with you.


This crystallized a lot of things for me. Thank you. If you're looking for requests for future posts: what other moral philosophies are there? Anything that works better than utilitarianism?

Sep 26, 2022 · edited Sep 26, 2022 · Liked by Erik Hoel

"Does computing the expected utility feel too cold-blooded for your taste? Well, that feeling isn’t even a feather in the scales, when a life is at stake. Just shut up and multiply."

If we're leaving feelings out of it, and being cold-blooded calculators of utility, then why on earth am I supposed to care enough to bother doing the calculation? The entire idea of helping others is rooted in our feelings. Denying these undoes the entire project.

Asking people to treat others humanely by denying their own humanity has to be one of the silliest philosophical projects that man in all his flawed humanity has ever devised.

Aug 15, 2022 · Liked by Erik Hoel

You say you aren't sure what a practical solution could be. I think one solution is that instead of advocating for the greatest good for the greatest number, as utilitarians do, we could advocate for measuring outcomes. The work coming from GiveWell isn't original because it's utilitarian so much as it's original because outcomes were measured. Benefactors started measuring how school attendance was impacted by deworming, for instance, instead of just assuming that the best way to increase school attendance would be to hire more teachers.

Maybe EA should try to distance itself from its assessments - to make more neutral statements like "if we do X then Y will happen" rather than "we should do X". However, then you get into questions like "how do you define EA?", which I don't know a real answer to. I kind of have my own beliefs independent of "EA doctrine", but is there such a thing as EA doctrine anyway? I'm not sure.


Back in 2011, SSC wrote a section in his Consequentialism FAQ entitled, "Wouldn't consequentialism lead to [obviously horrible outcome]?" (https://web.archive.org/web/20161115073538/http://raikoth.net/consequentialism.html)

The answer is: of course not. If you think that utilitarianism would lead to [obviously horrible outcome], then by definition that wasn't utilitarianism! You were leaving something out of your equation.

The world with rogue surgeons hunting people on the streets is a world where billions of people would be outraged and terrified. That all should go into the equation. It is not a world that utilitarianism would recommend.

You refer to a lot of this as epicycles, but really this is a simple and straightforward application of utilitarianism. When utilitarianism calls on us to estimate the consequences of our actions, that means *all* of the consequences, including second and third order consequences. Those aren't epicycles.

Oct 14, 2022 · Liked by Erik Hoel

This is excellent.

My intuitions made me averse to utilitarianism prior to reading any pop-ethics repugnant conclusions. I recall being presented with the trolley problem as a freshman philosophy major and thinking, “I don’t think I would pull the switch; who am I to decide who lives and dies?” My intuitions are perhaps more legalistic, as the law recognizes a difference in culpability between acting (pulling the switch) and failing to act (not pulling the switch). It is perhaps no surprise that I am a good old-fashioned, due-process-oriented classical liberal!

While it may seem like a simple math problem to many, for me it’s obvious that the logical next step is to start murdering homeless people and harvesting their organs… I agree wholeheartedly with your statement “I’m of the radical opinion that the poison means something is wrong to begin with.” For me, totally non-intuitive consequences always signal false premises somewhere. Ultimately, philosophy doesn’t have anything else to go on but our intuitions (and, of course, checking them against logical and empirical reality).

So as you say there are domain specific utilitarianisms that are fine, they amount to being effective or efficient with respect to some predetermined (and justified) goal, but utilitarianism cannot provide that goal. It’s no master morality.

Aug 16, 2022 · edited Aug 16, 2022

If effective altruism is only as good as Newtonian physics, that's not a criticism but an endorsement.

The engineers who build your cars and airplanes are working off Newtonian physics, not Einsteinian. Newtonian physics works great, as long as you remember not to use it up at relativistic speeds or down at quantum-mechanical sizes. At ordinary life scales, Newtonian is the right tool for the job.

Likewise, if you're a donor who wants your individual gifts to make a difference, at ordinary life scales, effective altruism is the right tool for the job.

I suppose there are people who really advocate total social transformation on the basis of napkin math and effective altruism. But why do I have to believe them just to use effective altruism for my personal charity?

It's a false choice! I'm not obliged to believe an author is God, just because I like and learned a lot from his books. And I'm not obliged to ask effective altruism to give me answers on a scale no single social program ever has.

Using an effective altruist mindset has limits, just like using Newtonian physics. But I'm glad I have something sharper than superstition and easier than Einstein to build bridges and airplanes with. And I'm likewise glad to have better tools to guide my charity.

I just don't see a need to believe that effective altruism is the Einstein of charity. It's pretty great just as the Newton.


While effective altruism is not entirely utilitarianism, you are correct that it has been highly influenced by utilitarianism. Like you, I have expressed a great deal of criticism about the totally unintuitive consequences of utilitarianism. Just this last Saturday I wrote an article criticizing utilitarians who do not embrace the full implications of their philosophical beliefs [1]. Namely, I thought that Scott Alexander should see falling fertility as a major problem because it represents a lot of human welfare lost unnecessarily. I found his response inadequate: it was that he is either indifferent between population sizes provided they are cool/interesting, or that he has a scaling function which he will not explicitly state "lest [I] trap [him] in some kind of paradox." Not all utilitarians are total utilitarians, but I share your feeling that the modifications to avoid the Repugnant Conclusion seem like epicycles. They seem incredibly ad hoc-ish to me. I do know of a blogger who accepts all the bad-seeming conclusions, which I respect a lot. [2]

Generally, I would say that I am very supportive of effective altruism despite the philosophical disagreement. Yeah, taken to its extreme, the utilitarian philosophy seems absurd, but thankfully EAs seem not to fully consume the poison, to use your analogy. I think that metaphor is a bit over the top, though.

With regards to the RC, I think that you do have to accept the comparison. I accept that world Z is better than world A in moral value terms. However, I don't believe that there is a moral obligation to bring us to world Z. That seems fair. Otherwise, as you note, you have to come up with some solution which doesn't also have an even more ridiculous conclusion. I discussed a number of these in my article [1].

[1] https://parrhesia.substack.com/p/in-favor-of-underpopulation-worries

[2] https://benthams.substack.com

Aug 15, 2022 · Liked by Erik Hoel

Very interesting read! I’m not that familiar with EA, but my takeaway is that ‘diluting’ the poison into already (generally) agreeable moral stances does not add much to the moral debate. Still, with its nice packaging and strong online presence, the movement is pushing things forward. Who would morally object to that?
