13 Comments

Those examples against utilitarianism don't seem obvious at all; or rather, they are obvious, but they point in the exact opposite direction. All of those outcomes are ones I'm fine with, ignoring the practical issues, of course.

"It’s much better to take your intuitions about particular cases to make moral decisions,"

Why? You haven't actually explained that at all.


I'm hardly in favor of the NAP, but the scenarios you present don't "obviously disprove" anything. I could come up with a list of scenarios with unfortunate outcomes under the rule "Don't commit force against others unless doing so would produce much more utility, at least 10 times more if not a higher bar," but the only thing that list would obviously disprove is the idea that a good moral system will never produce unfortunate outcomes.

author

Can you provide that list?


I think this post is wrong about what the utilitarian action is in many of these cases. I expect that if utilitarian EAs started stealing from their roommates when they were certain they wouldn't be caught, then the world would be a worse place. I'm not a perfect reasoner, I know I'm not a perfect reasoner, and the potential downsides of reputation loss far outweigh the benefit from some additional donations.

Even if it were really possible to be certain that I wouldn't be caught, I still wouldn't steal from my roommate. If utilitarians stole from their roommates whenever they wouldn't be caught, people would simply not be roommates with utilitarians, and so utilitarians shouldn't steal. It's true that decision theory does not apply perfectly to humans, but, at least in my case, I don't trust my ability to deceive others enough to do anything but genuinely commit to not stealing.

Now, it is true that the above objections are somewhat convenient for utilitarianism, and certainly don't address the least convenient world. There are cases where I do think utilitarianism strongly clashes with common-sense morality. But in those cases, I would still strive to be a utilitarian. I think the idea that a moral system can be shown to be wrong because it clashes strongly with common-sense morality is incorrect. To the extent that my own instincts are a guide to moral truth, I would expect my instincts about moral principles to be a better guide than my instincts for specific situations. Analogously, I think people have good instincts about the principles that probability theory must follow but are often wrong about the probability of specific events, and it would be a mistake to try to construct probability theory by finding the rules that allow you to derive what probabilities humans give to events.
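To illustrate the analogy, here is a minimal sketch in Python (the numbers are hypothetical intuitive judgments in the style of Tversky and Kahneman's Linda problem, not anything from the post): people readily endorse the principle P(A and B) ≤ P(A) in the abstract, while their case-by-case judgments often violate it.

```python
# Hypothetical intuitive judgments about a vivid, concrete scenario:
p_bank_teller = 0.05           # "Linda is a bank teller"
p_teller_and_feminist = 0.10   # "Linda is a bank teller AND a feminist"

# The abstract principle almost everyone endorses: P(A and B) <= P(A).
# Building probability theory out of the concrete judgments would bake
# this violation (the conjunction fallacy) into the theory itself.
print("Judgments violate the endorsed principle:",
      p_teller_and_feminist > p_bank_teller)  # True
```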


I'm sorry, but to me, each of "Grandma" and "Child" either clearly gets the utilitarian verdict wrong (i.e., the utilitarian verdict actually agrees with the common-sense one) or is absolutely outlandish, i.e., very unlikely to ever actually happen to anyone. Instead of going into details about why that is the case, I'll just copy some relevant paragraphs from Eliezer Yudkowsky (albeit in the context of a different discussion, https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy, but I'm hoping the relevance is clear after reading):

"Q4: Okay, but from a coldly calculating expected utility perspective, why isn't it good to lie to keep everyone calm? That way, if there's an unexpected hope, everybody else will be calm and oblivious and not interfering with us out of panic, and my faction will have lots of resources that they got from lying to their supporters about how much hope there was! Didn't you just say that people screaming and running around while the world was ending would be unhelpful?

A: You should never try to reason using expected utilities again. It is an art not meant for you. Stick to intuitive feelings henceforth.

There are, I think, people whose minds readily look for and find even the slightly-less-than-totally-obvious considerations of expected utility, what some might call "second-order" considerations. Ask them to rob a bank and give the money to the poor, and they'll think spontaneously and unprompted about insurance costs of banking and the chance of getting caught and reputational repercussions and low-trust societies and what if everybody else did that when they thought it was a good cause; and all of these considerations will be obviously-to-them consequences under consequentialism.

These people are well-suited to being 'consequentialists' or 'utilitarians', because their mind naturally sees all the consequences and utilities, including those considerations that others might be tempted to call by names like "second-order" or "categorical" and so on.

If you ask them why consequentialism doesn't say to rob banks, they reply, "Because that actually realistically in real life would not have good consequences. Whatever it is you're about to tell me as a supposedly non-consequentialist reason why we all mustn't do that, seems to you like a strong argument, exactly because you recognize implicitly that people robbing banks would not actually lead to happy formerly-poor people and everybody living cheerfully ever after."

Others, if you suggest to them that they should rob a bank and give the money to the poor, will be able to see the helped poor as a "consequence" and a "utility", but they will not spontaneously and unprompted see all those other considerations in the formal form of "consequences" and "utilities".

If you just asked them informally whether it was a good or bad idea, they might ask "What if everyone did that?" or "Isn't it good that we can live in a society where people can store and transmit money?" or "How would it make effective altruism look, if people went around doing that in the name of effective altruism?" But if you ask them about consequences, they don't spontaneously, readily, intuitively classify all these other things as "consequences"; they think that their mind is being steered onto a kind of formal track, a defensible track, a track of stating only things that are very direct or blatant or obvious. They think that the rule of consequentialism is, "If you show me a good consequence, I have to do that thing."

If you present them with bad things that happen if people rob banks, they don't see those as also being 'consequences'. They see them as arguments against consequentialism; since, after all, consequentialism says to rob banks, which obviously leads to bad stuff, and so bad things would end up happening if people were consequentialists. They do not do a double-take and say "What?" That consequentialism leads people to do bad things with bad outcomes is just a reasonable conclusion, so far as they can tell.

People like this should not be 'consequentialists' or 'utilitarians' as they understand those terms. They should back off from this form of reasoning that their mind is not naturally well-suited for processing in a native format, and stick to intuitively informally asking themselves what's good or bad behavior, without any special focus on what they think are 'outcomes'.

If they try to be consequentialists, they'll end up as Hollywood villains describing some grand scheme that violates a lot of ethics and deontology but sure will end up having grandiose benefits, yup, even while everybody in the audience knows perfectly well that it won't work. You can only safely be a consequentialist if you're genre-savvy about that class of arguments - if you're not the blind villain on screen, but the person in the audience watching who sees why that won't work."

I feel bad that the tone of my comment may come across as overly harsh. If anyone is interested, I'm happy to also give concrete arguments for why the utilitarian verdict is not what's claimed in the post (in any such situation with, let's say, a greater than 1-in-10^6 chance of happening to anyone).

author

Alternative Bulverism: people are drawing conclusions first based on actual morality and then retroactively confabulating utilitarian justifications for those conclusions.

Also, the situations aren't all outlandish. Stealing without being noticed is not that hard.


I would gladly think differently about these cases if that's what the utility calculation suggested.

Regarding whether the grandma case is outlandish: far fewer than 1 in 10^6 people have a grandma with cash stashed away whom they could kill with a less-than-epsilon chance of getting caught, while also satisfying all of the following:

- being the only other person in the world who knows about the cash (while not being the heir);

- no one else (including the tax man) ever finding out about the suspicious donations;

- not being psychologically affected enough for the resulting loss in future productivity to overshadow this one-time donation;

- a less-than-epsilon chance of affecting trust in personal relations in general (note that even a tiny effect on the many people who might read of this could add up to something big);

- a less-than-epsilon chance of contributing to an anti-utilitarian craze that leaves many fewer people likely to donate money effectively; and so on.

And all this while being certain enough about every point to consider oneself to be increasing utility in expectation by carrying it through. In fact, I would wager that 0 people alive today are in this situation.
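To make the compounding explicit, here is a rough back-of-the-envelope sketch in Python (every probability is an illustrative guess of mine, not a measurement, and independence is assumed purely for simplicity):

```python
# Generous, made-up odds for each prerequisite of the "Grandma" case:
prerequisites = {
    "grandma with a large hidden cash stash": 1e-2,
    "negligible chance of ever getting caught": 1e-3,
    "being the sole knower of the cash, yet not the heir": 1e-2,
    "donations never raising anyone's suspicion": 1e-1,
    "no lasting psychological or productivity hit": 1e-1,
    "no erosion of trust, no anti-utilitarian backlash": 1e-2,
}

p_all = 1.0
for p in prerequisites.values():
    p_all *= p  # treating the prerequisites as independent is generous too

print(f"Joint probability: {p_all:.0e}")  # 1e-11, far below 1 in 10^6
```

Even with each prerequisite granted generous odds individually, the conjunction lands many orders of magnitude below the 1-in-10^6 bar.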

author

By "outlandish" I meant "inconceivable" or "implausible" (in the sense that a doctor harvesting organs in secret is implausible), not rare.


I think I don't understand the sense in which a doctor harvesting organs is implausible, but these cases are not. (I understand there might be a difference in degree, but I don't understand what difference in kind you have in mind.) The prerequisites needed for either of these cases to go through with a utilitarian verdict that disagrees with common sense also seem rather implausible.


The stealing case is less outlandish than the grandma case, but it still seems incredibly unlikely. In any realistic case, there is again a significant chance of getting caught one day. And even if, in some objective sense, there isn't, essentially no one can ever know that there isn't (for instance, that their son will never press the "what are my expenses" button in their online bank account), which is what's relevant for whether the decision is justified in subjective expectation. And really bad things would happen if you did get caught. Even if you don't get caught, the time almost anyone would spend thinking about this alone (both before and after the act) might well be used instead to generate more than $1000.
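As a quick illustration of the subjective expected-value point, here is a sketch in Python (every figure is an assumption made up for the sake of the example, not a claim about anyone's actual finances):

```python
# Made-up inputs for stealing $1000 from a roommate and donating it:
gain_if_uncaught = 1_000    # dollars donated if it works
p_caught = 0.05             # over years of cohabitation, hardly negligible
cost_if_caught = 50_000     # reputation, relationships, legal exposure
hours_ruminating = 20       # planning and worrying, before and after the act
value_per_hour = 60         # what that time could have earned (and been donated)

expected_value = ((1 - p_caught) * gain_if_uncaught
                  - p_caught * cost_if_caught
                  - hours_ruminating * value_per_hour)
print(f"Expected value: ${expected_value:,.0f}")  # roughly -$2,750
```

On numbers anywhere near these, the theft is negative in expectation even before counting the broader effects on trust.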


> Just like the NAP-libertarians, many utilitarians share the common sense intuition

That the earth is flat?

That God is watching us?

That an object's velocity is proportional to the force acting on it?

Why does everyone accept deeply counterintuitive propositions which have been very well established by empirical methods, but then turn around and insist that common sense intuition is the right way of checking things?


You should continue posting. If you ever want to discuss ethical intuitionism with someone, reach out to me.


> (Both examples taken from Bryan Caplan)

That post seems to be mostly quoting Dan Moller's book.

I'm confused about why the "Grandma" example includes murder instead of just stealing, though; killing Grandma seems to strictly reduce total utility. I can't help but suspect it was sneakily added in just to make the utilitarian seem more evil.
