Debate with Matthew Adelstein on Utilitarianism (2/5)
I ask for some clarifications and reject the main argument he gives for utilitarianism
Past Posts:
I want to clarify a few things from his latest post and then respond to the two main arguments of his opening section: an argument based on rational agents and a rejection of rights.
I’d agree that given moral uncertainty, we shouldn’t act as strict utilitarians. However, this fact does nothing to show utilitarianism is not correct. This debate is about what one in fact has most reason to do — be it the utilitarian act in all situations or some other — so pointing out what it’s reasonable to do given moral uncertainty (which is much like factual uncertainty) does nothing to show that utilitarianism is not correct. Discussion of how we should practically reason given uncertainty has nothing to say about which theory is actually correct.
Emphasis mine. It’s unclear to me whether Matthew intends this debate to be about what one has the most reason to do or about which moral theory is most likely to be correct. I don’t think these are the same question, since you could have the most reason to take an action that contradicts the moral theory that you find most likely. For example, even if you’d give 3 to 1 odds against moral realism being true, you should still act as if it’s true, since if it’s false then it doesn’t matter what you do anyway.
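To put the wager arithmetic on the table, here is a toy sketch in Lean: the 3-to-1 odds from above, a made-up payoff, and the stipulation that nothing you do matters if realism is false.

```lean
-- If moral realism is false, no action has moral value, so that branch
-- contributes 0 however you act. The probability and payoff are illustrative.
def expectedValue (pRealism payoffIfTrue : Float) : Float :=
  pRealism * payoffIfTrue + (1 - pRealism) * 0

#eval expectedValue 0.25 1.0  -- acting as if realism is true: 0.25
#eval expectedValue 0.25 0.0  -- acting as if it is false:     0.00
```

Whatever happens on the realism-is-false branch washes out, so any positive credence in realism makes acting on it the better bet.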
a strong form of impartiality . . . collectively self defeating. After all, if we all do what’s best for our families at the expense of others, given that everyone is part of a family, every person doing what’s best for their own family will be bad for families as a whole. . .
It’s not clear to me what’s meant by “strong” here. But more importantly, the hypothetical posed is meaningfully different from the question of what to do in an individual case.
Scenario A: A parent faced with a choice between saving his own child and saving the children of two distant strangers has a good reason—his obligation to act in the interests of his own children—to take the first option.
Scenario B: Suppose he were faced instead with the knowledge that 100 randomly selected people (including, potentially, himself) would each be presented with the dilemma above. Given the opportunity to force them all to choose one option or the other, he should force them all to choose to save the strangers. This follows the same principle, since it’s in the best interest of his own child as well as of all the children involved.
These are two distinct scenarios. Choosing your own child in Scenario A doesn’t somehow force other people to choose the wrong option in Scenario B, and there’s no contradiction in following a general principle that leads you to be partial to your own children in both cases.
Fourth, it seems very clear that when we think in terms of what is truly important, our family members are not intrinsically more important than others
I’m not sure exactly what’s meant by “intrinsic importance,” but from my reading, the idea that decisions should be made based on “intrinsic importance” assumes impartiality, so this is circular. Your particular agent-relative obligations give you good reasons other than anyone’s “intrinsic importance.”
Matthew’s main argument for utilitarianism appears in his opening statement, in the section titled “A Syllogism.” It’s hard to understand since I’m not sure what’s meant by “good for oneself” or by the distinction between “happiness” and “happiness for oneself.” My best guess is that self-interested action is what’s meant by “produces the most good for oneself” and that self-interest is what’s meant by “good for people who are rational egoists.”
I’ve rewritten it below based on what I understood the arguments to mean, and so that the reasons for the implications interleave with the numbered claims rather than being listed at the end.
1. Call a “rational egoist” someone who does only what maximizes his self-interest.
2. Hedonism is correct.
3. So happiness is the only kind of good.
4. So a rational egoist only does what maximizes his happiness.
5. If something is in the self-interest of rational egoists but not good for people in general, then it must have unique benefits that only apply to rational egoists.
6. A person’s happiness is in his self-interest if he is a rational egoist, but it doesn’t have unique benefits to him only if he is a rational egoist.
7. So a person’s happiness must be good for him.
8. In order for something to be good in general, it has to be in the self-interest of some people.
9. Because hedonism is true, this is the same as saying that for something to be good in general, it must make some people happy.
10. Only the total happiness is like this.
11. So the total happiness is the only good.
12. We should act in a way that maximizes the good.
13. So we should act in a way that maximizes the total happiness, which is utilitarianism.
I guess that “happiness” when used separately from “happiness for oneself” means “the total happiness,” because the argument might be trivially circular otherwise? I’m not really sure what’s going on here and it’s possible I’ve changed the argument in trying to rewrite it.
Even if we accept that hedonism is correct, which I wouldn’t grant, claims (8) and (12) aren’t plausible unless you already accept the conclusion that utilitarianism is correct. There are things that are good that aren’t in the direct self-interest of any particular person, and you have reasons to act other than maximizing the good, such as meeting your obligations.
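To make the load-bearing premises visible, here is a minimal propositional skeleton of my rewrite in Lean 4. The proposition names and the grouping of steps are mine, and Lean checks only that the conclusion follows from the hypotheses, not that any hypothesis is true.

```lean
-- Each `Prop` stands in for one numbered claim; each hypothesis is a bridging
-- implication taken at face value. Names and groupings are my own reading.
example
    (Hedonism OnlyHappinessIsGood GoodRequiresSelfInterest
     TotalHappinessOnlyGood MaximizeTheGood Utilitarianism : Prop)
    (h2    : Hedonism)                                       -- claim (2)
    (h8    : GoodRequiresSelfInterest)                       -- claim (8), contested above
    (h12   : MaximizeTheGood)                                -- claim (12), contested above
    (step1 : Hedonism → OnlyHappinessIsGood)                 -- (2) to (3)
    (step2 : OnlyHappinessIsGood → GoodRequiresSelfInterest →
             TotalHappinessOnlyGood)                         -- (8)–(10) to (11)
    (step3 : TotalHappinessOnlyGood → MaximizeTheGood →
             Utilitarianism)                                 -- (11) and (12) to (13)
    : Utilitarianism :=
  step3 (step2 (step1 h2) h8) h12
```

On this reading the inference goes through, which is exactly why the dispute lands on the premises: deny (8) or (12) and the conclusion never arrives.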
Lastly, I’ll address Matthew’s rejection of rights.
Everything that we think of as a right is reducible to happiness. For example, we think people have the right to life. Yet the right to life increases happiness. We think people have the right not to let other people enter their house, but we don’t think they have the right not to let other people look at their house. The only difference between shooting bullets at people, and shooting soundwaves (ie making noise) is one causes a lot of harm, and the other one does not.
Emphasis mine. This principle isn’t universally true. Rights violations aren’t just a kind of hedonistic harm, since you can harm someone without violating his rights and violate someone’s rights without making him worse off happiness-wise. For example, it could make me unhappy that someone exists, but his existence doesn’t violate any of my rights. Someone could also force me to improve my diet, which wouldn’t harm me but would violate my rights.
For example, we don’t think it’s a violation of rights to look at people, but if every time we were looked at we experienced horrific suffering, we would think it was a rights violation to look at people.
I don’t endorse a position where you should impose horrific suffering to avoid violating any rights; my claim is just that respecting rights gives you some good reason for action irrespective of that action’s consequences.
If we accept that rights are ethically significant then there’s a number of rights violated that could outweigh any amount of suffering.
I don’t think this follows. It’s possible that there are different kinds of rights and different kinds of suffering and that some of these are incommensurable.
If my opponent argues for rights then I’d challenge him to give a way of deciding whether something is a right that is not based on hedonic considerations.
I would start with intuitions from particular cases.
Matthew does include among his many arguments the strongest argument against weak deontology: the aggregation problem. Michael Huemer’s “A Paradox for Weak Deontology” describes a thought experiment:
Torture Transfer: Mary works at a prison where prisoners are being unjustly tortured. She finds two prisoners, A and B, each strapped to a device that inflicts pain on them by passing an electric current through their bodies. Mary cannot stop the torture completely; however, there are two dials, each connected to both of the machines, used to control the electric current and hence the level of pain inflicted on the prisoners. Oddly enough, the first dial functions like this: if it is turned up, prisoner A’s electric current will be increased, but this will cause prisoner B’s current to be reduced by twice as much. The second dial has the opposite effect: if turned up, it will increase B’s torture level while lowering A’s torture level by twice as much. Knowing all this, Mary turns the first dial, immediately followed by the second, bringing about a net reduction in both prisoners’ suffering.
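To make the dial arithmetic concrete, here is a toy version in hypothetical pain units (Huemer gives no numbers; the starting levels and unit size below are made up):

```lean
-- Turning a dial up one unit adds 1 to one prisoner's pain and subtracts 2
-- from the other's. All quantities are hypothetical illustration.
def afterBothDials (a b : Int) : Int × Int :=
  let (a1, b1) := (a + 1, b - 2)  -- first dial: A up one, B down two
  (a1 - 2, b1 + 1)                -- second dial: B up one, A down two

#eval afterBothDials 10 10  -- (9, 9): both prisoners end up one unit better off
```

Each turn taken alone increases one prisoner’s suffering, yet the pair of turns strictly reduces both.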
In general, weak deontology suffers from the problem that two actions A and B can each be wrong taken independently but acceptable if done at once or in quick succession as part of a combined action, which is unintuitive. The best argument for utilitarianism is the flaws in all other moral theories, but utilitarianism’s flaws are more severe, and it’s more likely that there is a superior undiscovered moral theory than that utilitarianism is correct.