Many deontological libertarians believe that all of politics can be resolved through inferences from a single moral rule, the non-aggression principle:
NAP: It is always immoral to commit force against other individuals.
Among themselves they may debate the exact meaning of force with regard to property rights, self-defense, fraud, defamation, the rights of children, and so on.
Many of the knee-jerk objections they hear are weak or question-begging: that the market could not possibly provide goods currently managed by the state, that belief in property rights is arbitrary or incorrect, that limited government in line with the NAP would result in a popular uprising. Rebutting these weak objections only strengthens their conviction.
At the same time, there are obvious counterexamples:
Starving in the Woods: You’re lost and starving in the woods and come across an empty cabin with some food. You could eat the food or starve. The owner has an explicit policy prohibiting anyone from taking his food.
Lifeboat: You’re in a lifeboat with two other people and it’s obvious that the only way to survive till rescue is if all three of you bail water. You alone are armed and can coerce the other two—who favor an inscrutable strategy based on adding water to the boat—to bail water with you.
These obviously disprove the NAP as stated, but this doesn’t stop many libertarians from stubbornly assimilating these situations into their single elegant rule, even though they could keep almost all of their conclusions with a more agreeable rule like “Don’t commit force against others unless doing so would produce much more utility, at least 10 times more if not a higher bar.”
I perceive the utilitarians I know to have a very similar mentality. They begin with their moral rule:
“The utilitarian doctrine is that happiness is desirable, and the only thing desirable as an end; all other things being only desirable as means to that end.”
- Mill, Utilitarianism (punctuation modernized)
Among themselves they may debate between push-pin and poetry; pleasure and preferences; and average and sum. Knee-jerk objections include silly criticisms, such as that utilitarianism is “too demanding” or that the difficulty of computing happiness justifies ignoring it.
At the same time, there are equally obvious counterexamples:
Grandma: Grandma is a kindly soul who has saved up tens of thousands of dollars in cash over the years. One fine day you see her stashing it away under her mattress, and come to think that with just a little nudge you could cause her to fall and most probably die. You could then take her money, which others don’t know about, and redistribute it to those more worthy, saving many lives in the process. No one will ever know. Left to her own devices, Grandma would probably live a few more years, and her money would be discovered by her unworthy heirs who would blow it on fancy cars and vacations. Liberated from primitive deontic impulses by a recent college philosophy course, you silently say your goodbyes and prepare to send Grandma into the beyond.
Child: Your son earns a good living as a doctor but is careless with some of his finances. You sometimes help him out by organizing his receipts and invoices. One day you have the opportunity to divert $1,000 from his funds to a charity where the money will do more good; no one besides the beneficiaries will ever notice the difference. You decide to steal your child’s money and promote the overall good.
(Both examples taken from Bryan Caplan)
This doesn’t require any outlandish situations involving fat men and trolleys or implausible norm-free organ harvesting. Many of you could have stolen some cash from a roommate and spent it ten times more altruistically without being detected, but this is intuitively unethical because the good you would do doesn’t justify stealing.
Just like the NAP-libertarians, many utilitarians share the common-sense intuition against the utilitarian conclusion in these scenarios, but either reject the intuition in favor of bullet-biting, declare that the common-sense solution must indirectly lead to higher utility, or claim that they would follow their intuition but that this is a moral failing of theirs.
All this when they could just as easily adopt the same rule of thumb at little cost to their important beliefs: “Don’t commit force against others unless doing so would produce much more utility, at least 10 times more if not a higher bar.”
You can read some more arguments against utilitarianism here, but my goal isn’t really to argue over which intuitions are plausible. My sights are on those who, in my experience, are most easily converted from bean-counting, who have the non-utilitarian intuitions about these situations but are constrained by feelings like these:
That while people can convince each other about empirical facts, they can’t as easily convince each other of differing intuitions about particular cases, and that this is a reason to reject those moral intuitions. (This is a non-sequitur.)
That they must choose between broad moral claims like that utility is the sole good or that the only good is to follow a set of rules. (There’s no need to do this. You can reject utilitarianism and deontology both.)
That common-sense ideas like supererogation or special obligations are unsatisfying simply because they do not allow you to reduce ethics to certain elegant mathematical operations.
It’s much better to take your intuitions about particular cases to make moral decisions, and to resolve conflicts using the relative credence of your intuitions in light of moral uncertainty. You don’t need to abandon what is obvious in search of what is elegant.
Those examples against utilitarianism don't seem obvious at all, or rather, they are obvious, but for the exact opposite reason. All of those outcomes are ones I'm fine with, ignoring the practical issues of course.
"It’s much better to take your intuitions about particular cases to make moral decisions,"
Why? You haven't actually explained that at all.
I'm hardly in favor of the NAP, but the scenarios that you present don't "obviously disprove" anything. I could come up with a list of scenarios with unfortunate outcomes under the rule, “Don’t commit force against others unless doing so would produce much more utility, at least 10 times more if not a higher bar,” but the only thing that would obviously disprove is the idea that a good moral system will never produce unfortunate outcomes.