The Problems with Utilitarianism

Utilitarianism in principle gets pretty close to what most people actually want from political affairs: the greatest good for the greatest number of people - who doesn't want that? Of course, for all the objections I have to the tradition of Utilitarianism, I support the maximization of good too (duh); I just recognize that it's unachievable as a social engineering problem.

Maximizing

So the first problem is one any mathematician will notice right off the bat: you can't, in general, maximize two objectives at once. That is, ideally we could maximize the amount of good in society or the number of people who feel that good, but almost certainly not both (if we can, it's a bizarre coincidence). It's like saying you want to find a house with the highest available altitude and the lowest available mortgage; the highest house probably doesn't have the lowest mortgage and vice versa, just as the way of running society which maximizes total happiness is almost certainly not the way which maximizes the number of people made happy.
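
A toy sketch of the point (the curves here are invented purely for illustration, not real measurements of anything): treat total happiness and the number of happy people as two separate objectives over possible policies, and notice that the policy maximizing one is generally not the policy maximizing the other.

```python
# Toy illustration: two objectives over the same set of "policies".
# Both functions are made up for illustration only.

policies = range(0, 101)  # e.g., some policy dial from 0 to 100

def total_happiness(p):
    # Invented curve: total happiness peaks at p = 30.
    return -(p - 30) ** 2

def number_of_happy_people(p):
    # Invented curve: the count of happy people peaks at p = 70.
    return -(p - 70) ** 2

best_for_total = max(policies, key=total_happiness)
best_for_count = max(policies, key=number_of_happy_people)

print(best_for_total)  # 30
print(best_for_count)  # 70
# No single policy maximizes both; any choice trades one off against the other.
```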

There are some classic moral puzzles that bring this out. Let's say there's a city where basically everyone is in absolute ecstasy, but their ecstasy is only possible if one particular person in the city is in intense, indescribable pain. Or flip it around: to maximize my happiness, we might need to make everyone in the world my slave and let me rule as I please. Although this might maximize my happiness, it almost certainly does not maximize anyone else's (if it does, however, we might want to consider it).

The Well-being of Conscious Creatures

So I recently read Sam Harris's The Moral Landscape, which is either a failed attempt to bring Utilitarianism back to life or a misguided book simply ignorant of what the problems with it were. Harris repeats a mantra in basically every paragraph of the book: "the well-being of conscious creatures - the well-being of conscious creatures - the well-being of conscious creatures." In addition to being repetitive, the term is problematic for important reasons. Harris wants our Utilitarian engineers to maximize "the well-being of conscious creatures," but the problem is that we can't add up enjoyment in the first place. There's no way of taking my enjoyment of candy, subtracting the pain of a broken nose and adding/subtracting an existential crisis or two.
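
To put the demand formally (the notation is mine, not Harris's), the Utilitarian engineer is asked to maximize some aggregate welfare function along the lines of:

```latex
% The aggregate the Utilitarian engineer is supposed to maximize:
% each u_i maps an outcome x to a real-valued utility for person i.
\[
  W(x) \;=\; \sum_{i=1}^{n} u_i(x)
\]
```

The objection of this section is that no well-defined u_i exists to plug in: subjective experience comes with no units, no zero point, and no common scale across people.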

Now his hope is that eventually we'll understand the neurology of the brain well enough to do just that. I don't take Harris for a fool, and he does have a PhD in neuroscience, but I think he's ignoring all the important problems, either to appeal to a public audience or just to convince himself. We can study the neurology of feelings and get readings of neural activity, but objective neural activity is certainly not subjective experience. Twice as much neural activity doesn't mean "twice" the subjective experience.

Of course one of the problems with qualia/subjective experience is that they are necessarily unquantifiable: imagine how you felt the last time you got a present you really enjoyed - now imagine yourself feeling exactly twice as happy - now 1.5 times as happy - now 100 times as happy. You can't do it, and even if you could, you couldn't compare that experience with other experiences - you can't really understand what it means to be as happy now as you were sad a month ago, and that prevents us from adding up your experiences into one number to be maximized. And even if we could, it would be impossible to add that number to someone else's. Humans have different subjective experiences: caffeine affects me demonstrably differently than it does other people, but I can't quantify that; some people are more affected by pain (to my understanding, women seem to have a neurology more pain-prone than men's), but how can we pin down the precise ratios between every individual person?

And of course, although Harris wants to maximize "the well-being of conscious creatures," we have no clue what kinds of conscious experiences define animal life, or how many animals are "conscious" in any recognizable sense. As Thomas Nagel noted, we can't even begin to imagine what it's like to be a bat - but to quantify bats' experiences and compare them to our own? Forget about it! Plus, as I've noted elsewhere, it's very possible for a conscious creature to be indifferent to pleasure or pain or well-being. And even if you're a vicious anthropocentrist like myself, who thinks we should organize human society for humans and leave other animals and aliens alone to live their lives as they always have (crazy, huh?), you still have to deal with all the other intractable problems.

"Rights"

But the name of the game today has been "but even if we could," so let me say: even if we could maximize two objectives at once, even if we could quantify and add together subjective experiences, Utilitarianism is still bunk. Why? Utilitarianism is a social engineering scheme, but it's also a moral theory. It says that the maximization of good feeling is the height of human morality. You might think that sounds good, but I don't think you actually agree with it. Nietzsche was clever enough to note that in moral philosophy we're not looking for answers (we already intuitively know what is right and wrong); we're looking for patterns in morality and a way to systematize moral systems (the Golden Rule is a good example of this - and a good example of a bad rule).

So let me suggest a scenario to explain what I mean. Let's say I'm a rapist. Let's say one day I rape an unsuspecting woman. Now the experience I feel might be elevating and enjoyable; let's say it's like A Clockwork Orange, with Beethoven's Ninth Symphony pounding in my head. But what about the woman? Let's say she didn't enjoy it, but it was only as bad as stubbing her toe or getting a hangnail - a minor annoyance. From a Utilitarian perspective, there was an enormous net increase in collective happiness; in other words, a serious Utilitarian wouldn't just say this wasn't a crime, but would say it was an act of social improvement - in fact, a moral necessity.

Now any sane person would say that this situation is breathtakingly immoral. But to make it even more dramatic, let's say that I commit rape against a woman, and she decides midway that it's not half bad at all - maybe she even enjoys it. Does this make the rape moral? I say emphatically "no" (good to know me and feminists agree about something), but I suspect someone who takes Utilitarianism seriously would have to say "yes." After all, the rape in this circumstance improves the well-being of everyone involved and damages no one.

Related to this are Ronald Coase's ideas about property rights. "How?" you might ask. Well, all humans, even self-avowed anarcho-communists, have an implicit idea of property rights, and again Utilitarianism violates these implicit property rights. Let's say I am now a robber, and I pickpocket a man of his wallet, which contains five dollars. Let's say the man notices and informs a Utilitarian Coaseatron 4000 police enforcement bot. The Coaseatron asks us what happened and I own up, saying I stole the wallet. The Coaseatron then begins shuffling away without a care, but the angered man grabs it, lamenting, "This man just admitted to stealing my wallet! Why won't you do something?" The Coaseatron will doubtless reply, "There are now five dollars in the wallet, which he can benefit from. Before, there were five dollars in the wallet, which you could benefit from. Net utility has not changed. If he were to return the wallet to you, sir, he would expend extra energy moving his hand to put it in your hand, which would be a marginal social loss. I thereby conclude that it is socially optimal that he keep the wallet." Ladies and gentlemen: Coasean property rights. (This is why libertarians hate the guy (and probably me, for treating five dollars as equally "valuable" to two people with subjective valuations).)
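
Here's a toy sketch of the Coaseatron's accounting (the bot and its arithmetic are of course my invention): if all you track is the sum of dollar-denominated utility, a pure transfer nets out to zero, and the rights violation becomes invisible to the enforcer.

```python
# Toy model of the Coaseatron's ledger: it tracks only net utility,
# not who rightfully owns what. All numbers are invented for illustration.

def net_utility_change(holdings_before, holdings_after):
    # Treats a dollar as worth the same to everyone - the same naive
    # assumption the Coaseatron makes about subjective valuations.
    return sum(holdings_after.values()) - sum(holdings_before.values())

before = {"victim": 5, "thief": 0}
after = {"victim": 0, "thief": 5}  # the wallet changes hands

print(net_utility_change(before, after))  # 0 -> "no crime occurred"
# The theft is utility-neutral on this ledger, so a purely Utilitarian
# enforcer has no grounds to reverse it. Consent and ownership never
# enter the calculation.
```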

In Context

Utilitarianism is intuitively appealing because, on the surface, who wouldn't want to maximize net happiness? The problem is that when you come to the table trying to address every individual situation from net utility, not only do you run into questions unanswerable due to the nature of subjective consciousness, but you get absolutely ludicrous results because of the narrowness of the engineering problem of Utilitarianism (as a side note, a lot of this has to do with the forgotten fact that human social life is an iterated game). So a lot of people will doubtless say, "Well, Utilitarianism is clearly not perfect, but it's probably the best method of social engineering yet [>implying there are any good ones]." But frankly that shows an ignorance of other possibilities already out there and a misunderstanding of the (potentially unbounded) problems in Utilitarianism, which I have only briefly alluded to.

So what's my suggested alternative? Simple: consent. Sure, we could (not really, but hypothetically) plug people's (and animals') brains into machines that measured their subjective valuations and compared them with everyone else's to make their decisions for them - or we could simply say that a moral (or at least non-immoral) act is one that we do with the consent of all affected parties. One of the elementary (yet groundbreaking) things that French classical liberals realized is that people generally consent to actions that affect them positively. Thus a voluntary action or exchange is one that benefits everyone involved. On the other hand, if only one affected party consents and the other doesn't, that's a good indication that the other person will be damaged by the action (as in our examples of rape and robbery). Instead of banning W, X, Y and Z, which the social engineers think damaging to social well-being (maybe marijuana, soda, homosexuality and the wage-price system), it would be better to ban nothing and say anything goes so long as everyone engaging in the smoking, gulping, gaying and capitalisting is consenting. This does with simple principles precisely what Utilitarianism alleges to do; any problem it might have is not absent from Utilitarianism.
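
The contrast between the two decision procedures can be made explicit with a sketch (the names and data structures are mine): the Utilitarian rule needs a real-valued internal quantity from every affected party - exactly the quantity argued above to be unmeasurable - while the consent rule needs only a yes or no from each.

```python
# Two decision rules side by side. The Utilitarian one needs numbers
# nobody can actually measure; the consent one needs only yes/no answers.

def utilitarian_permits(action):
    # Requires a real-valued utility delta for every affected party.
    return sum(party["utility_delta"] for party in action["affected"]) > 0

def consent_permits(action):
    # Requires only that every affected party agrees.
    return all(party["consents"] for party in action["affected"])

exchange = {
    "affected": [
        {"name": "buyer", "consents": True, "utility_delta": 2},
        {"name": "seller", "consents": True, "utility_delta": 3},
    ]
}
robbery = {
    "affected": [
        {"name": "thief", "consents": True, "utility_delta": 5},
        {"name": "victim", "consents": False, "utility_delta": -5},
    ]
}

print(consent_permits(exchange), consent_permits(robbery))  # True False
```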

So again, aside from being impossible at every single level, Utilitarianism is an active engineering affair that would require an intimate (were I more dramatic, I'd say fascistic) public knowledge of every person's inner world. We could easily replace an engineering problem with a simple heuristic or principle like "consent" - or theoretically any other applicable one - and get the same thing done immensely more efficiently. Incidentally, there's some annoying fetish Americans have for "pragmatism" which I've never understood. They like Utilitarianism because it's "undogmatic" and "unideological"; to translate, that means inconsistent and unpredictable for the people under its yoke (and of course fundamentally naïve for reasons we've seen). Being governed by an ideology, whether that's American libertarianism or Maoism, is superior just because you know what you're getting. But when a politician describes himself as a "pragmatist" or a "Utilitarian," you should know that it means, "Oh, I just do whatever I feel like depending on how my advisory committees construe things beyond our comprehension or what funders promise their support for." Political pragmatists, the Kissinger-Obama types, are always just meaningless and manipulable shells.

Anyway, the tradition of Utilitarianism was always a failure, but it's an interesting sign of the times. The Enlightenment was a time of some (less than usually thought) scientific advancement, and the idea was that as we began to understand the nature of the body and the stars and everything else, we could fully understand human society too. Eventually we could engineer and control it all. But as fast as we learn things about the world, even faster do complications arise, and we end up "[restoring nature's] ultimate secrets to that obscurity, in which they ever did and ever will remain," in Hume's words. This is most true of human affairs, not just because governing them this way relies on a totalitarian understanding of the unquantifiable traits of the human mind, but also on an exhaustive understanding of all the possible ways human action interacts with itself. My point here is that Utilitarianism is an attempt to govern society from the top down with an enormous system. It is, for various reasons, impossible, and not desirable either; the task is better left to the heuristics and principles humans act on independently.