In this post, I intend to suggest helpful improvements to the Consequentialism FAQ by Scott Siskind. I recommend it. If I do not quote and try to “improve” a passage, you may assume I agree with it (more or less). I consider myself a consequentialist, though probably not a utilitarian. (Perhaps I follow a peculiar version of rule utilitarianism? Basically, I concede most of the concepts behind the various “this proves utilitarianism” lifeboat scenarios, without committing to generalizations from such edge cases. Hopefully this will clarify itself as I proceed.) I think the FAQ is aimed at people more or less like me, so maybe the author would appreciate my suggestions. Quotes from the FAQ below are in italics. I will go through the FAQ in sequence.

0.4: And who cares?
The problem isn’t that people aren’t trying to be moral, it’s that they’re no good at it. This FAQ tries to explain how to do it better.

Siskind is pointing out how strongly morality motivates people. He is right. He implicitly blames many global problems on deontology or disbelief in consequentialism, but this does not follow. The problem could just as easily be that people are trying to be consequentialists, but they’re no good at it.

This FAQ does a great job of explaining some abstract principles. But after reading it, I am no wiser about how to “do it better.” I’m not sure whether the concrete advice and realistic examples should be sprinkled through the text or appended to the end, but the FAQ would benefit from more examples of things I can actually do. All I can find is two links, one of which leads to a private blog I cannot read. The other leads to an interesting blog entry making an abstract point that can, with some contortion, serve as an example of how to use cost/benefit analysis in a context far from my experience. So, more relevance, please.

1.3: Can we just accept all of our moral intuitions as given?
This section discusses “a reflective equilibrium among our various moral intuitions.” I think we can summarize this as: some of our intuitions may contradict each other, and we may find reasons for thinking an intuition is mistaken, so we need to adjust them.
This section seems natural as a location for some hand-waving in the direction of recent ideas about how the neurology of moral intuition works, and the role it played in our evolutionary past. Is it possible to summarize The Righteous Mind by Jonathan Haidt in one paragraph and have it sound convincing? Haidt’s book made me step back from some attitudes that Siskind still seems attached to. I hope this will clarify itself below.

2.2: Why [must morality live in the world]?
Rather than the parable of the heartstone (a magic fetish that removes evil from your actions, without changing any of the consequences), I’d prefer a nice concrete example.
Morality should make a difference in the world, according to Siskind. I agree. But every moral system labels actions good or bad, and if people succeed in sticking to their chosen system, the world will differ from one where they stick to some other system, or fail to stick to any. So the FAQ entry misstates or understates the point. I think he meant to say that we should adopt the moral system we think will bring about the best outcome. Maybe I lack imagination, but I cannot think of a moral system that has no implications for how people should behave, and so makes no difference in the world; I can only imagine moralities with different effects. Even wearing green clothes on Saturday has some implications and so will change the outcome for the world; it just seems unlikely to make any important difference. Not eating pork or shellfish? It used to have some health implications, and perhaps still does. Not mixing fibers? People who take the metaphysical position can always wave their hands and claim that some indirect social ill or benefit will result as a consequence.
I don’t think too many people will convict themselves of metaphysical thinking.  But if they claim some dire social cost, we get to test it. Isn’t that where this line of argument should go?

2.62: [phlogiston]
This section contains an elaborate metaphor involving phlogiston. I found this metaphor difficult to follow, and unhelpful. I suspect the author felt it was clever, but does it get the job done?

3.4: What do you mean by “warm fuzzies”?
This term refers to the happy feeling your brain gives you when you’ve done the right thing. Think the diametric opposite of guilt.
But just as guilt is not a perfect signal, neither are warm fuzzies. As Eliezer puts it, you might well get more warm fuzzy feelings from volunteering for an afternoon at the local Shelter For Cute Kittens With Rare Diseasess than you would from developing a new anti-malarial drug, but that doesn’t mean that playing with kittens is more important than curing malaria.
If all you’re trying to do is get warm fuzzy feelings, then once again you’re assigning value only to your own comfort and not to other people at all.

“Diseases” is misspelled.
This overstates the case. There are two issues: an imperfect metric is still better than no metric, and motivation is important in accomplishing social goals. Rejecting warm fuzzies is like rejecting sensory perceptions because of the possibility of sensory illusions and misperceptions; both call for correction, not rejection. And warm fuzzies provide motivation. Isn’t it better to throw yourself wholeheartedly into healing little girls’ kittens than to do a crap job of drug research? In any case, appreciating warm fuzzies does not imply you assign no value to other people. I would appreciate a link to the location within the FAQ (or elsewhere) where someone explains what we should use as motivation instead of guilt and warm fuzzies. Money and threats come easily to mind, but hardly recommend themselves. As Siskind himself points out, morality is a powerful motivator, and warm fuzzies make it work. If the alternative is a careful and comprehensive cost/benefit analysis, I’d like to see the cost/benefit analysis of the cost/benefit analysis, because I don’t think it will pay back the investment.
We have to make a lot of strong assumptions about humans, assumptions that are not currently accurate, for the more ambitious sorts of utilitarianism to make sense. In the meantime, it seems possible to devise a distributed system with desirable emergent properties out of unreliable components, but you can’t just ignore the components’ limitations. Guilt and warm fuzzies are built in, and any solution involving humans will either make use of them as a feature or, most likely, fail.

3.7: Are you sure it’s ever possible to value other people? Maybe even when you think you are, you’re valuing the happy feelings you get when you help other people, which is still sorta selfish if you think about it.
[…] Someone who uses a guilt-reduction or signaling-based moral system will end up making harmful decisions

I think this means they will not make the best decision. This again assumes we can ignore our limitations. For now, I’d like to see a better explanation of this claim, and especially of why we should treat guilt-reduction and signaling as equally flawed (or a clarification that we should not). Siskind has convinced me that signaling is counterproductive, but not the rest.

3.8: Does this mean morality is equivalent to complete self-abnegation?
No. Assigning nonzero value to other people doesn’t mean assigning zero value to yourself. I think the best course of action would be to assign equal value to yourself and other people, which seems nicely in accord with there being no objective reason for a moral difference between you. But if you think other people are only one one-thousandth as important as you are, that won’t change the rest of this FAQ except requiring you to multiply certain numbers by a thousand.

Assuming any numeric measure of relative value brings problems of its own. What thought does Siskind hope to express with this?

4.21: Philosophers call this position “consequentialism”, and when it’s phrased in a slightly different way the majority of the human race is dead set against it, sometimes violently.
This makes me curious what phrasing he has in mind, but he doesn’t say. Cheater!

4.3: [trolley problem]
The trolley problem is the famous lifeboat scenario where you must choose to save 5 people by killing 1, or by inaction allow the 5 to die. I wrote a long rant about what the story leaves out, but it is beside the point. Short version: the strong utilitarian approach makes unrealistic assumptions about humans. The person making a choice must know what choices she faces, know a great deal about the outcomes associated with the choices, and process this information quickly, reliably, and accurately. Humans need a great deal of support and training before they can approach this ideal. They may never truly achieve it.
What is the trolley story really trying to say? People do exist whose decisions change the fate of many persons, and indirectly decide who will live and who will die, who will suffer and who will be spared. When designing a highway overpass, engineers must balance safety, cost, and usability. They need not push a fat man in front of a train, but indirectly they do decide who lives and dies.
But the trolley story distracts us from the real lesson. The problem is, the real lesson is pretty subtle, almost ambiguous. There are cases where people will fail by solving small-scale problems on an ad hoc basis, all of which could dissolve in an elegant solution devised by some brilliant insight. Can we just assume that brilliant insight will save the day? What seems like brilliant insight to imperfect humans may cause unforeseen problems. Perfection is not an option, and sometimes we must learn by trial and error, by recovering from a mistake instead of foreseeing it. Given that, our reluctance to push the fat man makes sense.

5.3: Can utilitarianism do better?
Yes. Preference utilitarianism says that instead of trying to maximize pleasure per se, we should maximize a sort of happiness which we define as satisfaction of everyone’s preferences. […] In theory this is difficult, since it’s hard to measure the strength of different preferences, but the field of economics has several tricks for doing so and in practice it’s usually possible to come up with an idea of which choice satisfies more preferences by common sense.

I’d like a citation for the economists’ tricks. Maybe my bias is showing, but I get a bad smell from this. Weren’t we complaining about imperfect metrics above, in the section about guilt and warm fuzzies? This seems like hand-waving.
Speaking of economics, doesn’t that “satisfaction of everyone’s preferences” sound sort of familiar? Isn’t that how economists talk about competitive markets? People who satisfy other people’s preferences get rewarded by having their own preferences satisfied, and your preferences can get lots of satisfaction by innovating new ways to satisfy preferences, and preference satisfiers that don’t satisfy preferences well go out of business. Maybe we should consider using mechanisms that reveal persons’ preferences, and help them learn better ways to satisfy them? A big, error-tolerant, ever-learning, pluralistic, inclusive, socially evolving distributed system of preference seeking, doesn’t that sound better than “assume a utilometer and a supercomputer the size of the Sun”?

5.31: Can utilitarianism do even better than that?
[…] use ideal preferences – what your preferences would be if you were smarter and had achieved more reflective equilibrium.

I’m not sure whether there is a better way to say that or if it is just inevitably creepy. Maybe “anticipate what your preferences would be under ideal conditions and after you gained more wisdom, experience, and learning?” Still sounds ominously like “We’ve decided this is what you like now, if you want to continue liking things!” What do we owe people who want to opt out? If you can teach me to genuinely prefer something that improves my long-term prospects, that’s great. If you’re just going to decide what I ought to prefer and give it to me regardless, in spite of my conscience or my actual preferences, that is not a selling point.

6.2: But what do you mean when you say these sorts of heuristics aren’t not always true?
Note the typo in the subheading.
This implicitly solves the problem of explaining what it means for a norm to be true. For Siskind, any norm implicitly claims “the world will improve if you X.” So a norm is true when following it will indeed improve the world, and false when, in a given circumstance, following it actually makes the world worse off.

6.3: So it’s okay to lie or steal or murder whenever you think lying or stealing or murdering would make the world a better place?
When society goes through the proper decision procedures, in most cases a vote by democratically elected representatives, the government is allowed to steal some money from everyone in the form of taxes.

How do we test the properness of the “proper decision procedures”? This limitation seems to concede our inability to make pure utilitarianism work, and therefore the need to supplement it with other restrictions, principles, and institutions.
What does utilitarianism say about the role of government? Are we to believe that the U.S. Federal government reliably produces the best possible outcome, and satisfies everyone’s preferences, actual or ideal? Or does pure utilitarianism demand that we replace (or at least vigorously reform) the government? Or do we get to use a lower standard in that case, picking the best of a bad lot?

6.4: So is it ever okay to break laws?
I think civil disobedience – deliberate breaking of laws in accord with the principle of utility – is acceptable when you’re exceptionally sure that your action will raise utility rather than lower it.
To be exceptionally sure, you’d need very good evidence and you’d probably want to limit it to cases where you personally aren’t the beneficiary of the law-breaking, in order to prevent your brain from thinking up up [typo in original] spurious moral arguments for breaking laws whenever it’s in your self-interest to do so.

Or, like King and Gandhi, make sure that the authorities (and as many other persons as possible) know when, where, and why you will be breaking the law. Gandhi technically benefited from his illegal manufacture of salt, but no one cares. The point of Gandhi-style civil disobedience is to force the authorities to arrest you, hopefully making them look bad in the process, or making them look feckless if they decline to arrest you, gaining publicity and support for your cause in either case. If you’re engaged in credible civil disobedience, no one will suspect you of trying to get away with something.  
Disobeying the law surreptitiously can sometimes have beneficial political consequences, but in that case we do not refer to it as civil disobedience. Two potheads smoking a joint in their living room are not engaged in civil disobedience. Maybe that is civil indifference.

6.5: What about human rights? Are these also heuristics?
I should probably say something here, as I don’t completely agree and I think this section could be improved. I’m having trouble delivering the goods without turning it into a long-winded rant. I think the basic problem is that different people define “rights” differently, and talk past each other. We can join Siskind in talking about rights his way, and talk past his opponents, or oppose him and talk past him. How do we get the two sides to engage?

6.6: Summary?
Rules that are generally pretty good at keeping utility high are called moral heuristics. It is usually a better idea to follow moral heuristics than to calculate utility of every individual possible action, since the latter is susceptable to bias and ignorance. When forming a law code, use of moral heuristics allows the laws to be consistent and easy to follow. On a wider scale, the moral heuristics that bind the government are called rights. Although following moral heuristics is a very good idea, in certain cases when you’re very certain of the results – like saving your friend from an axe murderer or preventing someone from shouting “Fire!” in a crowded theater – it may be permissible to break the heuristic.

Susceptible is spelled with an ‘i’.
Actually, I can’t improve much on this, but I want to point out how much it conflicts with the tone of the rest of the FAQ. The FAQ embraces inconsistency and complexity (and risk?) in pursuit of the best possible outcome. This seems to admit that we might not succeed in achieving the best possible outcome, that we might need to settle for second best, learning from our mistakes, experimenting, and covering our asses (“this really ought not to explode, but just to be safe, let’s go sit in the blockhouse during the test flight.”). Maybe we could approximate morals by thinking of heuristics that limit all of us, not just the government. Does that sound too much like deontology? I don’t think so.

7.9: Doesn’t utilitarianism sounds a lot like the idea that “the end justifies the means”?
The end does justify the means. This is obvious with even a few seconds’ thought, and the fact that the phrase has become a byword for evil is a historical oddity rather than a philosophical truth.

This is philosophical macho flash. Innocuous means do not need justification. I suppose that anyone who would make exceptions for emergencies is indeed embracing this principle in a consequentialist way. But isn’t there a more convincing way to say it?
Does this section contain the most serious challenges to and counter-arguments against consequentialism? What controversies split the ranks of consequentialists? The problems listed here mostly seem like softball pitches.

8.1: If I promise to stay away from trolleys, then does it really make a difference what moral system I use?
Yes.
The majority of modern morality is a bunch of poorly designed attempts to look good without special consideration for whether they screw up the world. As a result, the world is pretty screwed up. Applying a consequentialist ethic to politics and to everyday life is the first step in unscrewing it.
The world has more than enough resources to provide everyone, including people in Third World countries, with food, health care, and education – not to mention to save the environment, prevent wars, and defuse existential risks. The main thing stopping us from doing all these nice things is not a lack of money, or a lack of technology, but a lack of will.
Most people mistake this lack of will for some conspiracy of evil people trying to keep the world divided and unhappy for their own personal gain, or for “human nature” being fundamentally selfish or evil. But there’s no conspiracy, and people can be incredibly principled and compassionate when the opportunity arises.
The problem is twofold: first that people are wasting their moral impulses on stupid things like preventing Third World countries from getting birth control or getting outraged at some off-color comment by some politician. And second that people’s moral systems are vague and flexible enough that they can quiet their better natures by saying anything inconvenient or difficult isn’t really morally necessary.
To solve those problems requires a clear and reality-based moral system that directs moral impulses to the places they do the most good. That system is consequentialism.

Different people have different candidate explanations for why the world is screwed up.  Has the FAQ really made the case that it’s because people aren’t sufficiently consequentialist? Should it? 

“Lack of will” seems wrong. Lack of a faster, more effective, more creative strategy? (Food, health care, education, etc. have been improving for a long time, so it is the pace of improvement, if anything, that earns our complaints.) Siskind is trying to say that morality motivates people powerfully, and so it seems possible that the pace of change could accelerate if we found ways to coordinate the moral impulses of all humanity. I think consequentialism can help with this task, but it can’t provide necessary or sufficient conditions on its own.

8.2: How can utilitarianism help political debate?
In an ideal world, utilitarianism would be able to reduce politics to math, pushing through the moralizing and personal agendas to determine what policies were most likely to satisfy the most people.
In the real world, this is much harder than it sounds and would get bogged down by personal biases, unpredictability, and continuing philosophical confusions. However, there are tools by which such problems could be resolved – most notably prediction markets, which can provide a mostly-objective measure of the probability of an event.
There are many cases in which the consequentialist thing to do is to be very wary of consequentialist reasoning – for example, we know that centrally planned markets have bad consequences, and so even if someone provided a superficially compelling argument for why a communism-type plan might raise utility, we would have to be very skeptical. But a more developed science of consequentialist political discourse would aid us, not hinder us, in making those judgments.

I agree. But again the tone has changed, from casual confidence to a more realistic skepticism. How many ideal attributes must the ideal world possess before we can reduce politics to math? How many compromises must we make before the entire concept shrinks to become just another tool in the toolbox, along with empiricism, pluralism, skepticism, antifragility, evolution, and humility?

Links in the FAQ should not lead to a gated community with a lock on the gate.

Does the FAQ accomplish its goal? As an argument against deontology, I think it works. But does it show us how to “do it better”? Not really. Consequentialism is part of a social philosophy; it can’t do the job on its own.