In The Moral Mind: How five sets of innate intuitions guide the development of many culture-specific virtues, and perhaps even modules (1) Haidt describes moral foundations as analogous to “innate ‘taste buds’ of the moral sense,” saying “The taste buds on the tongue gather perceptual information (about sugars, acids, etc.) whereas the taste buds of the moral sense respond to more abstract, conceptual patterns (such as cheating, disrespect, or treason). Nonetheless, in both cases, the output is an affectively valenced experience (like, dislike) that guides subsequent decisions about whether to approach or avoid the object/agent in question.”
I think that’s fine, as far as it goes, but the taste bud analogy does not offer a complete and accurate characterization of moral foundations, or of morality. The analogy is too passive, explaining only our likes and dislikes – our moral “taste palate,” if you will. Moral foundations, and morality, are much more than that. They’re also used proactively by our Rider/Lawyer in our attempts to persuade others to see and do things the way we think they should be seen and done. Moral foundations are the building blocks not only of our likes and dislikes, but also of our attempts to effect change, and to convince others that our way is best. Haidt’s second principle of moral psychology is “moral thinking is for social doing.” Moral foundations are tools of reason and “social doing”; they are the weapons of the culture war.
But there’s an important clarification that must be made regarding “reason,” one that is critical to our understanding of the political divide and the role moral foundations play in it.
It is well documented that human reason is fallible, and that’s putting it mildly. We like to think that reason is for finding the truth, but in fact reason is quite poor at finding truth. The exceptions are the rare circumstances in which people work together in groups to test ideas, and the members of the group have no vested interest in the outcome – roughly speaking, the scientific method and the scientific community.
A recently developed theory by social science researchers Hugo Mercier and Dan Sperber postulates that “Reasoning was not designed to pursue the truth. Reasoning was designed by evolution to help us win arguments.”
In their paper Why do humans reason? Arguments for an argumentative theory, Mercier and Sperber say that “The evidence reviewed here shows not only that reasoning falls quite short of reliably delivering rational beliefs and rational decisions. It may even be, in a variety of cases, detrimental to rationality. Reasoning can lead to poor outcomes, not because humans are bad at it, but because they systematically strive for arguments that justify their beliefs or their actions. This explains the confirmation bias, motivated reasoning, and reason-based choice, among other things.”
Their ideas are summarized in The Argumentative Theory – A Conversation with Hugo Mercier, on Edge.org. Here’s an excerpt:
“And the beauty of this theory is that not only is it more evolutionarily plausible, but it also accounts for a wide range of data in psychology. Maybe the most salient of phenomena that the argumentative theory explains is the confirmation bias. Psychologists have shown that people have a very, very strong, robust confirmation bias. What this means is that when they have an idea, and they start to reason about that idea, they are going to mostly find arguments for their own idea. They’re going to come up with reasons why they’re right, they’re going to come up with justifications for their decisions. They’re not going to challenge themselves. And the problem with the confirmation bias is that it leads people to make very bad decisions and to arrive at crazy beliefs. And it’s weird, when you think of it, that humans should be endowed with a confirmation bias. If the goal of reasoning were to help us arrive at better beliefs and make better decisions, then there should be no bias. The confirmation bias should really not exist at all. We have a very strong conflict here between the observations of empirical psychologists on the one hand and our assumption about reasoning on the other. But if you take the point of view of the argumentative theory, having a confirmation bias makes complete sense. When you’re trying to convince someone, you don’t want to find arguments for the other side, you want to find arguments for your side. And that’s what the confirmation bias helps you do. The idea here is that the confirmation bias is not a flaw of reasoning, it’s actually a feature. It is something that is built into reasoning; not because reasoning is flawed or because people are stupid, but because actually people are very good at reasoning — but they’re very good at reasoning for arguing.” (2)
I believe that moral foundations define more than just our inner elephant. They are also the cognitive tools used by the rider/lawyer for reasoning and arguing – the constructs upon which our logical arguments are built. They are the tools the conscious rider/lawyer uses to rationalize the elephant’s visceral reaction of like or dislike, and the building blocks of the arguments the rider uses to try to persuade others to see things our way. Paraphrasing Haidt’s second principle of moral psychology: moral foundations are for social doing.
(1) Haidt, J., & Joseph, C. (2007). The moral mind: How 5 sets of innate moral intuitions guide the development of many culture-specific virtues, and perhaps even modules. In P. Carruthers, S. Laurence, and S. Stich (Eds.) The Innate Mind, Vol. 3. Available on the Publications page of Haidt’s web site MoralFoundations.org, and in MS Word, here.
(2) The Argumentative Theory – A Conversation with Hugo Mercier, on Edge.org