Abstract concepts are too varied to consider collectively in one paper. My strategy here will be to consider one class of abstract concept within an empiricist framework. These are the moral concepts. More specifically, I want to consider the fundamental concepts of (morally) good and bad.
When we say that something is morally bad, we do not seem to be commenting on its appearance. We are not suggesting that it looks or tastes a particular way. There is an obvious retort to this. Moral truths reside not in the world, but in us. They are evaluations, not observations. This is, in a certain sense, exactly right, but it only highlights the problem. How can empiricism accommodate judgments that are not observationally grounded? The answer is that moral judgments are observationally grounded. They are grounded in observations of ourselves. To judge that something is morally bad is to recognize an aversive response to it. Morally relevant events cause emotional responses in us. We recognize that some event is morally significant by emotionally reacting to it in a particular way or by recognizing that it is similar to events that have stirred our emotions on other occasions. Emotions, I will suggest, are perceptions of our bodily states. To recognize the moral value of an event is, thus, to perceive the perturbation that it causes.
Minus the bit about the body, this is essentially what Hume (1739) proposed. He said that to judge that something is morally bad is to experience disapprobation, and to judge something good is to experience approbation. The resources of contemporary psychology provide an opportunity to build on Hume's suggestion. First, we know a lot more about emotions now, and second, we have tools for testing whether Hume's proposal is descriptively accurate.
The terms "approbation" and "disapprobation" are really just placeholders. It is unlikely that they name particular discrete emotions. There is no distinctive feeling of moral disapproval or approval. Recent evidence suggests that moral emotions may vary as a function of the forms of conduct that elicit them. This is most evident in the case of negative moral judgments. When we judge that an act is morally bad, the emotional experience depends on the nature of the act.
A striking demonstration of this owes to Rozin et al. (1999), who extended the work of Shweder et al. (1997). Shweder et al. had argued that moral rules fall into three different categories. Some involve autonomy. These are rules that pertain to individuals and their rights. Prohibitions against harming or stealing are autonomy prohibitions, because their violation is an affront to the rights of individuals. Other moral rules involve community. These pertain to the social order, including issues of ranking within a social hierarchy. Disrespect for the elderly, for social superiors, and for public property are all crimes against community. Finally, there are moral rules pertaining to what Shweder et al. call "divinity." These rules, which are less prevalent in secular societies, involve the divine order. Crimes of religious impurity are the paradigm cases of divinity rule violations. In secular societies, there are residual rules of divinity. Many of these concern sexual propriety. To engage in bestiality, for example, makes one impure. We tend to think of some acts as "unnatural." In general, I think rules of divinity can be thought of as rules that protect the natural order. Unnatural acts may not harm individuals or community resources; they are an affront to nature itself.
Rozin et al. used this taxonomy to determine whether violations of different kinds of rules elicit different emotions. They presented subjects with descriptions of a variety of kinds of morally questionable conduct and asked them to report how they would feel towards the perpetrators. The pattern of responses was robust. Subjects were angered by those who violated autonomy rules, contemptuous towards those who committed crimes against community, and disgusted by those who committed divinity transgressions (such as people who performed deviant sexual acts). They concluded that there is a mapping from transgression-types to emotional responses. They call this the CAD Hypothesis, for Community/Contempt, Autonomy/Anger, and Divinity/Disgust.
Rozin et al.'s finding might be extended in a variety of ways. They show that emotions vary as a function of transgression-types. It is also quite clear that emotions vary as a function of who commits the transgression. In their study, the perpetrators are strangers. What happens when the perpetrator is the self? I think there are likely to be systematic interactions between transgression- and transgressor-type. If you violate an autonomy rule, you may feel guilty. If you violate a rule of community, you may be more likely to feel ashamed. When you violate a divinity rule, you may feel a combination of shame and self-directed disgust. There may also be effects of victim identity in the case of autonomy and community norms. If a stranger is the victim of a transgression, reactions may be less intense than if the self or a loved one is a victim. I would guess, however, that these differences are more quantitative than qualitative. It seems to be a distinctive feature of morality that we respond emotionally in cases in which we are not involved. This is the third-party nature of moral response. I am angry if one stranger harms another, even if I am unaffected.
The preceding observations can be summarized by saying that moral disapprobation takes on various forms. It is context sensitive. The same may be true for moral approbation. We may feel gratitude or admiration when a stranger does something good, and we might feel pride or self-righteousness when we do good ourselves. Disapprobation and approbation refer to ranges of emotions. In a word, they are "sentiments." Sentiments can be defined as dispositions to experience different emotions in different contexts (Prinz, 2004). For example, the sentiment of liking ice cream may involve feeling happiness when ice cream is obtained, sadness when it is unavailable, and craving when it comes to mind. Disapprobation and approbation are sentiments that can be defined as dispositions to experience a range of emotions as a function of context.
The Rozin et al. study goes some way toward supporting this theory. It shows that people naturally and systematically associate emotions with moral transgressions. Further evidence for the role of emotions in moral conceptualization can be found elsewhere. Haidt et al. (unpublished manuscript) asked subjects to indicate whether they found certain forms of conduct morally objectionable. For example, they asked subjects to imagine a situation in which a brother and sister engage in consensual incest using effective birth control. When subjects reported that this was wrong, the experimenters asked them to justify that evaluation. It turns out that people have difficulty providing reasons, and when reasons are provided they can easily be short-circuited. When subjects replied that incest can lead to birth defects, the experimenters reminded them that birth control was used. Eventually subjects gave up on reasons and declared that incest is "just wrong," or wrong because it is disgusting. Emotions seem to drive their judgments. I suspect that much the same results would be obtained if subjects were asked about murder rather than incest. If asked why murder is wrong, most people would find it hard to provide an answer. If pushed, one might contrive to say that murder is wrong because it violates the victim's rights, but it would be hard for most people to articulate why it is wrong to violate rights, much less explain what rights are or where they come from. Some people might say that murder is wrong because one wouldn't want to be killed oneself, but this answer only explains why it's prudent to outlaw murder. Moreover, those who appeal to their own well-being when they justify prohibitions against murder are implicitly appealing to their emotions. We all think, "If someone tried to kill me, I would be outraged." Trained philosophers may be able to provide reasons for moral claims, but untrained moral reasoning seems to be emotionally grounded.
Further evidence for this conclusion comes from the literature on moral development (see Eisenberg, 2000, for review). The most effective means of training children to be moral are parental love withdrawal, power assertion, and the induction of sympathy through drawing attention to the consequences of action (Hoffman, 2000). All of these methods affect a child's emotional state. Love withdrawal is especially effective. It induces sadness in children by convincing them that they may lose the affection and support of their caregivers. Elsewhere I have argued that this becomes the foundation for guilt (Prinz, 2003). Guilt is a species of sadness directed at one's own transgressions. Eventually the sadness induced by love withdrawal is transferred to the action, and we become sad about what we have done, rather than being sad about the consequences. This transfer can be driven in part by mechanisms as simple as classical conditioning. Other moral emotions may be learned by emulating adults and by having experiences as the victim of moral transgressions. If a child is physically harmed by another child, anger is a natural result. Moral disgust can be transmitted by parental displays of disgust, bolstered perhaps by fear induction. Moral disgust is most likely to be inducible in the context of actions that involve the body, because it derives from nonmoral disgust, which evolved to protect against contamination of the body (Rozin et al., 1993).
The claim that emotions are central to moral development and moral judgment is consistent with findings from neuropsychology. There is evidence that early injuries in areas of the brain associated with the top-down regulation of emotional response (ventromedial prefrontal cortex) lead to antisocial behavior (Anderson et al., 1999). There is also at least one documented case of antisocial behavior resulting from a ventromedial injury in adulthood (Blair & Cipolotti, 2000). These findings suggest that emotional impairments compromise moral cognition. Neuroimaging studies with healthy subjects are consistent with these results (see Greene & Haidt, 2002, for a review). Greene et al. (2001), for example, found that emotional centers of the brain were active when subjects were presented with moral dilemmas (e.g., "Would you kill one person in order to save five?"). Key areas were the superior temporal sulcus, posterior cingulate, and medial frontal gyrus. The authors found that these centers were especially active when subjects imagined themselves directly and physically harming another person, but emotional centers were engaged during all moral reasoning tasks. Moll et al. (2002) gave subjects a series of sentences describing events that were either morally significant, nonmoral but emotional, or neutral. The moral sentences, as compared to the neutral ones, caused greater activation in structures associated with emotional response (superior temporal sulcus, the temporal pole, and orbitofrontal cortex). The nonmoral emotional sentences caused activation in a different range of areas, but this may be a consequence of the fact that they elicited emotions that do not overlap significantly with the moral emotions evoked in the study. Many of the moral sentences concerned harms or injustices, whereas the nonmoral emotional sentences tended to concern things that are physically disgusting. On the CAD hypothesis, harm and injustice elicit anger, rather than disgust.
In any case, the core finding of the study was clear. Moral cognition elicits activation in structures associated with emotion.
Future research is needed to determine which brain structures underlie which aspects of moral experience. The structures identified in the Moll et al. study overlap only partially with the structures identified by Greene et al. Different structures may be involved in different kinds of tasks. Careful work distinguishing different kinds of transgressions is also likely to reveal variable responses in the brain. These early studies support the view that emotions contribute to moral judgments.