Moral decision-making doesn’t exist in a vacuum. It’s highly dependent on environmental pressures – like whether we have to think “fast” or “slow,” or how much we identify with the people involved. This piece is adapted and excerpted from a virtual presentation given to Temple Beth Israel in Eugene, Oregon by moral psychologist Dr. Joshua Greene on January 16, 2019.
Ruhi Rubenstein: Let’s start with this text from the Babylonian Talmud, so it’s about 1500 years old. It comes from Masechet Gittin, the tractate dealing with divorce, actually, but as those of you who have any familiarity with Talmud know, the Talmud tends to go off on tangents, so the particular issue we’re looking at here has nothing to do with divorce. It has to do with the ambivalence – and I’ll say more after we look at this text – that an in-group may feel towards an out-group. And in this case, the in-group is the often minority Jewish population that the Talmud’s voices consider to be the “us.” So this text says:
“One does not protest against poor gentiles who come to take gleanings, forgotten sheaves, and the produce in the corner of the field, which is given to the poor […] for the sake of the ways of peace.” Our Rabbis taught: “We sustain the non-Jewish poor with the Jewish poor, visit the non-Jewish sick with the Jewish sick, and bury the non-Jewish dead with the Jewish dead, mipnei darchei shalom, for the sake of peace.” (Gittin 61a)
I find this text particularly fascinating, because, as I said, I think it reflects a certain ambivalence. The text is not saying, “Everybody’s equal and that’s just the right thing to do.” The text is saying, “Well, you might have thought that it would be natural that we would not extend ourselves toward the non-Jewish poor, and that we would not visit the non-Jewish sick, and we would not bury the non-Jewish dead, but we do these things. Or if a non-Jew wants to partake of our tzedaka, we don’t tell them they can’t, for the sake of the ways of peace.”
And whenever this construction, mipnei darchei shalom, appears in Talmud, it’s a specific value about how to live in a pluralistic society in a way that we don’t wind up harassing our neighbors, and our neighbors don’t wind up harassing us.
So there is a distinct sense in this text that Jews are an “us,” and Gentiles are a “them,” and yet, for the sake of being able to coexist in society together, there are certain concessions that we make to each other. We take care of each other, and it’s not necessarily because we assume it’s the right thing to do, but we assume that it will help us live in a more convenient way with less hassle. And this is a particularly interesting illustration of the nexus where individual interest, communal interest, and universal interest all come into play together. It’s in every Jew’s individual interest for there not to be another pogrom, right. It’s not going to be good for any of the Jews if the Gentile neighbors come attacking. (A pogrom is the Russian and Yiddish term for when the Christian neighbors would be incited to massacre their local Jewish neighbors.)
So there is an individual interest to live in peace with your neighbors, but it is also in the interest of our neighbors that we all treat each other more or less equally, and that we all extend the same kindness and generosity to each other. So I understand that our talk tonight will be dealing with some of the problems of how the individual relates to the commons, and how we construct ideas of us vs. them, which is certainly an older problem than the Talmud, even. But this text, I hope, illustrates some of the tensions inherent in that question, and some of the ways that question can be moot – that the interests of us are the interests of them are the interests of the individual. But it is rarely that simple. And there is often some ambivalence. So I hope that has been an appropriate introduction for you, Dr. Greene, and I’m going to invite Dr. Slovic to introduce you.
Paul Slovic: Dr. Joshua Greene is one of the world’s authorities on moral psychology. And what I really like about his work is his blending of experimental research with philosophical validation of these same ideas, and their application to problems that we face in the world. He’ll do that tonight. So I think we have a real treat in store.
He is a professor of psychology and a member of the Center for Brain Science at Harvard University. For over a decade, his lab has used behavioral and neuroscientific methods to study moral judgment, focusing on the interplay between emotion and reason in moral dilemmas. His more recent work examines how the brain combines concepts to form thoughts, and how our thoughts are manipulated by our reason and imagination. Other interests include conflict resolution and the social implications of advancing artificial intelligence. He is the author of the book “Moral Tribes: Emotion, Reason and the Gap Between Us and Them.” So I turn it over to you, and thank you very much, Josh.
Joshua Greene: Well thanks, both of you, for your really, really kind and thoughtful introductions.
I’m now going to give a very brief, highly opinionated, and very selective overview of human morality. I’ll pick out some points of interest – places where our morality seems to be failing us, and other ways in which it seems to be serving us well. And that’s why I call this talk “Human Morality: Features and Bugs.”
Our story begins with a famous parable, Garrett Hardin’s Tragedy of the Commons. We have these sheep herders, and these herders think to themselves, “Should I add more sheep to my herd?”, and they think, “Well, they’re just grazing on this common pasture, so it’s no cost to me, and when I have more sheep I can take more sheep to market and make more money. That’s a good thing.” Pretty soon there are so many sheep that the pasture can’t support any of them, and the pasture starts to die.
So I think the Tragedy of the Commons nicely illustrates what I think of as the fundamental tension of social existence, which is the tension between me and us, between what’s good for the individual and what’s good for the group, or tribe. In a nutshell, morality is about solving the problem of finding a way for people to be not just about me but at least to some extent about us, or about other people and what they care about.
So the question is: how does this work on a psychological level, inside the head? So my preferred metaphor for the human mind’s decision making is a camera with manual and automatic settings. I have a camera that is able to adapt to almost any situation and take a usable picture, but on those rare occasions when I want to do something fancy, I can put the camera in manual mode and adjust the F-stop and everything by hand, and depending on exactly what I want and exactly what the circumstances are, I can get exactly the photograph that I want.
You can think of these automatic settings versus the manual mode as essentially being like intuition and reason – or, you might say, certain kinds of “fast” emotions on the one hand, and reason on the other. Many of you will be, or at least some of you may be, familiar with the landmark work of Daniel Kahneman and Amos Tversky, and Kahneman’s book “Thinking, Fast and Slow.” It’s the same idea. We have a gut reaction to things that is like those automatic settings. And this is good for making some decisions in typical situations – they give you the right answer most of the time.
But then we also have a capacity for deeper reasoning, which you can think of as our “manual mode.” And that’s especially good for situations in which our gut reactions are not necessarily well-trained to give us the right answer. It allows us to think things through in a more rigorous kind of way. And so I’m going to talk a little bit about what we’ve learned about how our “automatic settings” – our intuitive emotional responses – and our “manual mode” reasoning operate, sometimes cooperate, and are sometimes at odds with each other in moral thinking.
So, the laboratory version of the Tragedy of the Commons is called the public goods game. Four people come into the lab and they’re each given $10 that they can keep or put into a common pool. Whatever goes into the common pool gets doubled by the experimenter and equally divided among all four players. So if you do the math, you realize that if you only play this once, no matter what the other players do, you come out ahead by keeping your money. But if you want to maximize the payoff for the whole group, then you put all of your money in, since that maximizes the amount that gets productively doubled when everything goes into the pot.
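To make that arithmetic concrete, here is a minimal sketch of the payoffs under the parameters described above (a $10 endowment, contributions doubled, four players); the function and variable names are just illustrative, not taken from the actual experiment.

```python
# Minimal sketch of the public goods game described above:
# four players, a $10 endowment each, contributions doubled and split evenly.
# The names and structure here are illustrative, not from the original study.

ENDOWMENT = 10
MULTIPLIER = 2
N_PLAYERS = 4

def payoff(my_contribution, others_contributions):
    """Return one player's total payoff for a single round."""
    pool = (my_contribution + sum(others_contributions)) * MULTIPLIER
    share = pool / N_PLAYERS
    return (ENDOWMENT - my_contribution) + share

others = [10, 10, 10]          # suppose the other three contribute everything
print(payoff(0, others))       # keep your $10:  10 + 60/4 = 25.0
print(payoff(10, others))      # contribute it:   0 + 80/4 = 20.0
```

Every dollar you contribute comes back to you as only fifty cents, so keeping your money is always better for you individually, even though the group as a whole earns the most when everyone contributes.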
We asked people to make a decision about whether to do the “me” thing, which is to keep their money, or the “us” thing, which is to put their money into the common pool so that it can grow for everybody. We had a group that could take as long as they wanted to answer, another where they had to answer in less than 10 seconds (the idea there was to promote a kind of intuitive response), and a third where people had to take at least 10 seconds. So we slowed people down to try to get a reflective response.
So we found that when you put people under time pressure, they contribute more. And when you make people go slow – our “manual mode” – they contribute a bit less.
So it seems that for any group we’ve tested, when you try to get people’s intuitive responses, they tend to be prosocial, giving to the group, whereas if you make people slow down, making them rely more on “manual mode,” then they’re more likely to be selfish.
You might look at this and say, “Ah, so this means that people are innately good, or intuitively good,” but I think it’s actually more complicated than that. People who say they trust the people they engage with in daily life contribute more when they think fast. People who say, “I don’t trust the people I interact with in my daily life,” don’t show any difference in their contributions between when they go fast and when they go slow.
So our social emotions are really designed to help us solve the Tragedy of the Commons. We may have positive feelings that motivate us to think about the interests of others – so if these herders on the pastures are all friends, or all members of the same tribe emotionally and not just formally, they’ll say, “Sure, I’m happy to restrain my herd.” But there is also a negative counterpart to this. If all the other herders were to restrain themselves and I were to be greedy, then I would feel guilty, right. So we have these emotional “carrots and sticks” that we apply to ourselves, and that motivate other people. If you restrain your herd, you’ll have my gratitude. And if you don’t, then you’ll have my anger, my contempt, perhaps even my distrust.

All of these seemingly unrelated feelings are, in different ways, enabling people to live together, intuitively motivating them to think not just about themselves but about other members of their group. So those are our “automatic settings.”
The size of this circle of cooperation varies, and when it is too narrow, this is a way that our moral intuitions can fail us in the modern world. We fail to take advantage of the possibility for cooperating with a larger group of people when our instincts tell us to circle the wagons and only trust the people who we have a tighter kind of connection with.
But what about “manual mode”? The best way to see that is when it’s in tension with our automatic settings, and that brings me to a topic near and dear to my heart – “trolleyology.”
In this classic moral dilemma, there is a trolley headed toward five people, but you can hit a switch that turns the trolley away from the five people and on to one. If you ask most people if it’s OK to hit the switch so that the trolley only runs over one person instead of five, they’ll say “yes.” So what’s going on there? The theory is that people have a conscious, explicit manual mode thought that says, “Well, it’s better for five people to be alive than one person.” In this case, there’s not a very strong emotional reaction to the action, and as a result people tend to say yes. And you can call this a utilitarian response. But that just basically means going with the consequences.
Now this other dilemma, known as the footbridge dilemma, goes like this: the trolley is headed towards these five people, and this time the only way you can save them is to push this big guy off of a footbridge. He will land on the tracks and get crushed by the trolley and die, but the other five people will be safe. And so the question is: now is it OK to save the five by killing one?
Again, everybody has the conscious thought that it would make a certain kind of sense to do this: saving five lives is better than saving one. But you’ve got an emotional response to this action of pushing this guy off the bridge that makes you say no, it’s wrong. And then you have this manual-mode reasoning that says, “But isn’t it better to save more lives rather than fewer?” And those things conflict in this moral dilemma.
We had people think about these cases, and sometimes we’d say, “Don’t tell us what’s right or wrong, just tell us what would produce the best consequences.” Other times we’d say, “Don’t tell us what’s right or wrong, just tell us which action you would feel worse about.” And then in other cases we’d ask, “OK, tell us, all things considered, which is the right thing to do?”
When we ask people to give an assessment about consequences, you see activity in a part of the brain called the dorsolateral prefrontal cortex, an area generally associated with reasoning and effortful thinking.
When we ask people to just make an emotional assessment, we see slightly increased activity in the amygdala, which kind of sounds an emotional alarm call. And when people made those assessments, the stronger the signal in their amygdala, the worse they felt about the action.
During an “all things considered” judgment, you see more activity in the ventromedial prefrontal cortex. And when we look at how these signals relate to each other, it looks like the amygdala is signaling, “No, don’t do that horrible thing,” and the dorsolateral prefrontal cortex is essentially saying, “But wait, isn’t it better to save five lives instead of one?” They are both bearing down on the ventromedial prefrontal cortex. And that’s what enables you to make this sort of all-things-considered judgment.
One device I like is the veil of ignorance, as explored by the philosopher John Rawls. It’s a kind of impartiality, where you’re treating others the way you want to be treated. He incorporates that idea into a more formalized decision problem, where he says that a just society is one that you would choose to live in if you were being selfish, but didn’t know who in that society you were going to be – male or female, talented or untalented, etc. Now apply that to the footbridge problem: imagine you don’t know which of the six people involved you are going to be – the person on the footbridge or one of the five on the tracks. Which option would you choose? Researchers found that doing the veil of ignorance exercise made people more impartial, and they were more likely to select the utilitarian option of killing one person to save five, since that option would give them a 5 out of 6 chance of surviving.
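To spell out where that 5-out-of-6 figure comes from, here is the arithmetic as a small sketch, assuming that behind the veil you are equally likely to be any of the six people involved.

```python
# A minimal sketch of the veil-of-ignorance arithmetic described above.
# Assumption: behind the veil you are equally likely to be any of the
# six people involved (the one on the footbridge or the five on the tracks).

people_involved = 6

# If the one person is pushed, the five on the tracks survive.
p_survive_if_pushed = 5 / people_involved        # ~0.83
# If nothing is done, only the person on the footbridge survives.
p_survive_if_not_pushed = 1 / people_involved    # ~0.17

print(f"Chance of surviving if pushed:     {p_survive_if_pushed:.2f}")
print(f"Chance of surviving if not pushed: {p_survive_if_not_pushed:.2f}")
```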
Having people go through this philosophical thinking exercise makes them less tribalistic, and more likely to think broadly about the greater good, even if that means going outside of the tribe.
So I think the modern moral problem is the “tragedy of common-sense morality,” which is like the Tragedy of the Commons, but one level up. The original tragedy asks how you get a bunch of individuals with selfish tendencies to live together, at least somewhat happily, as a group. The modern moral problem is about a bunch of different “us”-es who need to live together in this larger world.
(This post is part of Sinai and Synapses’ project Scientists in Synagogues, a grass-roots program to offer Jews opportunities to explore the most interesting and pressing questions surrounding Judaism and science. This was from a talk delivered virtually at Temple Beth Israel in Eugene, Oregon).