As we discover more and more about the brain, will neuroscientific “explanations” about moral behavior become “excuses”? How “free” are we, and how would we even know?
We spoke with Professor Nathaniel Daw of the Princeton Neuroscience Institute and Department of Psychology, about the limits of free will and how we might judge moral behavior with our newly acquired knowledge of genetics.
This program is part of the project “Science Education for Jewish Professionals,” a series of webinars run in partnership with the American Association for the Advancement of Science Program of Dialogue on Science, Ethics and Religion, in partnership with Sinai and Synapses, hosted by Clal – The National Jewish Center for Learning and Leadership, and funded by the John Templeton Foundation.
Rabbi Geoff Mitelman: Welcome, everyone, to the third of three webinars on Science Education for Jewish Professionals. My name is Rabbi Geoff Mitelman, I’m the founding director of Sinai and Synapses, an organization that bridges the worlds of religion and science. We’re housed at CLAL, the National Jewish Center for Learning and Leadership. We’re also the hosts of this webinar.
This is a series of three webinars with scientists to be able to explore how Jewish professionals, rabbis, and cantors, and educators can use scientific knowledge in their own work. This is in partnership with the American Association for the Advancement of Science Dialogue on Science, Ethics and Religion. It’s part of a larger initiative that they are running to help clergy people and future clergy use science and understand science in different kinds of ways.
This afternoon we’re going to be talking with Professor Nathaniel Daw, looking at the question “Is neuroscience undercutting moral responsibility?”. Earlier, we did programs and webinars with Professor Sara Seager of MIT, asking “Are we still special if we are not alone?”, and two weeks ago we talked with Professor David De Steno of Northeastern University, asking and talking about the science of G’milut Chassadim. You can see both of those webinars as well on our website and on YouTube.
But in a moment I’m going to turn it over to Professor Daw, who will share about some of his work about neuroscience and the questions of free will, and how much control we actually have over our decisions, and what that means for our own sense of morality. For right now, I want to turn it over to Professor Nathaniel Daw of Princeton University to be able to talk about the question “Is neuroscience undercutting moral responsibility?”.
Nathaniel Daw: Great, well thanks so much for inviting me to do this. I think it could be an interesting conversation because, to be completely honest, questions of free will and morality and so on don’t really come up in my work on a day to day basis, I must say, and so I’m hoping to tell what I do know that might be pertinent to these topics, and then by way of conversation, actually open the issue at hand. So I apologize for dodging the nominal questions here. But responsibility is not something that we normally talk directly about in neuroscience. We do talk about a couple of questions which I think are quite related.
So one is, you know, as biologists, we think of human beings as being evolved and selected. How is it that we’re not, you know, perfect? Why is it that we make what seem to be errors, why don’t we always make the correct decisions or the best decisions? And why, particularly, do we have the experience of doing things that we really know we shouldn’t do, that we know aren’t the best?
And I think to get a feeling for how to answer these questions, and for how to talk about them, which I hope are related to the question that I’m invited to discuss here today, I’d like to talk about, you know, the more basic question of what we know from neuroscience about where our behaviors come from, where decisions come from. And I think to preview the answer, the answer is lots of places, there’s lots of competing impulses that guide our movements and our actions, and it is in this competition, I think, that these issues of control and perhaps ultimately, responsibility play out.
So another thing we don’t often do in the sciences is consult ancient texts, but I thought I might bring one up in honor of the circumstances. So this is a quote from Plato, which I think is quite relevant. I think Plato is probably more popular among Christians and Jews, but in any case, he famously spoke of the soul and likened it to a team of horses. And the team of horses, more particularly two of them, I think, one was more noble and good-willed and responsive to verbal commands, and the other was beastly and animalistic, and the job of the soul was to drive these horses and keep them under control. And of course this is an old idea which clearly goes back, at least, to Plato, but I think it continues to dominate our thinking, and even our introspection, about these kinds of issues. And I think there’s a certain truth to it, and I’ll try to unpack a little bit of what we know from neuroscience about this sort of multiplicity of behaviors and of control.
So I think that the single message of neuroscience on this topic, again, is that there really is a multiplicity. So for instance, you know, at the simplest level we have reflexes, right, so if you touch a hot object, your hand will withdraw, your muscles will contract, and you’ll pull away. And that is subserved by events that happen in your spinal cord; it actually doesn’t even make it up to your brain. So the sensory nerve triggers a motor nerve that retracts your hand without anything in your brain ever becoming involved in this circuit. And so there are low-level behaviors, even ones that appear to be quite purposive and, you know, reasonable, that are not in any sense produced by what you would call a decision, something deliberative or volitional, right. And indeed psychologists have tried to distinguish, partly because they’re interested in studying animals, what we would really want to know of a behavior to describe it as really volitional or deliberative or purposive, teleological in some sense.
And so one definition that’s been suggested of this, and this is due to Tony Dickinson, a famous psychological theorist, is that there are sort of two things we would ask if you see a rat pressing a lever, let’s say, or even a person, you know, pressing a button. To view that as really a decision, as opposed to something that’s just apparently purposive, like the example of a reflex, it should depend on two things.
One is that it depends on the animal’s or the person’s knowledge of the contingency between the action and the outcome. So I know that by pressing this button, I get a Coke out of the Coke machine. Or the animal knows, demonstrably, that pressing a lever gets him food as opposed to something else.
And secondly, that it’s sensitive to the valuation, or the knowledge of the status of the outcome, like the Coke or the food, as a valued goal of the organism. And this may seem sort of obvious, but lots of behaviors turn out not to meet these tests if you actually probe them. So you can see a rat lever-pressing for food, which appears a perfectly rational thing to do, or you can see someone, you know, opening up their refrigerator to look for food, and if you were to run tests that are meant to screen for sensitivity to the contingency and the valuation, you’ll find that this behavior is more like a reflex than like a choice. That is, it is not sensitive to the value of the outcomes or the goal. So I think this is the kind of experience we have, perhaps in our daily lives, when you open the refrigerator or, you know, check your e-mail sort of automatically, without really thinking about it; you might even do that when you’re not hungry. And that would be an indication that this behavior is more habitual, or reflex-like really, than choice-like.
This exact sort of thing is formalized in tests with animals. So behavioral psychologists for many years, and neuroscientists, have run what is known as devaluation tests with animals. And what they do is you can train, say, a rat to lever-press for cheese, and then you introduce some change in the value of the cheese, so you can, for instance, just feed the animal to satiety so he’s not hungry anymore and he’d refuse the cheese if you gave it to him, and then you can ask if he would lever-press for food that he actually doesn’t want. And if he does lever-press for food he doesn’t want, then again, that’s a sign that this lever-pressing is not, in fact, purposive, it’s not goal directed towards attaining the outcome, it’s not sensitive to the status of the goal as a desired valued outcome, but instead it’s some sort of learned reflex. The animal just learned to lever-press in the presence of the lever, and that behavior has become detached from the consequences.
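The logic of the devaluation test described above can be sketched as a toy simulation. This is my illustration, not a model from the talk: a “habitual” agent acts on a value cached during training and ignores the outcome’s current worth, while a “goal-directed” agent consults the outcome’s value at choice time. All the names and numbers are illustrative assumptions.

```python
# Toy sketch of a devaluation test: habitual vs. goal-directed agents.

def train_cached_value(n_presses=50, reward=1.0, lr=0.1):
    """Model-free training: each rewarded press nudges a cached value
    toward the reward via a simple delta-rule update."""
    v = 0.0
    for _ in range(n_presses):
        v += lr * (reward - v)
    return v

class HabitualAgent:
    """Acts on the value cached at training time."""
    def __init__(self, cached_value):
        self.cached_value = cached_value
    def presses_lever(self, current_outcome_value):
        return self.cached_value > 0   # ignores the outcome's current value

class GoalDirectedAgent:
    """Re-evaluates the outcome as a goal at choice time."""
    def presses_lever(self, current_outcome_value):
        return current_outcome_value > 0

# Training: lever -> cheese, worth 1.0 to a hungry rat.
habit = HabitualAgent(train_cached_value())
goal = GoalDirectedAgent()

# Devaluation: feed the rat to satiety, so cheese is now worth 0.
devalued = 0.0
print(habit.presses_lever(devalued))  # True: keeps pressing, like a habit
print(goal.presses_lever(devalued))   # False: stops, behavior is purposive
```

The point of the sketch is only that the same observable behavior during training (lever-pressing) dissociates after devaluation, which is exactly what the probe is designed to reveal.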
And so, I think, this is an animal model of something we have often experienced in our lives: you know, there’s a certain floor you always go to on the elevator, and so you get in the elevator and you push the button, even though it’s not really where you’re trying to go this time. You catch yourself, you go somewhere else. These are the sorts of habits where behavior becomes automatized, or automatic with repetition, and sort of detached from its actual goals. And that’s distinguished from more deliberative or purposive behaviors, which rats, even the humble rat, also demonstrate. So in other circumstances, rats really will selectively reduce their lever-pressing for food they don’t want, demonstrating that it’s really sensitive to the value of the goal, and similarly for manipulations of the contingency.
So rats, and people, really do display purposive behavior, but not all the time, speaking to the multiplicity of these things. So these habits, the formation of these habits, and the behavior becoming automatic with practice, appear to relate to a chemical in the brain called dopamine. So dopamine arises from a small group of neurons in the sort of base of the brain, sort of where the spinal cord enters, in red here, that have these ascending projections up to other parts of the brain where they deliver this neurochemical dopamine. So neurons communicate with each other using chemicals, and different neurons have different chemicals; these release dopamine. Dopamine is broadly known to be involved in reward and movement, and more specifically, it seems to be the target of most or potentially all addictive drugs. One way or another, they work through juicing up dopamine, releasing dopamine, mimicking dopamine or preventing the reuptake of dopamine, somehow increasing the action of dopamine, and that seems to have this sort of addictive action.
And that in turn probably relates to dopamine being involved in producing these automatic behaviors, which is something I’ll come back to.
So as I’ve said, we have these habits, right, as well as the ability to produce more deliberative actions, and so given that both of these sorts of behavior can kind of co-exist, the brain has to be sort of smart about deploying one or the other, deciding when to deliberate and when to just behave automatically, and that must be in some way adaptive. And indeed, if you investigate these kinds of things using probes like the one I’ve described, you find that the use of habits, both in people and animals, is very situational. So under stress or distraction, or with differences in motivation or fatigue, people become more automatic, and conversely, if they’re more relaxed and less distracted and so on, people are more deliberative.
There’s also differences between people. So for instance, there’s genetic variation that affects the action of dopamine, genes that we think are involved in producing these habits. Drugs affect it, and importantly, I think for this discussion, various diseases, either neurological, that is, affecting tissue of the brain, or psychiatric, also affect these processes, and I’ll come back to that in a sec.
So this issue of control. So far I’ve spoken about there being a couple of different routes to behavior, at least a couple, and reflexes are sort of another. What does it feel like to have to exercise control, to deliberate in the presence of an automatic action?
So it’s been studied for a long time, again in neuroscience and psychology, that people have sort of prepotent impulses, and it’s possible to sort of override them, but it takes time and it’s error-prone, you’re prone to mistakes. So for instance, famously, there’s this test called the Stroop test, I don’t know if you’ve ever heard of it, but you ask people very rapidly to name the colors of words that are flashed up on the screen. I’m not going to do it today, I don’t think it’ll work over the internet, but people in this case would have to say red, but the trick is that occasionally there’s an interfering word. So the word green is actually written here, although what they’re asked to do is name the color of the ink, and people will blurt out green and/or slow down.
So there’s a sort of cost of overriding this prepotent word-reading response. We’re all trained from a young age to read words, and this has a sort of prepotent quality, such that you have the impulse to do it when it flashes up on the screen and it’s hard to kind of override it, it takes time and effort to really pay attention to not doing the wrong thing. And this is measurable, it’s a very robust effect.
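The interference cost described above can be written down as a toy predictive model. This is my own schematic, not an experimental model from the talk, and the millisecond values are arbitrary assumptions: naming the ink color takes a baseline time, plus an extra cost when the word itself names a different color and the prepotent reading response has to be overridden.

```python
# Toy sketch of Stroop-style interference (illustrative numbers).

BASE_RT_MS = 500       # assumed time to name the ink color alone
INTERFERENCE_MS = 150  # assumed extra cost of suppressing word reading

def stroop_rt(word, ink_color):
    """Predicted time (ms) to name the ink color of a colored word."""
    congruent = word.lower() == ink_color.lower()
    return BASE_RT_MS if congruent else BASE_RT_MS + INTERFERENCE_MS

print(stroop_rt("RED", "red"))    # congruent trial: 500
print(stroop_rt("GREEN", "red"))  # incongruent trial: 650
```

The robust empirical finding is just the sign of the difference: incongruent trials are reliably slower (and more error-prone) than congruent ones.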
And again, it’s disrupted in patients with neurological disorders and psychiatric problems. It seems in particular to involve sort of descending connections from parts of the brain called the prefrontal cortex, which are in the front, you know, near the forehead. So this is a sort of cross-section through the middle of the brain. I’m sorry I don’t have a pointer, but the very tip there, labeled 10P, is sort of where your forehead is, and then the green stuff behind is sort of toward the back of your head. The stuff at the front is important, partly because it’s involved in this kind of overriding and impulse control, and interestingly because it’s something that has grown disproportionately through evolution, so humans have a lot more of it than monkeys, and it’s qualitatively different from what rodents have. And also because it’s, in a sense, sort of topographically, or in terms of connection distance, the farthest from the periphery: you have wires coming in from your eyes and your ears, and you have wires going out to your body to control muscles, and then you have this thing that’s as far away as possible, in terms of neurons connected to neurons, from all that, from the world. That’s the sort of control part; that’s this prefrontal cortex.
Just a few more things to say, and then we can push on to a discussion. So we think that this part of the brain, among others, is involved in sort of suppressing these habitual or reflexive or prepotent responses. One reason we think that is because of patients with brain damage. So there’s a famous case study, from about 150 years ago now, of a man called Phineas Gage, a classic paper, “Passage of an Iron Rod Through the Head,” and his behavior was described in this case report. He survived an accident involving tamping gunpowder with a pointed iron rod (you can see the rod in the picture; I’m not sure why he would possibly use a pointed one for this job), but the gunpowder exploded, and the rod blew through his eye and the middle of his brain, and took out a big chunk of just the piece of brain I was showing you before. And he survived and recovered, but his personality was sort of famously changed, and he is described as having been previously very restrained, although he was a railroad worker:
“He’s fitful, irreverent, indulging at times in the grossest profanity (which was not previously his custom), manifesting but little deference for his fellows, impatient of restraint or advice when it conflicts with his desires. His mind was radically changed, so decidedly that his friends and acquaintances said he was ‘no longer Gage.’”
So it’s hard to know how much of this old case study is dramatized or overdramatized, but this rings true with the behavior of many patients becoming disinhibited in various ways and doing things that are inappropriate to the context or the situation. This often comes up in social situations, where we’re under very many restrictions as to how we should behave, and so behaving inappropriately socially is a sign, a symptom, of this sort of damage.
One last thing comes from the work with my colleague Claire Gillan. And this is just to say, we also think that imbalances between these more deliberative and more habitual impulses are a factor, potentially a causal factor, but certainly associated, with a number of different psychiatric disorders. So we identified a set of symptoms of a number of different psychiatric disorders, and I’ve put a number of them up on the screen here, that span many different sorts of compulsions. Obsessive compulsive disorder, compulsive checking, drug abuse, like alcohol abuse, eating disorders, you know, compulsive purging, so compulsive behaviors, but also uncontrolled and compulsive or intrusive thoughts, disturbing thoughts that you can’t control.
So these are apparently, sort of, habits of the mind. And it turns out that all of these symptoms, across these different disorders, tend to co-vary or to cluster with one another across people. This is a sort of common dimension of illness, and that, in turn, turns out, in separate screens of response withholding and deliberation, to be associated with these people tending to be more habitual and tending to be less goal-directed.
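The idea that symptoms “co-vary across people” can be made concrete with a small simulation. This is assumed, made-up data for illustration, not Gillan and Daw’s actual analysis: if several symptom scores are each partly driven by one shared latent dimension, their pairwise correlations across people will cluster well above zero, and that shared dimension is what a factor analysis would extract.

```python
# Schematic sketch of a shared "compulsivity" dimension (simulated data).
import numpy as np

rng = np.random.default_rng(0)
n_people = 500

# One latent compulsivity score per person (an assumption of this sketch).
compulsivity = rng.normal(size=n_people)

# Each symptom score = loading * latent factor + independent noise.
symptoms = {
    "ocd_checking":    0.8 * compulsivity + 0.6 * rng.normal(size=n_people),
    "alcohol_misuse":  0.7 * compulsivity + 0.7 * rng.normal(size=n_people),
    "eating_disorder": 0.6 * compulsivity + 0.8 * rng.normal(size=n_people),
}

# Symptoms correlate across people because they share the latent factor.
r = np.corrcoef(symptoms["ocd_checking"], symptoms["alcohol_misuse"])[0, 1]
print(round(r, 2))  # well above zero
```

In the real work, the direction of inference runs the other way: the clustering of symptoms across many people is what motivates positing the shared dimension in the first place.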
So the cause and effect here is, of course, not worked out as of yet, but it does seem to support a supposition that’s been around for a long time: that imbalance between these more automatic vs. deliberative impulses is at the core of these kinds of situations we would describe as compulsive, things like drug abuse that have a compulsive character, that you do them, evidently, even though at some level you know that you shouldn’t.
Ok, so that’s pretty much what I had to say. Just to sort of recap, and I’m hoping that this will trigger a discussion which brings in your knowledge of complementary topics, that really our ways of producing behavior in the brain are multiple. And in particular, the same behavior, like, you know, pressing a lever, pulling your hand away from something, it can happen for different reasons, like it could be more or less volitional or more or less deliberative, or more of a reflex or habit. So just observing an action, you might not know whether someone did it on purpose.
I should also say, habits are good, it’s not bad to automatize your behavior. It’s just like an autopilot in an airplane, it frees you up to do other things. So it serves an adaptive purpose, but it can make mistakes, and in particular it can do things that are contextually inappropriate, even though they’re sort of good usually. And overriding these habits, particularly ones that are involved in approaching and consuming a reward, for instance, is important. You can’t just approach any biological reinforcer that you want and interact with it, and so it requires us to override these things regularly. This is an effortful process and it’s very sensitive to disease and to brain damage. And so that’s what I have to say. There’s a textbook chapter I’ve written on these topics that might be of interest, and you can contact me if you like; that’s my address. So, thanks a lot.
Rabbi Geoff Mitelman: I want to thank Professor Daw for his time and his insight to be able to ask this question about whether neuroscience is undercutting moral responsibility. Shockingly, we did not answer that question here, but we delved into that question in a lot of depth. So thank you, and for those of you who missed the previous webinars “Are We Still Special If We Are Not Alone?” and “The Science of G’milut Chasadim,” you can see those on our website.
My name, again, is Rabbi Geoff Mitelman, and I’m the founding director of Sinai and Synapses, housed at CLAL, the National Jewish Center for Learning and Leadership, which is the host of this series of webinars, part of a larger project spearheaded by the American Association for the Advancement of Science Dialogue on Science, Ethics and Religion. It’s been wonderful to partner on this series of webinars. So thank you all for taking time to learn this afternoon.