When do our brains behave more like scientists, and when do they behave like lawyers? What’s the difference between a “fact” and an “opinion,” and how do we respond to those perceptions? For Professor Tania Lombrozo, religion — and the many different types of relationships people can have to it — provides a fascinating case study for human belief and the social structures that scaffold it. So many of our current ideological battles, especially in the turmoil of the United States, hinge upon how people react when their beliefs are refuted, and how belief in facts creates different realms of truth.
Tania Lombrozo, Ph.D., is a Professor of Psychology at Princeton University, as well as an Associate of the Department of Philosophy and the University Center for Human Values. She received her Ph.D. in Psychology from Harvard University in 2006. Dr. Lombrozo’s research aims to address foundational questions about cognition using the empirical tools of cognitive psychology and the conceptual tools of analytic philosophy. She blogs about psychology, philosophy, and cognitive science at Psychology Today.
Watch the Conversation Here!
Next week, January 19th, at 2 pm Eastern, we will be speaking with Michael Shermer, Ph.D., Founding Publisher of Skeptic magazine, Presidential Fellow at Chapman University, and author of New York Times bestsellers Why People Believe Weird Things and Giving the Devil His Due: Reflections of a Scientific Humanist, among others.
Read Transcript

Geoff Mitelman: Welcome, everybody, to our fifth episode of “Sacred Science: Gleaning Wisdom from Science and Religion.” I’m Rabbi Geoff Mitelman, the founding director of Sinai and Synapses, which bridges the worlds of religion and science. We try to explore and discuss some of the biggest questions we’re facing in our world, where we need wisdom from both science and religion – ranging from questions of genetic engineering, to questions of climate change, to something that I think many of us are thinking about now, which is: how do we have a shared set of facts and reality when we’re living in a very, very fractured world and country right now? And so I am really excited to be sitting here today with my friend, Professor Tania Lombrozo of Princeton University. Her research explores questions like: Why are we so compelled to explain some aspects of our social and physical environment but not others? How does the process of seeking explanations affect learning? How does the quality of an explanation affect our judgments and decisions? Do these features of explanation help us achieve particular goals, or do they sometimes lead us astray, leading to errors in reasoning and decision-making? And that’s something that we’re really grappling with and struggling with here, right now, in our country, as our Capitol was stormed, and half of the country is living in one set of reality, and another part of the country is living in another. So Professor Lombrozo – Tania, if I may – it’s wonderful to be sitting with you here this afternoon.
Tania Lombrozo: Thanks so much for inviting me and for including me in this great Sinai and Synapses series!
Geoff Mitelman: So I would love for you to share a little bit about some of the work that you’re doing, and particularly the distinction we might draw between “being rational” and “rationalization.” When you talk about the explanations that we look for, sometimes we come at it with a particular goal, and sometimes we can be very objective. So what prompts us to say, “This is the truth that I want to find”? And what prompts us to say, “I’m actually going to be a little bit dispassionate here and see wherever the evidence leads me”?
Tania Lombrozo: Yeah, let me say first why I think that’s so challenging. I think there are a number of reasons. But one of the reasons it’s challenging is that it’s not always clear what we should be counting as a “legitimate” source of evidence or a “legitimate” influence on our belief, and what isn’t “legitimate.” So I’ll give you an example that comes from something we’ve been thinking about in my research. Suppose that you come across some evidence that incriminates a good friend, but the friend says, “Look, I didn’t do it, it wasn’t me, you’re my friend, you’ve got to believe me!” You might feel compelled to think, “Well, this person is my friend, I should give them the benefit of the doubt.”
And so on the one hand, you have this evidence, and that should affect your belief. But on the other hand, it might seem very reasonable that you should give your friend the benefit of the doubt. Now, in giving your friend the benefit of the doubt, are you rationalizing? Are you doing something irrational? Or is that actually a reasonable constraint on your belief? Should you maybe have a higher standard of evidence when it comes to your friend than when it comes to a stranger? I raise this example to hopefully get your intuitions going that there are real, complex, what philosophers call “normative questions” about how we ought to reason in a bunch of cases. It’s not always straightforward that “this is the evidence” and “these are the ‘illegitimate’ influences on belief,” as if it were just a matter of drawing a really sharp dividing line.
Even acknowledging the complexities there, I think we can still ask a version of your question – presumably, as individuals, we all want to be on the rational side, not on the rationalizing side, right? So maybe a way to re-ask your question is, “Are there strategies that we can adopt that would help us be on the rational side, rather than the rationalizing side?” And I think one thing that’s really useful here is thinking about what kinds of mechanisms we see in science to do this. Because science is arguably our most successful institutional effort to try to systematically get at the truth about reality. And if you look at the way science works, we have all sorts of checks and balances: we have systems to try to acknowledge conflicts of interest, to make processes and reasoning transparent, and so on. A really fundamental part of the scientific process is trying to think about possible alternative explanations for your evidence. Sometimes we do this as scientists ourselves – we think, “Well, what are other things that could explain this? How can I test them?” Sometimes our very helpful reviewers and peers in the community will do that work for us, and will point out all of the other ways of thinking about things that we haven’t thought about. That’s a super important process, and it’s one that we can carry on in our everyday lives – we don’t have to be scientists to do this. So, in social psychology, one of the most effective strategies for “debiasing” our own thinking (“debiasing” is a sort of technical term here) is a strategy called “consider the opposite.” And it’s exactly what it sounds like. If you’re just considering one particular hypothesis, you’re going to naturally come up with all of the reasons why it might be right. But taking a moment to explicitly consider the opposite – to consider alternatives – is really simple, but it’s powerful. And it’s one thing that can help.
Geoff Mitelman: It reminds me of some of the conversations about how, when you’re presented with evidence, if it’s something that you like and want to be true, you ask the question, “Can I believe this?” And if it’s the opposite, then you ask, “Must I believe this?” And it’s a lot easier to say, “Oh, this might be true, I can believe this” – it just confirms my belief system – versus “Must I believe this? I really don’t want to; do I really have to accept this?” That’s a much harder thing to do.
Tania Lombrozo: Yeah, that’s a really nice way to put it. I think it really illustrates how we might hold different kinds of claims to different standards of evidence. And some versions of that may well be legitimate. For example, if you’re thinking about whether or not a vaccine is safe – one that you’re going to give to hundreds of thousands of people, potentially millions of people – you might hold yourself to a different standard of evidence than if you were testing an experimental treatment that’s only going to affect 10 people, right? And so there are lots of cases where the costs of being right or wrong are going to depend on the consequences of how we act. So when it comes to our actions and our policies, we’re very used to thinking that you might want to hold different standards of evidence, because the consequences are different. When it comes to our beliefs, it’s a little bit more complicated. Maybe for our beliefs, we’d want to say you should just believe whatever the evidence points to, and not allow yourself to shift the standards of evidence depending on whether you want to believe something or not.
Geoff Mitelman: And you’ve talked about this in some of your writing also – and you actually brought this up at the very beginning, with your friend who’s been accused of a crime – that we use the word “evidence” a lot, but there are really two different realms where the word is used. One is in science, where we talk about the evidence for this or that or the other. And that’s ideally dispassionate and objective – the evidence is supposed to show what it shows, not what you want it to show. But the other realm where evidence becomes really important is in law. And there, you actually kind of do want to use motivated reasoning – you want to use the evidence to build a case for or against someone. And sometimes our brains are lawyers, and sometimes they’re scientists, but I think our brains actually tend much more to be lawyers. They want to find that argument and say, “This is the accurate thing,” rather than, “Oh, here’s what the world is, and I will be this disembodied intellect that will just find the truth with a capital ‘T.’” Most of our life is spent trying to make a case for one thing or another, which is the way the world needs to work – you need to make an argument for particular policies, or for where you’re going to live, or for someone you’re going to vote for.
Tania Lombrozo: I think the majority of research in psychology supports what you just suggested, which is that we’re more often following the “lawyer” mode of first fixing on a conclusion and then marshaling the evidence, rather than dispassionately surveying the evidence and then coming to a conclusion. But I do want to put in a plug for the view that we are at least sometimes capable of following the more scientific model, where we see where the evidence takes us. Because ultimately, we are interacting with the world, and the world operates according to constraints that are not necessarily the constraints we want it to operate under, right? So take a case like an engineer trying to build a bridge. If that engineer were only operating by the lawyer model – by what they wanted to believe, and all of these other kinds of influences – then their bridge wouldn’t stand up. It would be a bad bridge, and we’d get this feedback from the world that we screwed up, because it doesn’t work. So there are mechanisms forcing us, in some ways, to be constrained by accuracy, because it is accurate beliefs that allow us to make effective predictions, to effectively control aspects of the world, to build the technology that we’re using right now to talk to each other. So even if we have lots of lawyerly tendencies, the fact that we need to effectively interact with the world is going to be something that pulls us the other way, some of the time.
Geoff Mitelman: You know what’s interesting – and I’d love for you to talk a little bit about this and your work on science and religion – is that facts are different depending on the realm that you’re living in. There are scientific facts, and then there are what we might call “social facts.” The number of votes in Georgia in the 2020 election – I mean, it is a physical stack of paper, but it’s also an agreement that will dictate who is going to be the president of the United States. That’s a social fact. Or religion – religion really has very few physical facts; it’s a lot of social facts, like the agreement that I’m a rabbi. If I go to receive communion, that is not going to be a fact for me, right? The wine is not going to turn into Jesus’s blood for me, because that’s not part of my social reality. So how do you distinguish between explanations and ideas from a scientific perspective, where there may be a physical fact or phenomenon that we need to understand, versus a social or religious or philosophical fact? Because I know you’ve done some work on how we approach those kinds of questions in different ways.
Tania Lombrozo: Yeah, I think that’s a really complicated question. Part of me bristles at the idea of talking about some of these aspects; part of what I think is at issue is, what do we consider the realm of the factual? I can tell you how people tend to think about this. There’s a lot of research that’s looked at which sorts of things we take to be a matter of fact, and which sorts of things we take to be merely opinions. And there are some cases where I think we’d get widespread consensus. If I think a square has four sides, and you think a square has five sides, we can’t both be right. There are certain kinds of mathematical facts, and certain kinds of basic perceptual or physical facts, where people tend to have this view that there’s one fact of the matter – one thing is right, and if you get disagreement, you can’t have two people who are both right. But then you get other realms, like preferences – aesthetic preferences, food preferences. If I think chocolate is the best ice cream in the world, and you think vanilla is the best ice cream in the world, can we both be right? A lot of people are willing to say, yeah, there’s a sense in which we can both be right. Or I think Picasso is the best artist and you think O’Keeffe is the best artist, and so on. So you already get this variation, where sometimes people think there’s a fact of the matter, and sometimes people think, actually, we can have different – I don’t necessarily want to call them “facts” – but we can have different beliefs without thinking those beliefs are in conflict.
Religion turns out to be a really interesting case here, right? So suppose that you believe that there’s a God with certain kinds of characteristics, and I do not. Can we both be right? How do people respond to that? Well, it turns out, they respond somewhere in between the way they respond to something like a mathematical or basic perceptual fact and the way they respond to something like a preference. People are more inclined to say that we can both be right about that – that one thing can be right for you and something else can be right for me – than they are when it comes to how many sides a square has, or whether two plus two equals four, or whether or not there’s a table in front of us. But one thing that we found, at least in our research, is that people who are more religious seem to think about religious claims in more of that objective, fact-of-the-matter sort of way.
Geoff Mitelman: Right. I was about to guess that.
Tania Lombrozo: So I think that answers part of your question, right? It says something like, “How do people tend to think about this?” Well, people do tend to think there’s variation in the world, in terms of what sorts of facts we have to agree on, and what sorts of things might be different for you, or right for you versus right for me, or we can sort of agree to disagree. But there’s also part of your question, how do I put this, there are cases where you might want people to have more agreement than we currently observe, right?
Geoff Mitelman: What could that possibly be referencing right now!?
Tania Lombrozo: And I think that raises a puzzle about “Why don’t you see that kind of agreement?” And especially if you’re on the side – a religious person can have a version of this too – but if you’re on the side of thinking, “Look, there’s just a fact of the matter about how many votes are in Georgia. It’s just an empirical question. You just go out and measure it.” And maybe we have a little bit of measurement uncertainty. But that’s just a factual matter. Like there’s no further complexity there. Sometimes it’s hard to reconcile how you could get the kind of disagreement that you see.
Geoff Mitelman: And it reminds me – I think Yuval Harari talks about intersubjective reality. So there’s the objective reality of “a square has four sides,” there’s the opinion of chocolate versus vanilla, and then there’s what he would call intersubjective reality, which is something that exists only because we all agree that it exists. The examples he talks about are nations, religion, and money. And there’s an interesting video on YouTube about “How many countries are there?” And it’s not just a matter of going and counting, because there are places where you have to ask, “Is this actually a country?” Is Kashmir a country? Is the Palestinian Authority Palestine? Is that Israel? Is that the West Bank? Is that occupied territory? Depending on who you ask, it’s not a simple question. Everybody agrees the United States is a country, but there are parts of the world where “Does this country really exist?” is a live question. And I think, very similarly, “Does this religion exist? Does God exist?” is not an objective question. It’s a lot like the question “Does this nation exist?” If everyone says “Yes,” that’s going to take us in one direction; if it’s treated as merely an opinion, that’s also problematic as well.
Tania Lombrozo: Yeah, I mean, some vocabulary that philosophers sometimes use is that they’ll talk about “natural kinds.” And those are supposed to be the kinds or categories that just exist in the world, where it’s a matter of going out and discovering them – people talk about carving nature at its joints. We often think about something like gold as an element as something we just discover; that’s not a social construct, that’s a meaningful category we discovered about the world. But then you get to things like, “What is it for something to be a country?” “What is it for something to be a person?” – all of these other things which are, arguably, not “natural kinds”; they’re something else. And I think there’s a real danger sometimes in thinking that, when it comes to everything beyond the very clear-cut factual, natural kinds of the world, it’s just a matter of opinion, and anything goes. I think that’s a mistake. Even when you’re in that category, where you’re talking about social kinds, moral kinds, or institutional kinds, there are still real conversations that we can have about better and worse ways to describe things.
Geoff Mitelman: And yeah, I think what’s interesting about religion – and, by the way, about the way Judaism is often portrayed on television, with interfaith weddings and interfaith couples – is that it’s often portrayed as, “Well, it’s all fine, we all love each other, and it’s all good.” That might be the case, but it’s also not as simple as an interfaith get-together of “We all believe in the same God and it’s all kumbaya.” Well, no, there are real differences here. But you also don’t want to say, “No, the only accurate thing that can happen is that we should only be representing halachically observant Jews,” or “No, everyone should be Protestant, and that’s the only kind of conversation, that’s the truth.” That’s also really problematic. So how do we navigate that middle ground, where it’s often perceived and shown as these two absolute poles, when, in fact, most of our life is somewhere in between?
Tania Lombrozo: Yeah, that’s such a hard question. There are so many parts to it. But let me say a couple things. One is, a few years ago, I got in trouble with a blog post that I wrote for NPR, where I saw a lot of what seemed like unproductive discussions about these kinds of issues. And so I tried to argue for an alternative to Common Ground. The idea behind Common Ground is that when you’re thinking about this, what you should do is figure out what we all agree on, and that should be the basis for some sort of shared understanding or shared reality. So maybe we can all agree that we value love, or that we should all love each other, and then that becomes the foundation for something. And it seemed to me that it’s problematic to assume that there is going to be common ground – there might not be – but we should still be able to have meaningful engagement.
And so I argued for what I called Charitable Ground. The idea is that it doesn’t start from the presupposition that if you and I disagree, we are necessarily going to find points of fundamental agreement that we can build up from, but that we should nonetheless interpret each other charitably: I should assume that you hold your beliefs for what you take to be good reasons, and you should assume that I hold my beliefs for what I take to be good reasons. And from that starting point we can have a conversation. To be frank, this was, in part, in response to certain kinds of rhetoric you see from people who may or may not be named Dawkins, among other things, where it feels like you don’t see that kind of charitable assumption. You see an assumption that people hold certain beliefs because they’re stupid, because they’re ignorant. And I think that’s a very unproductive starting point for conversations. So that might be one part of the story: “Let’s not assume common ground, but let’s start from this assumption of charitable ground. Let me start to understand what you take to be good reasons for your belief. And then we can have a conversation about whether that should be a good reason for belief or not.”
Geoff Mitelman: And that actually links to a text in Pirkei Avot, in rabbinic literature, which says that you should judge everyone in the scale of merit. That doesn’t mean everything someone says is going to be accurate, and it doesn’t mean I’m going to agree. But it means that when they are speaking, I am going to assume that they are coming at it from a perspective of integrity, or at least self-integrity – that they can say, “This is why I’m holding this; I’m not saying this to anger you” – which is not always the case online. I’m going to judge you not as a jerk who’s trying to make my life difficult, but as somebody who’s trying to explain their perspective. I think that’s a very helpful point of view, and a way to move forward a little bit.
Tania Lombrozo: Neat. I’ll have to follow up to get that reference from you. There’s another thing I want to add. I have to confess, I have a harder time knowing how to think about the religion cases that you’re thinking about. But I think there are a lot of cases that aren’t religion and aren’t science that are maybe easier to think about. One example might be a moral disagreement. And that’s a domain where I think you sometimes get people wanting to assimilate moral things to scientific things, treating it as if it’s just a fact of the matter: “This is the way it is morally, we’ve discovered it, end of story.”
And the other extreme would be people being a certain kind of relativist, where they say like, “Oh, well, anything goes, that’s right for you, but not for me.” And I think both of those are kind of problematic, because we’re not just going to figure out what’s morally right by doing science. But it also seems like it’s very wrong to say, “Anything goes and any moral perspective is equally valid.” How can we begin to have conversations about that and figure out what the morally right view is?
And there I get a lot of inspiration from looking at how philosophy works. Philosophy is a potentially secular – I mean, historically, often tied to religion – but potentially secular set of strategies and approaches for thinking about how you construct a good argument, that might be informed by what we know from science, for example, but in domains where science doesn’t give us the answer, and there are going to be better and worse arguments. It’s not the case that anything goes.
Geoff Mitelman: And I’m thinking of Paul Root Wolpe, who’s at Emory University. He gave a TED Talk a couple of years ago, and one of the things he said is, “Everyone can complete the sentence, ‘Ethics is about the difference between right and…’” – and everyone yelled “wrong.” And he says, “Right – that’s incorrect. Ethics questions are about right versus right.” What happens when you’ve got competing values? It’s not, “Should I murder this patient or not?” If you’re a doctor, that’s not it – there’s no doctor making the decision of “Oh, I know, I want this person to die.” It’s rather a question of: What is the compassionate course if this person isn’t responding to the treatment? What’s the relationship between the patient and the patient’s family? What are the privacy considerations? That’s where there are real questions.
And that’s where I think we need elements of philosophy, and law, and boundaries that can also potentially shift – being able to say, “Here’s what the boundary is right now, but new technology or new information may allow that boundary to shift if it needs to, because of new information, or new ideas, or because the world has changed.” The way that we think about LGBTQ marriage, for example: the view on the morality of that has changed drastically in 30 or 40 years, even in the last 10 years. What is right is often very socially constructed. But that can shift and change based on different knowledge. And sometimes that’s informed by natural science, and sometimes that’s informed by philosophy and religion.
Tania Lombrozo: Yeah, I think that’s right. I mean, sometimes people make a distinction – which is related to the one we’ve been talking about between facts and values, right – where science can tell us about the facts, but it can’t tell us about the values. I think that’s mostly right. But then I think you often get these very complicated cases that sit right at the boundary, where it’s not so obvious what our source of values is, and what’s going to be a good argument for values, and so on.
Geoff Mitelman: Yeah, I think that’s part of what’s challenging. I would love to hear a little more about the work that you’re doing on what’s viewed as a fact versus what’s viewed as an opinion, and how that plays out in terms of people’s responses, or in other ways in which they interact in the world. Because if you’re saying, “This perspective is a fact,” that often shuts down future conversation – it doesn’t open up a lot of charitable conversation. Because if I’m right, and you’re a Nazi, then obviously I don’t want to talk with a Nazi. That’s a very clear distinction. So how do we move to a conversation where people are thinking about the discussions of “what’s a fact” versus “what’s an opinion”?
Tania Lombrozo: Yeah, I think saying some things are mere opinion can shut down conversation too, right? Because if I say, “Well, here’s my opinion, I like chocolate ice cream,” and you say, “Well, here’s my opinion, I like vanilla ice cream,” then we’re like, okay, we have different opinions – end of conversation, right? So I feel like there’s actually something you want in the middle there, where we can agree that there’s a shared subject matter whose truth we’re trying to get at.
So there’s a little bit of research – this is not my own research – suggesting that people approach these issues differently depending on whether they’re in a competitive or a cooperative sort of mindset. If we go into it competitively – we’re debating each other, trying to figure out the single truth about which ice cream flavor is best, or which conception of God is best, or which political view is best – we’re more likely to approach it as a matter of right and wrong, where there’s a single fact of the matter we’re trying to get at. If we approach it more cooperatively – “Let’s together try to figure out what’s going on here” – on the one hand, that leads to what’s probably more constructive conversation, but it also seems to lead to a more subjectivist sort of view of what the truth is like. So that’s one interesting tension, I think, in trying to generate constructive dialogue.
In my own research, one of the things that we’ve looked at is what people take to be a good reason for belief, and how that might vary across types of people. Some of the work that I’ve done is with someone who’s now a collaborator, Emlen Metz, who did some studies that were motivated initially by interviewing people who had different views about human origins – in terms of their views about creationism and evolution. And one of the things that she noticed in these interviews was that these two different communities of people she was talking to seemed to talk about different reasons for belief. The people who endorsed evolution would talk about scientific consensus and the evidence and so on. The people who were creationists would sometimes talk about what they feel in their heart to be true, or what those that they love believe, and that was why they believed this.
And so this led to some more systematic follow-up work, looking at “What do people take to be a good reason for belief? And does this vary across science and religion?” So one finding is that if you compare people who are more religious – and in this particular context, it was people who endorse some form of creationism, versus people who endorse some form of non-theistic evolution – you do find that, on average, they consider these things to be differentially good reasons for belief. So the people who are on the side of creationism think that what you believe in your heart is a good reason for belief – what those who you love believe, that’s a good reason for belief. What you think would be morally good, that’s a good reason for belief. Whereas the people on the science side are more likely to say things like “Scientific evidence and scientific consensus are good reasons for belief.”
What I think is perhaps even more surprising is that within the same person, you can get people endorsing different reasons for belief, depending on whether it’s in the domain of science versus the domain of religion. So we did these experiments where we went through this very convoluted procedure to try to find some scientific claim and some religious claim that a given person endorsed equally strongly. So if somebody gave a confidence rating of – I don’t remember what our scale was, but say six out of seven – that God exists, we had to find some scientific claim they also endorsed with a rating of six out of seven. They might be as confident that God exists as they are that there are tectonic plates, or something like that. And we’d find these two kinds of claims, and then we would ask them questions about, “What do you think is a good reason for belief, about these specific beliefs or about the domain in general?” And even the same person will say that when it comes to scientific claims, good reasons for belief are things like scientific evidence and scientific consensus and so on. But when it comes to religious claims, good reasons for belief are things like “what I feel in my heart,” “what those I love believe,” “what I think is morally good to believe,” and so on. So you have all of these considerations that are not what we typically, at least, think of as sources of evidence – there might be a further conversation about what a real source of evidence is, but we wouldn’t typically think of these things as sources of evidence – and when it comes to scientific claims, people are much less likely to think they’re a good reason for belief.
Geoff Mitelman: And it sounds like that’s almost some of the challenges we’re facing now. I mean we’re seeing, for example, a lot of Republicans saying, “We should have unity here.” And a lot of Democrats are saying, “Wait a second, we need accountability for what happened right now.” In many ways people are talking past each other, because they’re not even really agreeing on what should be the terms of the argument. What is a good reason to believe, and then act on, something? We’re not even talking in the same language about what should count as evidence to move forward on some of these choices.
Tania Lombrozo: Yeah, that’s right. And I think one of the differences here is what people are taking to be a good reason for belief. One of them is how they’re construing relevant events, right. But another one, too, I think, which comes up in the domain of politics, is going to just be “What do you consider to be a relevant authority?” and the source that you trust and defer to. In most realms of human life, we’re not the personal experts on things, we defer to people we trust, right? So most of us, when we go to take a Tylenol for a headache or something like that, it’s not because we ourselves have done the research to figure out whether or not this is an effective way to reduce headaches or anything like that. It’s because – I mean we perhaps do have some personal experience – but largely, we’re deferring to a certain community of experts who we think are trustworthy in this domain for various kinds of reasons.
And so I think, in the case of science, even there, it’s not straightforward, who we should count as an expert. But it’s at least more straightforward than it is when we get to domains like, well, who is the expert, when it comes to political matters, moral matters, religious matters. I mean, within religious communities, you might have well-defined structures of deference and authority where you might think, “Well, you’re supposed to defer to this rabbi, or this particular court structure, and so on.” But I think it’s really not at all clear how you defer. So if you have people who are starting out in different places, in terms of which sources they consider to be the “authoritative sources,” it’s going to be very hard to get everybody to agree, because the disagreements go very, very deep, to the very sort of foundations of what we even consider to be a reliable source of evidence and a reliable authority in the first place. And that’s a very difficult thing to change.
Geoff Mitelman: This is actually something that came up in our conversation last week, which is the phrase that’s been going around a lot: “Trust the science, trust the science.” And the question is, should I trust Dr. Fauci, or should I trust my aunt on Facebook who said this? That’s actually a very easy piece of this. But if it’s a question of “Who do I trust in terms of being able to roll out the vaccine effectively, and to be able to make sure the right people are getting the right vaccination for the right reasons?”, that’s not a scientific question. That’s an ethical question. That’s a policy question. There’s argumentation that can happen there.
And I think without having those conversations, we’re losing a lot of what we’re able to do. And I think having religious leaders be able to say, “I am going to take the vaccine when it’s ready” – that would be a very powerful statement for a lot of people here. And there are also elements of identity markers, of saying, “I am not going to wear a mask because of X or Y or Z,” or “I am not going to get the vaccine because of X or Y or Z,” and that denigrates what public health is – but that’s because their reasons for doing that are different than a scientific reason. I think it’s more of an identity piece.
Tania Lombrozo: Yeah, that’s right. I mean, one distinction that I find useful in my research – and that I think other psychologists talk about as well, this isn’t just me – is that we often differentiate between epistemic considerations – considerations that have to do with getting accurate knowledge, representing the truth, trying to get an accurate representation of reality – and non-epistemic considerations, which could be moral reasons, could be social reasons, could be personal preferences, and so on. And so you might think that, broadly speaking, science is an epistemically motivated kind of enterprise, where we’re trying to get an accurate representation of the world. Whether or not we like it, whether or not it’s ultimately morally good or not, we’re aiming for accuracy there.
The first example I gave you, about giving the benefit of the doubt to a friend – you can think of that as a tension between an epistemic and a non-epistemic kind of consideration, right? On the one hand, the epistemic consideration is you want to be accurate in your beliefs, you want to base them on the evidence. On the other hand, you might think that there’s a loyalty kind of consideration. That’s a non-epistemic kind of consideration. And so one thing that has been argued for, I think pretty convincingly, recently, is that a lot of things that, on the face of it, seem like they should just be epistemic claims – things like “Is human activity contributing to global warming?” or “What was the vote count in Georgia?” –
Geoff Mitelman: I’m seeing this with racial justice and gender equality and different pieces there as well.
Tania Lombrozo: Yeah, that’s right. Claims about racial difference – some of them, not all of them, but some of them, arguably, are just descriptive claims that are either true or false. But what you see is that they start to take on a lot of non-epistemic kinds of roles. And you already alluded to one of them, which is something like social signaling – saying which club you belong to, right. So if you’re wearing a mask, you’re in one club; if you’re not wearing a mask, you’re in a different club. And it might be really important to you which club you’re in, and to signal to other people which club you’re in. Now, mask-wearing is also a policy issue, so that one’s complicated. But if it’s something like “The vote count was this,” or “Humans are or are not contributing to climate change” – on the face of it, it looks like a claim that should just be about epistemic stuff, just “Is it accurate or not?” But in fact, it takes on all of these other non-epistemic kinds of factors that influence what people end up believing.
Geoff Mitelman: And I’m wondering – part of the conversation that’s happening in our country right now is that many of the conversations we’re having with the people we agree with are all talking about it in an epistemic way, saying, “How can all those other people not understand the way the world is?” But trying to talk to somebody who has a different epistemic worldview – they’re not going to change their epistemic worldview. Democrats and liberals tend to talk a lot about data and facts and things along those lines, when in fact, I think it’s questions of relationships and emotions. And what’s interesting is that it’s the people on the right, actually, who have tended to say, “We’re all about the facts, facts don’t care about your feelings,” and things along those lines. And in fact, I think there’s a lot of emotion that happens there as well.
Tania Lombrozo: Yeah, I have a fantasy I’ll tell you about – maybe you can tell me how to realize it. So I think a lot of the cases that you’re talking about are problematic precisely because people don’t recognize what they are endorsing or doing for epistemic versus non-epistemic reasons. And my fantasy is that somehow people could have a clearer notion of this, and that this could help with a lot of disagreements.
And I’ll give you an example of something that I think is a little like this. So you might believe that your kids are the best kids in the world. And I might believe that my kids are the best kids in the world. And there is some sense in which you might think, “We can’t possibly both be right.” But I also suspect – I can’t speak for you – I suspect we would both be willing to say, you know, really, “When I say my kids are the best kids in the world, I don’t really mean that as a claim subject to epistemic evaluation. I’m not really saying that as something that aims at accuracy. When I say that, I’m saying something about how much I love my kids and how much I value my kids.”
And so even though on the face of it, it might seem like it’s something subject to epistemic evaluation, really, there’s something else going on in that case. And my fantasy is that people could more clearly demarcate why they might be inclined to endorse certain kinds of things, such that when they say something like, “I don’t think humans are causing climate change,” they could actually take a step back from that and say, well, “Actually, you know, I’m not really committed to the epistemic claim there. What I really want to express is this other stuff about these other values that I consider to be important.” And if we could somehow cleanly separate this out, I feel like we will be in a much better place to have a conversation and to come to some kind of agreement. But it’s a fantasy, I don’t know how to do that. So if you have any ideas about how to do that…
Geoff Mitelman: Well, there was a question that came up that links to this, about the link between science and Jewish law – that there’s no archaeological evidence of the Exodus, but we believe it occurred. And I actually think that links nicely, at least within the Jewish conception, to a phrase that is both wonderful and problematic for a lot of people: the Jews being seen as the “chosen people,” right. God chose the Jewish people – but I think you can see this in other religious communities as well – which is God having a relationship with an individual or a community in this kind of way. And I think a lot of Jews find it both inspiring, and really don’t want to say, “Okay, we’re the best.” We’re not going in that kind of direction, but it does mean that God has a relationship with – if you’re Jewish, I believe that God has a relationship with the Jewish people. Or if you’re Christian, “God has a relationship with me personally and/or my community in this kind of way.” And I don’t think that is necessarily an epistemic claim, right? I’m not going to be able to prove this scientifically, in the same way I don’t think we could prove Jesus was resurrected from the dead, or that the Red Sea split. I think trying to do that in an epistemic way ends up becoming problematic. But when I say God has a relationship with the Jewish people, there’s an analogy there to how I love my children – there is a relationship and there is a history, and there is a connection there that I want to build on, and that I find valuable in and of itself, rather than having a “scientific proof” in this way. And someone raised this question, too, that Jewish holidays and Jewish ideas are based on astronomy. And we don’t look at astronomy in Jewish law, necessarily, to be astronomers.
We do it to be able to look at it for holidays and the times of the year, and those different pieces that we can use the scientific knowledge. But it’s not always just for the scientific knowledge, it’s sometimes to be able to build and connect on that relationship.
Tania Lombrozo: Yeah, that’s interesting. I wonder if Judaism offers any vocabulary that would help here. I feel like sometimes the problem is we don’t have a good way to say, “I believe this for reasons that are not epistemic, but it’s still really important, it’s still really valuable – it’s not just a mere opinion.” At least I feel like I’m lacking a vocabulary that allows us to say, “This is not factual, but I also don’t want to say it’s a mere opinion, that all opinions are equally good, that it’s okay for you to prefer vanilla ice cream because clearly chocolate is better.” I don’t know how to talk about that middle ground. We want to be able to say, “This is really valuable, it’s really important. And here are my perhaps non-epistemic reasons for thinking it’s really important.”
Geoff Mitelman: One of my favorite lines that someone said about the Torah, at least, is that something doesn’t have to be factual for it to be true. And I think maybe one way to think about this is fiction. And I don’t mean fiction as in lies – but particularly if you watch Pixar movies, which are often really good pieces of literature, the creators will say, “We’re not looking to create a factual representation of what’s happening, we’re looking for” – the phrase that they’ll use is – “the emotional truth of this.” To be able to say, “I can resonate in this kind of way.”
And so I think it’s possible to do that without talking about it as a pure opinion in this kind of way. And that’s actually part of the challenge happening right now in our country – the questions are deeply emotional, existential questions. If we’re trying to talk about it purely as vote counts, I can post this, and then you can post that in response, and back and forth. But one of the big things that people talk about is that community organizing is actually really the way to get people connected, because you’re building the relationship and finding what gets people emotionally invested in this kind of way.
And I think that’s a real challenge there. Something about being true and being valuable without necessarily being tied – I don’t want to say it this way, but – not being tied down to what’s factual here.
Tania Lombrozo: Yeah, that’s interesting. I balk a little bit at calling it true.
Geoff Mitelman: With a lowercase “t.”
Tania Lombrozo: I definitely would want to have a way to talk about how this is really valuable, really important, even if it’s not “factual.” And I’d want to put truth on the factual side, but this might be just a semantic dispute, I’m not sure.
Geoff Mitelman: And there may also be an element of the social facts. What are the things that we agree on? Everything from law, to countries, to religion, right. Those are pieces that exist because we all agree that they exist here. There was an interesting question that came up from Gail, who said, “These ideas remind me of research where it’s easier to convince someone with strong but not factual beliefs to engage in conversation if you ask where they learned the information or how they learned it to be true? Can we prompt people to reflect in this way spontaneously?” Prompt, but not necessarily teach. And I also think there’s also the question of the backfire effect. So how do you open it up so they come at it from a charitable ground of being able to change someone’s mind, rather than an attack and a counterattack?
Tania Lombrozo: Yeah, that’s a great example. I didn’t know that finding, that’s definitely relevant.
Geoff Mitelman: Yeah, I’m curious whether you’ve seen ways that asking people how they know something, or how they came to their belief systems – whether those kinds of questions have led people to change their minds on different pieces?
Tania Lombrozo: I don’t know of research looking at that specifically, but I also don’t know the finding that Gail mentioned. There’s probably more work out there than what I know of. I do know an interesting line of work that I think just helps explain why you sometimes get something like a backfire effect. So I’m guessing many people know this, but a backfire effect is a case where you present evidence for some proposition, and rather than that convincing people or shifting their beliefs towards the proposition, it actually backfires, and they go the other way. So an example would be giving them evidence for anthropogenic climate change. But in fact, now they deny it even more strongly.
And at least some of the time, it seems like part of what’s going on there is what philosophers of science have called “auxiliary hypotheses.” We have all these other ideas that we can call up that allow us to explain, or explain away, observations that maybe don’t fit our expectations. So in that particular case, there are some common conspiratorial ideas – that there’s a lot of scientific consensus on an issue because scientists have particular kinds of motives, that they’re being manipulated by various sorts of goals about getting grants and the way science works, and so on – lots of misconceptions about how science works. But for somebody who has that view, if you give them lots of evidence of scientific consensus about anthropogenic climate change, rather than making them more convinced that it’s really happening, they might take that as better evidence that there’s something conspiratorial going on, that there’s some coordinated nefarious effort. And so the very same piece of evidence can be interpreted in very different ways, depending on what other kinds of background assumptions you bring to bear on the case.
So I think thinking about those cases suggests that a lot of it does have to do with what’s the baggage you bring to the situation. And that baggage includes these background assumptions, but also things like which kinds of sources you take to be authoritative and that you’re drawing upon. And so if you want to actually have real change in people’s beliefs, you might need to go deeper – not just to how they interpret this piece of evidence, but what are the background beliefs that they are using as a lens through which they interpret that piece of evidence? And what are the sources that they consider to be reliable, so that they take those background beliefs seriously, and so on. You’re going to have to go back several steps to get to a point where maybe you can find something more like common ground, or charitable ground, from which to begin a dialogue.
Geoff Mitelman: There was a question that just came up, too, asking can moral and ethical relativism be evaluated with your paradigm? And I think that’s part of the challenge, because you’re using a relativistic paradigm to be able to deal with questions of moral and ethical questions. Can we be using this question of facts versus opinion? Is that also a question of moral and ethical relativism of having the scale in this kind of way?
Tania Lombrozo: I see. I guess one concern is you might think it’s just a matter of opinion what counts as a fact versus an opinion. That would be absurd. But it sounds like another version of it might be a kind of skepticism that there even is a distinction between facts and opinions – isn’t this already assuming something? I think there’s a certain kind of radical skepticism that I do not have a ready response to. I mean, there’s a certain kind of skepticism that Descartes tried to start with: “Let me assume that I’m starting from nothing, and where can I get to? Or, I could be a brain in a vat – how do I know I’m not a brain in a vat?”
The truth is, I don’t think I have a good solution to those kinds of deeply, deeply skeptical worries. But those are also not the kinds of skeptical worries that keep me up at night, at least – perhaps they keep other people up at night. I think there’s another kind of skeptical worry, which is more just: we look at other people in the news who have views that seem crazy to us, right? And this happens on both sides of all of these debates. I’m sure everybody thinks, “How can that person possibly believe that? That seems insane. How can they take that source of evidence, or that source of news, seriously?”
And then there’s a skeptical worry that sort of does keep me up at night more, which is: well, how do I know I’m not in that situation? How do I know that I’m the one on the right side of this evaluation? And I think for that kind of skeptical worry, I’m willing to grant myself certain kinds of basic assumptions: that there is a shared reality, that induction works – the idea that we can use past experience to make reasonable inferences about the future. Very basic kinds of assumptions that I think are foundational to the process of science. I’m kind of willing to accept those. And I think once you accept those, it’s not much further to think that there are certain things that are a fact of the matter, and that we can investigate those. I don’t think it tells us how we get values and how we figure out good reasons for values – I think that’s a further conversation. But I think if you take science seriously, you quickly get to the point where there are certain things that are fact-of-the-matter. And if you don’t take science seriously, I think you have a real challenge explaining how we created the technology that’s allowing us to talk right now. It’s miraculous that we were able to do that if we don’t have these processes that actually allow us to use our experience to make reliable inferences about the way things are.
Geoff Mitelman: Right. There’s a phrase that is a little bit loaded, but I’ll use it anyway, which is the word “faith.” The Hebrew word for faith is emunah, which really means trust, which is: “I’m going to trust that this is accurate.” I do not have the tools to evaluate the work that you’re doing, I don’t have the tools to evaluate the effectiveness of the vaccines, I actually don’t have the tools to effectively evaluate political arguments, because you need a certain level of expertise in this kind of way. So I need to be able to trust the people who are making these kinds of decisions. I think one of the biggest problems that we’re facing is that we’ve lost a lot of trust in our leaders. I think we’ve lost a lot of trust in our ability to find shared realities here. And so being able to find “What is that shared reality? Where can I at least find a point of charitable ground that at least could potentially allow us to move forward?”
But we also are human beings. I have a few friends who are still very convinced about a lot of different things over this last week, and I have been very tempted to be able to post responses to them, and I am holding back on doing that, because I’m thinking what would be the point? I’m not going to be changing their mind in this kind of way. I think living in all these alternate realities has become really difficult and we need to be able to find out, where can we find points of connection and at least shared understanding?
Tania Lombrozo: Yeah, I wish I had good answers! I share this set of complex concerns about this. The problem you pose operates at many scales. And so you might not want to start a fight with one of your Facebook friends, for example, because you don’t think it’ll go anywhere. In the same way that at Thanksgiving for many families, politics are kind of bracketed, right, we kind of just agree to disagree for that evening so that we can have a nice family experience. And I think that might be okay. But I think we can’t have that attitude towards the larger societal issue. So at some scale of this problem, we’re going to have to engage and be willing to have the hard conversations. And what I don’t know is what’s the most productive and effective way to do that?
Geoff Mitelman: Right. I think that’s the hard challenge. My rule of thumb is often: is this going to be a constructive conversation? And that’s why I tend not to talk to a lot of people like Richard Dawkins. And I also don’t talk to a lot of people, necessarily, who are Young Earth creationists about these kinds of questions, because I have trouble believing that they would be able to hear me, and I would have trouble believing that I would be able to hear them.
And yet, there actually is, I think, a wider swath of people than we might expect who would at least be willing to engage in this kind of conversation. I think we need to be more charitable, to be able to say, “I can engage in this conversation, and it may be more constructive than I would think. And the only way I would know is if I engage in this conversation.” But coming back at it with “How and why are you such an idiot?” is not going to be an effective way to move that forward.
Tania Lombrozo: Yeah, I think you’re right. There are different types of people – that’s one way to characterize it. But there are also different contexts. And I think that’s something that might require further thought. I’ll give you a personal example – this is an anecdote, I don’t have good evidence for this – but I’ve been vegetarian for a long time. And one thing that happens when you’re vegetarian is that you sit down for a meal with somebody new, or you’re going to their house for a dinner party or a restaurant, and it comes up that you’re a vegetarian, and often they ask, “Oh, well, why are you vegetarian?” Or they tell you something, right? And you have to decide, “Am I going to, in this moment, engage in the arguments for why I think it’s unethical to eat meat?”
And so for a long time, I had the policy that I would tell people, “I’m really happy to have this conversation, because I have thought about this a lot, and I have opinions about it. But as a policy, I will not have this conversation over a meal.” And that was because people are very uncomfortable, and very defensive, if you try to engage them on these issues while they are eating meat, or wishing they had ordered the meat, or resentful that they have ordered something else.
I wonder what the equivalent of that is for some of our current issues. I think right now, it’s especially hard to engage because we’re trying to engage over the fraught issues in a moment when they are consequential. I don’t think we have the luxury of waiting for a moment where things aren’t fraught. But might there be ways to start, not with those fraught issues, or not in the most fraught contexts, but in the ones where we can maybe think a little bit more… in a way, where we’re not so personally invested in the moment, in a way where we’re not so defensive? I don’t know what the equivalent of the conversation that’s not over the meal is for politics, maybe you have ideas.
Geoff Mitelman: Right, I think that’s an interesting idea. Sometimes we need both the rational and the emotional, because the emotional is what gets us riled up. One of the most powerful things that I saw was about Sandy Hook, which said “Sandy Hook did change everything – people decided killing kids was okay.” We’ve lost the emotional valence there. We’re seeing people who are so upset that Twitter got rid of their bot followers, and they lost 30,000 Twitter followers, and they’re not concerned about the fact that 3,000 people are dying every day because of COVID.
And so we need to be able to have that emotional connection – that emotional relationship. But I think some of it is how do you have the parameters of that conversation, so that there’s an agreement of “What are we going to be talking about in this kind of way?” And it sounds like also, to be able to say, “I’m happy to talk to you about this, but I’d like to know why you’re asking it. I’d like to know, what are you hoping to be able to get from asking me this question?” And that probably allows them to know, “I’m sort of curious about your story,” without feeling like it’s a judgmental thing, and can actually then have a wonderful meal in this kind of way. And that allows them to be ready for whatever the conversation is, and that may then change what their perspective is going to be.
Tania Lombrozo: Yeah. I’ve only seen sort of indirect evidence for this, but I think there’s a compelling idea in a lot of psychology that if you can first establish the human relationships, just as normal human relationships – “We’re both parents, we can bond about that. We’re both people who have or haven’t had this medical issue, or this travel experience, or all of these other facets of life, or we both like cooking.” There are all of these things that you can create a human connection around that are not the religiously fraught ones, the political problems, and so on. If you have that foundation, you’re then going to be in a very different position for engaging these issues than if the only point of engagement and the entry point for engagement are these issues.
Geoff Mitelman: And there’s a phrase that we’ve been using a lot, which is “relationship equity.” You need to build a lot of relationship equity. There’s value in just simply having the relationship there, but then you can draw on it if you need to. And if there’s no equity there, then it degenerates. Building those conversations about what seem to be silly, banal things actually builds relationships and connections, and that helps us move forward – God willing, that’s our hope, at least.
Tania Lombrozo: Yeah.
Geoff Mitelman: Well, Tania, thank you so much for taking the time to think and talk about these questions. I wish we could solve them all here in our hour, but your insights and your work of exploring these different kinds of questions help give us insight into who we are as individuals and as a society, and into our relationships. So thank you so much for taking the time to talk this afternoon.
Tania Lombrozo: Well, thanks for having me. And you’ve given me a lot to think about and more motivation to try to get to the bottom of some of these issues.
Geoff Mitelman: Well, we hope that you can look at Tania’s work at the…
Tania Lombrozo: My lab is called the “Concepts & Cognition Lab” – cognition.princeton.edu will get you to my lab website.
Geoff Mitelman: And you can see some of the work that she’s been doing. You can see our work on sinaiandsynapses.org, and you can see all of the conversations we’ve been having from Sacred Science. Next week, on January 19th, we’re going to be talking with Professor Elaine Howard Ecklund of Rice University, who directs Rice University’s Religion and Public Life Program, about the way in which religion is impacting so many of these questions, particularly on racial justice – that’s something they’ve been working on a lot. But she’s done tremendous work on the interplay of religion and science. So we hope you will join us next week. And Tania, thank you so much for the time here this afternoon.
Tania Lombrozo: Absolutely, thank you.
Geoff Mitelman: Thank you all.