Judaism has always celebrated the dynamic tension between the p’shat and the drash — the building blocks of our texts and how they are interpreted. Today, we see this dynamic play out in political perspectives, life experiences, and even sources of truth and information. Data doesn’t simply “speak for itself,” but rather gets filtered, changed, misinterpreted or even weaponized based on who’s hearing it, who’s sharing it, and who’s using it.
Professors Brian Nosek and Cailin O’Connor tackled these issues as part of our series “Learning from Scientific Experts for the Yamim Nora’im.” We want our policies, recommendations, sermons and teachings to be based on accurate data that will make a real positive impact on others, especially right now surrounding COVID-19 and racial justice. But we are the ones who interpret it. So how can we better integrate new and changing facts, as well as differing and challenging perspectives, as we teach, preach and connect with others — both on our sacred texts and the events of our own lives?
This webinar is presented by Sinai and Synapses, in consultation with the American Association for the Advancement of Science’s Dialogue on Science, Ethics and Religion, and funded by the John Templeton Foundation. It is run in partnership with Clal – The National Jewish Center for Learning and Leadership, the Central Conference of American Rabbis, the Rabbinical Assembly, and the Reconstructionist Rabbinical Association.
Professor Brian Nosek: Decisions Through Incidental Factors
These are two images taken on the same day in the aftermath of Hurricane Katrina, before the rescue operation really got underway. You might recall how that was delayed and caused a lot of struggle and death. I point out these two in order to identify the corresponding captions associated with each of them.
So for the one on the top, “A young man walks through chest-deep floodwater after looting a grocery store in New Orleans on Tuesday.” The one on the bottom, “Two residents wade through chest-deep water after finding bread and soda from a local grocery store after Hurricane Katrina.”
Now this is an anecdote for a broader investigation of how it is that the perceptions of individuals may be different based on what are, presumably, incidental factors. So what the person is doing, even though ostensibly they’re doing the same thing, might be interpreted differently, based, for example, on the race of the person being perceived. So even though it’s an identical behavior, we may judge an African-American doing the behavior as doing something criminal, while judging whites doing the same thing as doing something that is survival-oriented. It’s also interesting to note that these captions, likewise, refer to the person at the top as a “young man”, and the people on the bottom as belonging – they’re “residents,” not criminals.
So if we ask the caption writers, “Did you use race to make the decision how to write your caption?” They would likely, and probably genuinely, say, “No, that’s not my job, my job is not to provide an assessment of people by race, my job is just to simply describe what it is you can see.” And they can genuinely and honestly say that while simultaneously being influenced by some of these assumptions, interpretive mechanisms, that they may not even recognize are influencing their judgment.
And we have these same sorts of mechanisms, not just for social perception based on gender, race or age, but how we consume information in general. So we and many others have done research about how our social judgment, our decision-making, is influenced by ostensibly irrelevant features.
Professor Cailin O’Connor: How Did Hydroxychloroquine Become Politicized?
So we’ll discuss a process by which people can become misinformed, even in the absence of what we might think of as traditional misinformation and disinformation. This is a process driven by these dynamics of trust and mistrust related to some of the things Brian was talking about. All right, so I want to give a little example of the kind of thing I’m talking about, and then sort of give you a hint of what our network modeling of this kind of example looks like.
So in February, this article was published by a Chinese research team looking at different treatments for COVID, and in particular looking at how antivirals would possibly inhibit the success of the virus in-vitro, so in the lab. And they found that, among other things, remdesivir and chloroquine did sort of inhibit the coronavirus. Now, on the basis of this finding and also general knowledge of antivirals, many doctors across the world, as the pandemic heated up, started prescribing hydroxychloroquine – a newer, kind of safer version of chloroquine.
Now, at the same time, something sort of stranger started happening on social media, and especially Twitter. So on March 11, a Twitter user in California, or sorry, living in China posted this tweet: “Chloroquine will keep most people out of hospital. The US hasn’t learned about that yet.” He tweeted at a blockchain investor, James Todaro. On March 13, Todaro posted that there is growing evidence that chloroquine is an effective treatment for COVID, and also links this Google doc that he and another person who’s a lawyer, and then a random person – none of them are doctors – made, arguing that chloroquine is a good treatment for COVID. Then Elon Musk, a few days later – this is the billionaire investor – tweets about this document and then follows up with a tweet saying “Well, maybe hydroxychloroquine, a sort of newer version of chloroquine, would be even better.” A few days after that, Fox News has Greg Rigano, this lawyer, one of the document’s authors, on to talk about hydroxychloroquine. And then shortly after that, Donald Trump, the president of the US, starts endorsing hydroxychloroquine as, basically, a miracle drug. And then from there, what happened was that hydroxychloroquine and its efficacy became this locus of disagreement, argumentation, anger and polarization in the US.
By polarization what I’m referring to are situations where you have some groups in a society that are holding stable, mutually exclusive beliefs, often even in the face of debate and discussion. So here we have liberals becoming very skeptical of hydroxychloroquine even though at that point it was actually a perfectly promising drug treatment for COVID, and conservatives becoming big boosters of hydroxychloroquine, even though it was not yet fully tested. And notice that this polarization about this kind of new emerging belief, it didn’t just happen out of nowhere. It happened along these existing party lines. And so it became part of what we might call “belief factions,” groups in which those involved hold multiple, shared, polarized beliefs.
Geoff Mitelman: Welcome, everyone, to our second webinar through Sinai and Synapses as part of our series “Learning from Scientific Experts for the Yamim Nora’im.” I’m Rabbi Geoff Mitelman, I’m the Founding Director of Sinai and Synapses, which bridges the worlds of religion and science. It’s incubated at Clal – The National Jewish Center for Learning and Leadership.
This is the second in a series of webinars for rabbis to gain some scientific knowledge for you to use in your sermons or communications, questions and decisions leading up to Rosh Hashanah and Yom Kippur. If you missed our first part, or want to watch or read the transcript of our first webinar, which was “Risks and Rewards in a World of Unknowns,” it’s online, it’s on our website at sinaiandsynapses.org.
Since COVID-19 hit, we’ve lost a lot of things. And I think one subtle piece that’s a huge loss is those impromptu, unplanned conversations with people that we might not talk to otherwise, that would challenge us in a way that’s respectful, rather than potentially threatening. It’s like a line I heard from a board president once, who said “There’s the meeting — and then there’s the real meeting that happens in the parking lot afterwards.” We’ve lost the offhand conversation at the oneg about how someone’s sick mother is doing, or hearing what’s going on in someone’s life because they happened to be walking in while you were making copies for Torah Study. Instead, I think a lot of our information and our conversations are now predominantly through social media, and any conversations are by Zoom. And the Zoom conversations are planned at a specific time, with a particular agenda. It’s hard to have any kind of back and forth “dialogue,” both because of the nature of the pandemic and also the limitations of the technology.
This often means that our sources of information are carefully curated – sometimes that’s intentional, but often it’s not; it’s because of whatever the algorithm decides we’re going to be looking at or reading right now. The informal, unplanned, unscripted relationship-building is what greases the wheels of our social world. Now, in many ways, that grease is almost totally gone, and that’s ground all the wheels to a stop; it’s locked us into our own perspectives. And this has made it harder for us to discover new facts that may challenge us, as well as ways to integrate those facts into our worldview, which may then need to change. We’ve really gone from talking to each other to talking past each other, especially surrounding points of view and stories that make us uncomfortable. And even more disturbing, it’s now even harder than it had been before for us to be able to distinguish accurate information, accurate but changing information, well-intentioned but false information, and what we might call willful deception.
So that’s what we’re going to explore here today. Regardless of whatever topic you might be thinking about for the High Holy Days — you might be thinking about COVID-19, or racial justice, or free speech or “cancel culture,” or Israeli and American politics — you are going to be, obviously, looking for and using data and stories to share your message. There is a p’shat that you’ll be using for these topics, and that p’shat is going to form the basis for your argument. But unlike our texts, there are multiple different “p’shatim” on all the different topics that we’re looking at. We can find whatever data we want to be able to find to support the argument that we’re looking for. The line between p’shat and drash has always been a very fine one, and that’s also the case surrounding facts and data right now. And so as we go back and forth between our p’shat and drash, there’s an interplay between the facts on the ground and how we use them.
We know that the rabbis honored it when someone said something b’shem omro, in the name of someone else. Now, that was out of a level of respect for what that rabbi said, but it also allows us to know who said what — and perhaps why. It becomes less about “truth” and more about “trust” — what sources of authority, and knowledge, and precedent do you trust? And why? And what happens if someone else trusts a different source? What happens if your source of knowledge makes you rethink what you had believed?
So as you plan your remarks for the Yamim Nora’im, in whose name, b’shem omro, would you be sharing those data and stories, and why? And as many people will be quoting you, and thinking about your remarks — some agreeing with you, some not agreeing with you and really challenging you — what do you want to make sure is both accurate and challenging to them as they think about their own truth and grow more fully as we enter into 5781?
Before I introduce our guest speakers, I need to thank a few of our partners. First, I need to thank our program administrator, Rachel Pincus, and our publicity partners – Clal, the CCAR, the RA and the RRA. I also need to thank the people who have supported Sinai and Synapses, including some of our donors, as well as the American Association for the Advancement of Science Dialogue on Science, Ethics and Religion, who’s our programmatic partner, and most of all, the John Templeton Foundation, which is our primary funder. You should know that through the Templeton Foundation and some of our other donors, Sinai and Synapses has an open application for a grant for $3600 for your community to explore Judaism and science, and the deadline for that is coming up July 23, so feel free to reach out to me if you have questions about that.
Our two speakers are going to present for about 10-15 minutes, and you can ask questions through the chat, and we’ll try to respond to as many as we can. And these two speakers are two people who are experts on how we discover facts and what happens when those facts change; they have also explored why people believe what they believe, and why people believe and share misinformation, even if we don’t mean to. And it can help all of us think more clearly about how facts can lead to uncomfortable truths.
So first we’re going to hear from Professor Brian Nosek, who is the co-Founder and Executive Director of the Center for Open Science that operates the Open Science Framework. He’s also a Professor in the Department of Psychology at the University of Virginia. He received his Ph.D. from Yale University in 2002 and co-founded Project Implicit, a multi-university collaboration for research and education investigating implicit cognition – thoughts and feelings that occur outside of our awareness or control. He investigates the gap between values and practices, such as when behavior is influenced by factors other than one’s intentions and goals. Research applications of this interest include implicit bias, which I know a lot of people are thinking about right now, decision-making, attitudes, ideology, morality, innovation, and barriers to change. He applies this interest to improve the alignment between personal and organizational values and practices. In 2015, he was named one of Nature‘s 10, and to the Chronicle of Higher Education’s Influence List.
And then we’re going to hear from Professor Cailin O’Connor, who is a philosopher of biology and behavioral sciences, a philosopher of science, and an evolutionary game theorist. She is Associate Professor in the Department of Logic and Philosophy of Science, and a member of the Institute for Mathematical Behavioral Science, at UC Irvine. She is currently co-administering the NSF grant “Consensus, Democracy, and the Public Understanding of Science” with philosopher of physics James Owen Weatherall (previous NSF grant: “Social Dynamics and Diversity in Epistemic Communities”). Their co-authored trade book The Misinformation Age was published by Yale University Press. And her monograph The Origins of Unfairness was published in July 2019 by Oxford University Press. Her book Games in the Philosophy of Biology was published in the CUP Elements series in 2020.
So I am now going to turn this over to Professor Nosek, who is going to share a little bit about his work on implicit bias and the ways that we sometimes think that we aren’t always aware of. So I’m going to turn it over to him.
Brian Nosek: Thank you, I’m delighted to be with you today. I will also share my screen for a couple of images for framing the discussion. As was mentioned, my interest substantively is in the gap between values and practices – what we want to do, what we’re trying to do, what we think we should do, versus what we actually do, in our everyday behavior. And a big part of what it is we study is how it is we can end up doing things because of factors outside of our awareness and control, and take in information that fits our current beliefs in order to reinforce rather than to challenge them.
So what I’d like to do for framing this discussion is start with something very basic. And what I mean by basic is a basic process that the mind has to tangle with in order to be part of the social world. And that is: How is it that we get information about what is out there, to become a representation in our mind, so that we can interact and make decisions about what’s out there? Because our mind is encased in the brain, which is encased in the skull, and the only way that we get information about what’s happening out there is through our sensory systems – touch, sight, hearing. And those sensory systems take in information in the forms that it’s delivered, convert it to electrical and chemical signals, and send that to the various areas of the brain that are responsible for turning that into an experience, a way of understanding, of representing, what it is that’s happening now.
That process is partly bottom up, in the sense that it takes these bits of information, of color, of location, of shading, of darkness or lightness, and pulls that in to try to construct an understanding of what’s out there. And it’s also partly a top-down exercise, where we have systems in our mind that impose interpretations on the information coming in. And those exist, very importantly, because the information that’s coming in bottom-up is often complex, it’s overwhelming, it has gaps, there are ambiguities. And so we need interpretive mechanisms to help us to see or to understand what it is the information is communicating.
But a consequence of this is that reality, and our experience of reality, are not the same thing. Those interpretive mechanisms provide an understanding that is not necessarily an objective understanding, in the sense of corresponding directly to reality as we experience it. And there are lots of ways to illustrate this. One of my favorites is to refer to some visual illusions that highlight how it is that our experience of reality and reality are not the same thing.
And one of them is this checkerboard by [Edward] Adelson, who is on the faculty at MIT. And he drew this to illustrate something important about perceptual systems, but the key part, for our purposes, are the squares labeled A and B. And what’s important about the squares labeled A and B is that they are exactly the same shade of gray. Now, of course, when you look at that, you say “No they’re not, they are not the same shade of grey.” But in fact they are the same shade of grey. And you may still say “But no, they’re not.” But they are!
So how can we know that they are? What I’m going to do – this is just in Powerpoint – I’m just going to drag this square labeled A and move the one up that’s B on top of A – and you can see that it’s now the same shade. I can put it back, or I can drag it down and put A on top of B. And you can see that it’s the same shade, right. Now I put it back and the illusion is still there, right.
This is a crazy illusion, because it’s so obvious that they’re different shades of gray, and yet they’re not. There’s other ways that we could see that, in fact, they are the same shade. Like if I advance to the next slide, […] I can connect the two with the same shade. The illusion goes away, but of course I remove it and the illusion pops right back. So the reason that this occurs, we could spend the entire time on, but the short version is that we have some information that we’re getting about the shades that are hitting our retina, and activating the rods and cones about the shades on A and B, and passing that along to our occipital lobe to process what it is the shades there are.
But there are also these interpretive mechanisms, these top-down assumptions, that are imposing on understanding here. One of those, of course, is that we can recognize this as a checkerboard. And so we know checkerboard – light, dark, light, dark, light, dark. It makes sense that A would be darker than B. We also know how shading works, that the green thing is blocking B, so it makes it look darker than it actually is. And all of those factors, our mind can take into account automatically, without us even knowing, to adjust our experience of what A and B look like.
What I’ll very briefly describe is that edges are very important in perception. So when a light and a dark part meet, that edge gets amplified to help us distinguish where one object ends and another object begins. B is surrounded by dark edges, so our brain lightens it by comparison to amplify the edge. A is surrounded by lightness, so it darkens it a little bit to amplify the edges. All of these are things our mind is doing in order to help with an interpretation, and what it ends up giving us as an experience is that A and B are different shades of gray, when we know that they’re not. They’re not.
And here there are two important implications for us for the purposes of this discussion. One of them is that the same thing, something labeled “A and B,” can be experienced differently because of the context around it. Context around it changes our experience of the thing. The second implication is that knowing that A and B are the same shade of gray does not change what you perceive. It does not change your internal experience.
And that’s because perception is not subject to reason. You don’t get to decide your experience. That is fed to you by all of these very mature, complex, usually very accurate things that our minds are doing in order to provide an experience of what’s happening. But knowing something can be different from the experience that we have. And so any effort that we have to treat A and B as the same is overriding the experience that we’re having. And these two important implications have a social correspondence. And you can think of similar images in recent issues that we’re dealing with in social justice. These are two images taken on the same day in the aftermath of Hurricane Katrina, before the rescue operation really got underway. You might recall how that was delayed and caused a lot of struggle and death. I point out these two in order to identify the corresponding captions associated with each of them.
So for the one on the top, “A young man walks through chest-deep floodwater after looting a grocery store in New Orleans on Tuesday.” The one on the bottom, “Two residents wade through chest-deep water after finding bread and soda from a local grocery store after Hurricane Katrina.”
Now this is an anecdote for a broader investigation of how it is that the perceptions of individuals may be different based on what are, presumably, incidental factors. So what the person is doing, even though ostensibly they’re doing the same thing, might be interpreted differently, based, for example, on the race of the person being perceived. So even though it’s an identical behavior, we may judge an African-American doing the behavior as doing something criminal, while judging whites doing the same thing as doing something that is survival-oriented. It’s also interesting to note that these captions, likewise, refer to the person at the top as a “young man”, and the people on the bottom as belonging – they’re “residents,” not criminals.
So if we ask the caption writers, “Did you use race to make the decision of how to write your caption?” They would likely, and probably genuinely, say “No, that’s not my job, my job is not to provide an assessment of people by race, my job is just to simply describe what it is you can see.” And they can genuinely and honestly say that, while simultaneously being influenced by some of these assumptions, interpretive mechanisms, that they may not even recognize are influencing their judgment.
And we have these same sorts of mechanisms, not just for social perception based on gender, race or age, but for how we consume information in general. So we and many others have done research about how our social judgment, our decision-making, is influenced by ostensibly irrelevant features. So a simple example is, you take a policy that could be the law of the land, and you ask someone, “Please evaluate the quality of this policy. See if it aligns with what your beliefs are, with what you think should be done,” right? For example, should kids with disabilities be mainstreamed or have special education courses? There are lots of good and difficult arguments on each side of this issue, but if you assign inclusion to be proposed by a Democrat and special education to be introduced by a Republican, you will see a dramatic effect on people’s endorsements of the policies based on their own political identity. You ask them, “Did you use the person’s political orientation to help decide?” and they say, “No no no no, I was reading through the policies, I’m really all about the policies.” Of course, then if you flip which one is which – who proposed which policy – the judgments also flip. And we have multiple lines of investigation to show that people cannot easily recognize when they’re doing this. It is much easier for us to recognize the biases in others than it is to recognize the biases in ourselves – because that internal experience, what I think I’m using to make my judgments, is so compelling, because it is my experience. And the factors that might be influencing that judgment without me recognizing it are ones that I cannot see.
And so I want to close with just one implication of this, which is that there are lots of things that we hold people accountable for in terms of whether they are a good or decent person, a moral person, etc. I am accountable: if someone tells me “you’re biased,” that’s taken as a moral assessment. I have biases about how it is I evaluate information. But from the basic research on how people develop judgments, make decisions, and take in information, it’s very clear that bias is an ordinary part of the mind. We didn’t develop to be unbiased. We developed to have biases in order to make rapid assessments of the information that we have so that we can make efficient decisions. And those mechanisms of our mind can be very over-enthusiastic. They can lead us to develop associations and assumptions that are different than our conscious values.
So the only way that we can really meet that standard – this sense that being biased means you’re a bad person – is if we ignore the fact that we are humans. We can’t meet it. So that’s not the right standard for assessing whether people are good people or not, unless we want to just accept that we are all bad people. Instead, I think we need to think about things like bias in decision making and judgment and assessment as things that we know we need external help to correct, right. I wear contacts or glasses in order to correct my vision, and I don’t think that it makes me a bad person that I need these external devices to help me see well, with appropriate vision. I think we need to cultivate the sense that the same is true with our biases. I can’t see my own biases, but I don’t wish to have the ones that are counter to my values, so what I need is for you, for others, for the social structure and system, to help me identify them. Because it is in how I respond to them that I can really tangle with them, and then make decisions about how to address those issues or how I make decisions. And I’m going to end there.
So I think I have used up all the time that I have, but I’d be happy to have more discussion after Cailin’s presentation. Thank you.
Geoff Mitelman: Thank you, Brian, that was fascinating. There’s a lot to unpack here as well. So as people have questions, you can type them in the chat as we start to think through some of these questions. But right now I’ll turn it over to you, Ms. O’Connor – to Cailin.
Cailin O’Connor: Hi, well thanks and delighted to be here. Yes, I have some slides, I’ll just share my screen. Okay, yeah so I thought Brian – so Brian just talked some about the mechanisms by which people come to trust or not trust other people, endorse claims that other people make, believe information other people share with them, and a little bit about the psychological level on which that happens.
So I thought I’d kind of use that as a springboard to talk about what happens when we extend that to a societal level, so what happens when we have a lot of people using some of these kinds of judgments. Now, Geoff mentioned this, but – I’m also having trouble with my slide – just to give a little background, I have this book that I wrote in 2019 with James Weatherall, who is my colleague here in the Department of Logic and Philosophy of Science; he’s also my husband. So in this book, we try to look a lot at why people have false beliefs and why false beliefs spread in our societies. And we used historical cases, and we also used models and simulations to study this propagation or spread of false beliefs. In particular, we use what are called network models. So what I’m going to talk about today is some of the work we did for this book, some of this modeling work. And what I mean by this is that we used computer simulations to try to represent what happens in real human groups when people are passing ideas or information to each other. Now, in the book, we look at a bunch of different kinds of factors that matter to this process by which we come to adopt or spread information. So we look at things that you might think of as endogenous, sort of based in human psychology, things like “Who do we trust and why?” “Who’s part of our in-group?” “Who do we create social connections with? Who do we try to conform to?” We also look at exogenous factors like “Who’s trying to mislead us or get us to believe different things?” And the stuff I’ll talk about today is more in this former vein, especially having to do with trust.
So we’ll discuss a process by which people can become misinformed, even in the absence of what we might think of as traditional misinformation and disinformation. This is a process driven by these dynamics of trust and mistrust related to some of the things Brian was talking about. All right, so I want to give a little example of the kind of thing I’m talking about, and then sort of give you a hint of what our network modeling of this kind of example looks like.
So in February, this article was published by a Chinese research team looking at different treatments for COVID, and in particular looking at how antivirals would possibly inhibit the success of the virus in-vitro, so in the lab. And they found that, among other things, remdesivir and chloroquine did sort of inhibit the coronavirus. Now, on the basis of this finding and also general knowledge of antivirals, many doctors across the world, as the pandemic heated up, started prescribing hydroxychloroquine – a newer, kind of safer version of chloroquine. You can see in this graph, through March and into early April, in some countries up to 83% of doctors were regularly prescribing this. And in basically every country surveyed, at least some good chunk of doctors were prescribing hydroxychloroquine. So this has come out in the medical community as something that’s possibly efficacious, and something we should take seriously, basically.
Now, at the same time, something sort of stranger started happening on social media, and especially Twitter. So on March 11, a Twitter user in California, or sorry, living in China posted this tweet: “Chloroquine will keep most people out of hospital. The US hasn’t learned about that yet.” He tweeted at a blockchain investor, James Todaro. On March 13, Todaro posted that there is “growing evidence of Chloroquine as a […] effective treatment for COVID,” and also links this Google doc that he and another person who’s a lawyer, and then another kind of random person – none of them are doctors – made, arguing that chloroquine is a good treatment for COVID.
Then Elon Musk, a few days later – this is the billionaire investor – tweets about this document and then follows up with a tweet saying “Well, maybe hydroxychloroquine, a sort of newer version of chloroquine, would be even better.” A few days after that, Fox News has Greg Rigano, this lawyer, one of the document’s authors, on to talk about hydroxychloroquine. And then shortly after that, Donald Trump, the president of the US, starts endorsing hydroxychloroquine as, basically, a miracle drug. And then from there, what happened was that hydroxychloroquine and its efficacy became this locus of disagreement, argumentation, anger and polarization in the US. So for example, all over Twitter, you would see people sort of on Left Twitter posting about how this is just a nonsense thing to think will work, and people on Right Twitter posting about how it’s a very effective, or very promising, drug. And so basically we end up in this situation where we have polarization over this. Okay, by polarization what I’m referring to are situations where you have some groups in a society that are holding stable, mutually exclusive beliefs, often even in the face of debate and discussion. So here we have liberals becoming very skeptical of hydroxychloroquine even though at that point it was actually a perfectly promising drug treatment for COVID, and conservatives becoming big boosters of hydroxychloroquine, even though it was not yet fully tested. And notice that this polarization about this kind of new emerging belief, it didn’t just happen out of nowhere. It happened along these existing party lines. And so it became part of what we might call “belief factions,” groups in which those involved hold multiple, shared, polarized beliefs.
Now, a lot of social scientists have studied belief factions before, and what causes them, and why they form. Now, many of these previous explanations have appealed to what we might call “shared ideology” or other common causes. And in the case of ideology, the idea is something like “Well, we have people who have commitments to different ideologies, these lend themselves to adopting certain beliefs, and also lend themselves to joining certain political parties.” And so it’s this kind of common cause – these features of people and their ideology, or sometimes people will say their genetics or their personality – that gets you these kinds of clusters of beliefs, these clusters of polarized beliefs, these factions. For example, George Lakoff has very influentially argued that in the US, you have conservatives holding to a “strict father” model and liberals to a “nurturing parent” model. And this explains differences in beliefs and opinions about gun violence, about abortion, about taxation, about basically everything. Now, I do think this kind of ideology is important in understanding why we end up with these factions and all this polarization, but I don’t think it explains every kind of case. And the reason the hydroxychloroquine case is really interesting is that there isn’t really any clear ideological reason to believe that it works or doesn’t work for COVID. And so it doesn’t seem to fall under this sort of explanation.
So in thinking about this case, my co-author Jim Weatherall and I have appealed to a paper we wrote before, where we used network models to try to argue that maybe, if we just look at the way people trust each other and mistrust each other, this can explain how you can get this kind of endogenous emergence of belief factions, even without ideology. So a lot of researchers have modeled polarization in the past, and what these models usually include is a feature of the following sort: similarity of belief and opinion determines the level of social influence. So I tend to be more influenced by those with whom I already hold similar beliefs or opinions, or who are in some other way part of my in-group. And in different models, this is instantiated in different ways – the way you set up the network, the way people change their beliefs – but there’s always some kind of feature like this. And this can cause feedback loops where people start to diverge in opinion, and then as they do they stop influencing each other. And then you get groups with different ideas or different beliefs that don’t influence each other anymore.
So we thought maybe we can take a feature like this and use it to understand how you get these factions, multiple polarized beliefs following along these kinds of lines of division. So we looked at models that we tuned specifically to scientific beliefs, things like what we see emerging in the hydroxychloroquine case: Does this drug work or not work? We assume that the individuals in our models use similarity of belief to determine how much they trust information shared by others. So you tell me something – do I trust it? Do I take that up? Well, in making that decision I ask: “Do I already share beliefs or opinions with you?” And we ask, “Can we get these kinds of correlated, polarized beliefs just emerging endogenously as a result of social trust?” And we find that we can. In fact, we find this happens robustly across many versions of this kind of model.
And so just to give a kind of idea of what we see in the models: we have a network model where the nodes of the network are little individuals – you can see them represented here by these little circles – and then the edges of the network are social connections between them, so connections that would allow them to share ideas or information. And we assume there are already some existing political or belief divides. So some of these people are red and some of them are yellow, and they all kind of recognize that. And then we introduce some new idea, say, that hydroxychloroquine is an effective treatment for COVID. And so at first maybe some people in different parts of the network pick this up and think “Oh, this sounds plausible” and start spreading it, but what we show is that as this belief starts to spread, some individuals who recognize that the others who believe it aren’t in their in-group will kind of drop it. And at the end, people within one group will be much more likely to pick it up from those in their own in-group. And you can end up with this kind of situation emerging where now a belief has attached itself to one group or the other, not because there’s some ideological reason, but just because of the way people trust those who they consider part of their own group.
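For readers who want to see the trust dynamic described above in a runnable form, here is a minimal, hypothetical sketch in Python. It is not the published O’Connor and Weatherall model (which uses Bayesian agents on structured networks); the group labels, out-group discount, and learning rate here are invented purely for illustration. The only point it encodes is the one made above: listeners discount testimony in proportion to belief distance and group membership, which is the feedback that can let a new claim attach itself to one group.

```python
import random

# Hypothetical toy sketch of trust-weighted belief spread between two groups.
# Not the published model; parameter names and values are invented for illustration.

NUM_AGENTS = 40          # 20 "red" agents and 20 "yellow" agents
ROUNDS = 500             # rounds of shared testimony
OUT_GROUP_WEIGHT = 0.2   # assumed discount applied to an out-group speaker
LEARNING_RATE = 0.25     # how far a listener moves toward a trusted report

random.seed(0)

agents = [
    {"group": "red" if i < NUM_AGENTS // 2 else "yellow",
     "credence": random.random()}          # initial belief in the new claim
    for i in range(NUM_AGENTS)
]

def trust(listener, speaker):
    """Weight a speaker's testimony by belief similarity and shared group."""
    similarity = 1.0 - abs(listener["credence"] - speaker["credence"])
    group_factor = 1.0 if listener["group"] == speaker["group"] else OUT_GROUP_WEIGHT
    return similarity * group_factor

for _ in range(ROUNDS):
    speaker = random.choice(agents)
    # The speaker shares a noisy report shaped by their own credence in the claim.
    evidence = 1.0 if random.random() < speaker["credence"] else 0.0
    for listener in agents:
        if listener is speaker:
            continue
        weight = trust(listener, speaker)
        # Listeners move toward the report in proportion to how much they trust it.
        listener["credence"] += LEARNING_RATE * weight * (evidence - listener["credence"])

for group in ("red", "yellow"):
    beliefs = [a["credence"] for a in agents if a["group"] == group]
    print(group, "mean credence:", round(sum(beliefs) / len(beliefs), 2))
```

Depending on the random seed, the two group means can settle at noticeably different values even though every agent hears the same public stream of testimony – a crude illustration of the point that factions can emerge from trust dynamics alone, without any ideological reason to accept or reject the claim.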
Now, importantly, a number of the models that we talk about in this paper can be interpreted as tracking what happens when social identity, rather than belief, plays a similar role in trust. And here we’re talking about factors like race or gender or religion or nationality. So when people trust those who share their race or their gender or their religion, and mistrust those who don’t, you can have the same kind of situation where a belief can just, by accident of history, become attached to one identity group but not another, and be mistrusted by the other identity group.
So I’ll wrap up with a few takeaways here. One: in cases where no one is spreading misinformation, people can nonetheless end up misinformed because of this kind of belief factionalization. We don’t always have to appeal to ideology to explain these kinds of factions, though it may still play a role, and in lots of cases it probably does. When polarizing figures like Donald Trump promote specific scientific claims, especially without proper vetting, they run the risk of polarizing these claims. So in a nation that’s highly polarized, with public figures who are very divisive, when they start to talk about science, then scientific beliefs – which on the face of it there’s no reason ought to be polarized – can end up glomming on to certain groups. And along those lines, hydroxychloroquine seems to have been an example of just this. And it could easily have been some other drug, for example remdesivir, right. And with that I will wrap up, and thank you.
Geoff Mitelman: Thank you both. Cailin, that was fascinating, and I think what you brought up is really interesting, at least from a Jewish perspective of the relationship between what we call emet and emunah – emet being truth and emunah being faithfulness or trust. And in Judaism, the idea of faith, it’s not belief with no basis. Emunah is really “I trust this person.” And so – and we see that a lot in English too – what is the truth? It’s really the truth of the people that we trust. And that becomes a more social connection there. And so I think that’s a fascinating presentation here.
I’d love to hear – a couple questions are coming through the chat; by the way, if you have questions, you can send them either directly to me or through the chat here. But I would love to hear from both of you a little bit about why it’s so hard to break out of these factions, right. Like, some of it is that’s what our experience is, some of it is the social network there. Are there any tools or skills that you’ve seen that have actually been able to break us out of that, or are we just kind of stuck in “We’re going to be trusting the people that we trust, okay, our implicit experience is going to tell us what we are, we can’t break out of that”? Are there any tools that we can have to be able to talk to people that we may not agree with, or may come at us with a level of disagreement?
Cailin O’Connor: Well Brian is the psychologist, maybe he is the more relevant expert here.
Brian Nosek: Well, I’m happy to say that we don’t know how best to do this yet. Actually, I’m not happy to say that, but I think that’s where the reality is: this is a very hard problem, the latter part of what you described. There is some evidence that people obviously can revise their beliefs, but it’s an uphill battle for a lot of these things, particularly when they’re reinforced by the social networks that we sit in, like Cailin describes. Because the factor that seems to be most effective in that is to be confronted with potential false beliefs from a trusted source. And if the source is already not trusted, that puts a huge barrier in front of how it is that I will ultimately revise my beliefs.
So a number of the interventions that are being attempted – and there’s still an emerging evidence base here – are about how it is that you can first establish trust. So there are these factions, there are differences in opinions – “I already don’t trust that group; anything that the person from the other side says, I’m going to distrust.”
Okay, we know that’s there. Let’s find another way to get people aligned. And you know, there are different ways of doing that. One is to appeal to a superordinate goal: “Look, the aliens are coming, we’re going to get destroyed, we all need to band together.” And so if we can all get together around “the aliens are coming,” maybe then we can start to see our common interests. Obviously, there are more direct ways of “Let’s find something that we do align on.” “Oh, we’re both parents, okay, let’s start with talking to each other about the fact that we’re both parents. We both wrestle with these particular issues. Okay.”
If one can build trust on superordinate or specific goals, then there’s a lot more opportunity to start to introduce those areas of existing conflict. Then the hard part is to figure out, “Well, where is the right evidence? And what is the right evidence?” And then potentially, third parties can be very useful in parsing through that. So when science, as a community, for example, is trusted as a nonpartisan source, then it has some opportunity to push back against that factionalism. But if science gets politicized, oh boy, right, that’s another, bigger challenge. Obviously, in the current climate, that’s a big issue – the extent to which particular claims are seen as scientists taking a side rather than just trying to figure it out.
Cailin O’Connor: Yeah, related to what Brian’s saying – so with this issue where you’re trying to convince someone of something and they don’t trust you, one way people also talk about getting around this is by finding trusted spokespeople – you know, to build a bridge, especially if you have a community of people who have false beliefs. So for example, if you look at anti-vaxxing behavior and communicating data about vaccines, well, you’re going to want very different people to talk to the New Agers in California, to talk to Somali-Americans in Minneapolis, and to talk to Orthodox Jewish communities in Brooklyn. You’re going to want people who would be trusted by each of those communities to be the ones talking to people there.
Geoff Mitelman: So there was a question directed towards Brian, but I think it’s also for both of you, that I think is a really interesting question, which is about morality. Because moral judgment – I love Jonathan Haidt’s work, which says that “morality binds and blinds.” And very often, truth and justice can sometimes be in conflict, because if we say “I am working for justice right now,” it can come from an ideological perspective and prevent us from being able to pursue truth. So the question is: what inhibits or improves our ability – not just others’ ability, but our own ability – to remove the moral judgment-making in the process of identifying and addressing unwanted biases? If I think I understand the question, you know, as you were saying, if you’re wearing your glasses, that’s not a moral judgment that you’re a bad person, or if you wear contacts. How can we remove the moralizing from the language that’s needed to help us identify and address these unwanted biases?
Brian Nosek: Yeah, that is a great question, and it’s another one where the evidence is not definitive, but there are a variety of different investigations that are converging on some potential for addressing this. The first is to normalize it. Almost all conversation about bias is accusatory: “I am saying to you that you have bias.” And inevitably that’s dividing, right – a way to divide people. An alternative strategy is to normalize it in the sense of “Here is something that happens to everybody.” And when I do a full version of the presentation that I give, the first half of it is all about normalizing the fact that these are ordinary operations of the mind, and in fact we need to have these biases in order to learn effectively. The same cognitive architecture that helps me learn that “when milk smells like that, it tastes bad and makes me feel sick” is the cognitive architecture that gets superimposed on “all women aren’t appropriate for those kinds of occupations, because I don’t see them in those occupations,” or “that must be true about that social group because that’s what I heard.” Even when I would reject that information consciously, I can’t help but form those associations automatically. And that’s where it becomes problematic.
So normalizing it as a universal experience can at minimum give people some opportunity to be able to say “You know what, at night, when I’m all alone walking on the road, and I see someone that’s African-American, I do feel uncomfortable. I don’t feel great about that, but I feel it.” And that normalizing of being able to raise that discussion can be very helpful in promoting some degree of conversation. Obviously that’s not sufficient, because that’s only one element, but these are very challenging: when it is one judging another, the barriers are going to be raised and be difficult to bridge. Cailin?
Geoff Mitelman: Playing off of that, this is also about our own desire to moralize. You know, it’s a language – and I hear this a lot from rabbis, understandably, right. Like, rabbis are moral voices, right. We talk about right and wrong, and justice and fighting. And I think that there are a lot of rabbis, understandably, who are trying to deal with fighting for racial justice. And so what is the way that we’re motivated by our own desire to moralize, and, I think, use the language of right and wrong? And when is that appropriate to use, and when is that not appropriate to use?
Cailin O’Connor: I mean, this is something that’s super interesting and complex from the philosophical perspective, because if you look at our systems of ethics, usually they wrap up a lot of things. So they wrap up all together, “You’ve harmed someone, you’re morally blameworthy, you were deserving of punishment, and this was like a choice.” So when all those things go together in our normal systems, it gets very hard to talk about implicit bias and people’s blameworthiness, and know how bad they should feel, and should they be punished for their behaviors, and what responsibility do they have.
Geoff Mitelman: Right, that makes us uncomfortable, right? That becomes a fact that makes you say “Well, wait a second, how much free will is there?” And that’s something on Rosh Hashanah or Yom Kippur, how much we beat our breasts and go “Al chet shechatanu,” for the sin of this, for the sin of this, for the sin of this. Someone had phrased it that we are now living in a world that requires atonement but doesn’t allow for forgiveness, which I think is an interesting framing that I’m seeing. And Brian, you wanted to jump in.
Brian Nosek: Yeah, that’s a very interesting point. Mine’s responding to something slightly earlier in this little thread, which is: there is a persisting recognition that focusing on behavior, on works, can be more effective in getting people to be able to have a conversation than focusing on personal identity. So this is pervasive – like in the clinical literature, if you have someone who is resistant to some kind of treatment or intervention, keep the focus on the behavior. “You are good, you’re a great person, you’re trying to do the best you can; this behavior is problematic, let’s focus on how we address that behavior” is a way to separate, in some ways, that moral evaluation from what it is they’re doing and its problematic consequences.
Geoff Mitelman: And in some ways there’s sort of, like, the virtue signaling that happens – being able to quickly say “Here, let me show how good I am” without actually doing the work, which is often not very public; it’s often done under the surface. There’s a question that came up about the distinction between the bottom-up “hardware” decisions versus the top-down “software” decisions, and what’s the distinction between a bias and a heuristic? If something is hard-wired into our perceptual system, is there any way to control or influence the way it works, or at least control those mechanisms that we know are going to be there from the start?
Brian Nosek: Yeah, that’s a very good question. So I think what we would ordinarily interpret as the sources of bias are in the top-down mechanisms, the systems that are providing interpretation or filling in gaps or resolving ambiguities from the bits and pieces that are coming in through our sensory systems, just to understand what’s going on, right, so when we hear “The stuffy nose has problems,” did we hear “the stuffy nose has problems,” or “the stuff he knows has problems,” right? It depends on the context of how it is we parse that information. And so if the person was sneezing, okay, it must be about the nose rather than the “knows.” So the interpretation part is where we end up having the biases occur in how we process that information.
For whether we can influence or control it, this is the hard problem, and it’s a hard problem because we rarely know or can see when it’s happening. We only experience the outputs. There’s this basic understanding in the psychological literature that people don’t observe their mental processes, they experience them. So if you can’t observe them, it’s very hard to know the source of that in order to know how to intervene. But that doesn’t mean that we’re slaves to them and we can’t do anything about it. What it instead means is that we have to look at how it is we make decisions, what is the process by which we’re going to approach these, in order to identify where our biases might exist.
So one of my favorite illustrations of that, in discussion, is deliberately having opposing points of view, right. So let’s invent a situation. It’s a hiring committee. We’re down to two finalist candidates. And there’s four of us that are on the hiring committee. Instead of us just starting to advocate for the person that we think is right for the job, we say “The two of you are going to advocate for candidate A, the two of you are going to advocate for candidate B, you’ll argue for about 10 minutes, and then you’ll switch positions and advocate for the other.”
And what a process like that can do is help to elicit the assumptions that people bring to the table, even if they already have a predilection. If their role is “advocate for this man,” then they might be more likely to observe that “oh, it’s really focused on her education but his experience. Should I be applying the same rule to her and him for that, or do we want to be considering them differently?” Making that stuff explicit then makes it a lot easier to deal with.
And of course, you know, in philosophy there is a lot more practice, among philosophers, of playing with and challenging a conception, of being at a remove from what they think the answer is and being able to debate it. And that may be linked to more effective ways of dealing with these kinds of problems.
Cailin O’Connor: Just being a woman in philosophy, I’m not sure it is.
Geoff Mitelman: I’m curious to be able to hear a little bit more about how the subconscious manifests itself in interactions here. So you were talking, Cailin, about how there’s a network here, and people are not even aware of, “Wait a second, I’m going to make hydroxychloroquine into a partisan issue,” or mask-wearing, or different pieces like that. But there are all these subconscious factors that are really having a huge, significant impact on everything from Israeli politics, to American politics, to cancel culture – you know, all these different things that we’re not even aware of, and we’re sort of yelling past each other. What are the ways in which we might be able to integrate those things into a new kind of worldview, as opposed to saying “No, I don’t want to believe this, no, I don’t want to believe this?” How can we bubble up a little bit of the more conscious thinking to be able to help us say “Ah, wait, no, I wanna find out those contradictory perspectives”?
Cailin O’Connor: Well, I know less about, you know, sort of subconscious factors and making them conscious, but I think part of what you’re talking about are these situations where you have all these people in a society interacting in many different ways, with their individual psychologies, and then sometimes you get these emergent higher-level phenomena – like a whole group of people who happen to hold the same beliefs about gun violence, and also hydroxychloroquine – kind of emerging out of this individual psychology. And when you’re thinking about intervening in these more emergent factors, you often have to think about interventions and change at multiple different levels. So you can think about individual psychology and how we can tweak and nudge based on the way people, you know, make their decisions. We can also think about network structures – you know, how do we build our social media platforms so that people come into contact with different types of information or different kinds of other people? How do we even set up things like how many times a tweet can be shared? How is that going to change the way these processes happen? And so at each of these levels, there are a lot of details that people have kind of worked out about how you can try to fix or improve informational uptake or trust between others. But yeah, I’d like to emphasize, it’s not all about just what we as individuals can do, but also about sort of shaping the spaces we’re in, online and in person, to set us up to succeed.
Geoff Mitelman: Yeah, I think that’s a key thing. Brian, you were going to jump in.
Brian Nosek: Just going to add, there’s a very interesting line of work on the role of perspective-taking in changing points of view. In fact, I have a link, I’ll put it in the chat. This is a link to a paper by David Broockman and Josh Kalla, who did door-to-door canvassing. And the intervention was to see if they could reduce transphobia among people. And the intervention involved a conversation – sort of a structured conversation – in order to encourage someone to take the perspective of someone who was a trans man or trans woman. And they would do so in a way that connected the person they were talking to, their own experience, with an experience that someone who is trans might have. And by listening to that person and working with where it is they came from, what biases they have experienced, or things that they struggled with, and seeing parallels in someone who’s different from them, whose experience they did not understand, it helped to make connections and had a long-term effect on reducing bias about that group. So there may be potential in extending those to other kinds of ways of drawing perspectives from people that we disagree with at the outset.
Geoff Mitelman: Yeah. And I think that’s how relationships come before ideology. So there’s one more question I want to ask before the last one: What’s the relative importance or strength of social media versus traditional media? Does one exert a relatively stronger effect at the individual level or at the network level?
Cailin O’Connor: So I’d say pulling these two apart is actually very difficult, because now it’s the case that the vast majority of traditional media is spread on social media. And most people don’t sit down and open a newspaper. I mean, some people do, but many, many people are reading all the articles they see through a filter of social media. So they go on Facebook or Twitter or Instagram or whatever, and they see what articles have been shared piecemeal by others on their social media accounts. And so they’re really – it’s kind of just an intermeshed process. Now, that said, you could definitely pull apart the kinds of effects that these two different sorts of media can have. So people in traditional media have control over the sort of content they produce, and this content can be more or less polarizing, more or less accurate, it can be, you know, sensationalist or not sensationalist, and all these things can really matter to how misleading it is. It can also shape how much it’s shared on social media and who reads it. So what I’m saying is these things are intertwined.
One thing that a lot of people have been talking about, in the sort of theory related to this, is that the existence of social media is creating new pressures for journalists, further pressures to make things that are clickbaity and attention-grabbing, because people aren’t [just] sitting down to one newspaper. And that can cause problems. And then, you know, the kind of effects that social media can have really have to do with the way people’s news feeds are curated by algorithms, the way people are connected to others on social media sites, who are they prompted to connect with or not connect with.
Geoff Mitelman: Yeah, it’s a fascinating piece of this – CGP Grey, who’s a good YouTube educator, made something a couple of years ago called “This Video Will Make You Angry,” about how the things that get you angry and riled up get more clicks and more shares, and that’s, you know, where the monetization comes in, as opposed to “Here’s a nuanced, thoughtful, reflective piece that I think is going to help elevate the discourse” – which is not necessarily going to generate hundreds of millions of dollars for a media conglomerate.
Cailin O’Connor: Yeah, and to some degree there has always been pressure on journalists to make things that would make you angry or stir up your emotions or have shock value or seem really novel or sensational. We might say that that pressure has just been – the dial’s been turned up by social media.
Geoff Mitelman: So the last question I want to ask each of you is this: We’ve got a bunch of rabbis here, and I’m sure a lot of rabbis who are going to watch this afterwards as well. They’re going to be looking for data, stories, and ideas to bring their communities together, to inspire them, and to challenge them. What should they be thinking about from a content perspective – how to find accurate scientific information – but also from a process perspective? What will help them make sure they don’t fall into the trap of sharing and disseminating false information, make sure that what they’re saying is factual, and present it in a way that leaves room for them to change their minds a little bit?
Cailin O’Connor: One thing I would say is that, when it comes to sharing and taking up information online, it’s useful to have a point of view that is a little bit like the one we discussed around bias. You’re going to mess up, you’re going to be wrong sometimes; sometimes you’re going to trust fake news, sometimes you might be influenced by the agents of Russia. It’s a good thing to sit with that and be willing to say, “Okay, sometimes I’m going to mess up. I should get out there and say when I was wrong, I should correct my mistakes, I shouldn’t be defensive when others challenge me about things I posted on social media. And of course, I should do my best to learn from these mistakes and get better at recognizing what content is misleading or not.”
And another thing I would advocate for, especially for people in a position where you’re communicating with many trusted members of a congregation, is that it’s all of our responsibility to clear the litter out of our informational environments – to create a situation where people in our country are able to get good information and form beliefs on the basis of it, so that it isn’t such a struggle. We should really be thinking of this as a responsibility all of us share: in the same way that environmental stewardship is our responsibility, informational environmental stewardship is also our responsibility.
Brian Nosek: I think that’s an awesome answer. I’ll just add that a big challenge we face is that most of the information we get, especially about scientific claims – things related to pandemics, etc. – is based on testimony. It isn’t my direct experience that informs my judgment; it’s what others have said, what others have found, or what the evidence suggests. So a big part of how we evaluate information is the trustworthiness of the source. And how do we recognize the trustworthiness of a source? That’s a hard problem, because we’re already in those factions: we have sources we trust and sources we don’t. But if we want to poke at whether the sources we currently trust really are trustworthy, and whether the ones we don’t might be more trustworthy than we think, there are a couple of things we can do: identify what it is that makes something trustworthy, and test our sources against that. For example, does the source ever correct itself and say, “I messed this up”? If that never happens, that’s a red flag for whether the source is trustworthy. Part of why some media outlets retain credibility is that they actually post corrections and errors. Another one: does the source speak in terms of uncertainty? Every scientific claim has some degree of uncertainty. If a source says “This is true, this is false” and provides none of that qualifying information, that’s another red flag – maybe I need to check other sources. And of course, if there is convergent evidence across sources with different degrees of credibility – if sources I trust and sources I don’t trust are saying the same thing, as rare as that might be these days – that’s certainly a good signal for the trustworthiness of an individual piece of information.
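To make those red flags concrete – purely as a hypothetical illustration, not a validated instrument or anything Nosek proposed as a tool – here is a minimal Python checklist encoding the three signals he names (corrections, uncertainty language, and corroboration across differently-aligned sources). The field names and the example outlet are made up.

```python
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    posts_corrections: bool       # does it ever say "we got this wrong"?
    reports_uncertainty: bool     # does it qualify claims ("preliminary", "may")?
    corroborated_elsewhere: bool  # do independent, differently-aligned sources agree?

def red_flags(src):
    """Return the trustworthiness red flags this source trips, following the
    heuristics described above (all criteria are illustrative)."""
    flags = []
    if not src.posts_corrections:
        flags.append("never issues corrections")
    if not src.reports_uncertainty:
        flags.append("states claims without uncertainty or qualification")
    if not src.corroborated_elsewhere:
        flags.append("not corroborated by independent sources")
    return flags

if __name__ == "__main__":
    outlet = Source("Example Outlet", posts_corrections=True,
                    reports_uncertainty=False, corroborated_elsewhere=True)
    print(outlet.name, "red flags:", red_flags(outlet) or "none")
```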
Geoff Mitelman: Yeah, really, really important. And there’s one last question that came up that I think is a wonderful way to end, because beforehand we were talking a little bit about how Daniel Kahneman, one of the founders of the study of these kinds of biases and heuristics, said, “Yeah, I spend all this time studying them, and I fall into them all the time.” So this is a question for the two of you: given all that you know, where do you each wrestle with the very phenomena that you study? What can be gleaned from your own need not to fall into the false-information trap, and not to fall into bias? Can you share, for just a moment or two, a little bit of your own personal struggles? Given that this is your professional work, what happens when you’re with your family and your friends, and in your own social media bubble?
Brian Nosek: Yeah, I have an experience very consonant with what Danny describes as his, which is regularly being confronted with my own biases in judgment – “Why did you ask him the question rather than asking her? She’s the one that raised the point.” “Oh yeah. I didn’t even recognize that I did that.”
These [are the] sorts of little bits of exposure that remind me of the potential that I might be making these pre-judgments. So for me, the key thing I have worked to internalize (and it is work) is, when someone introduces the possibility that I’ve been biased, not to take a defensive stance, but instead to take the approach of: “Really? Tell me more. What is it that I did?” Even if they are making a judgment and being accusatory about it, just trying to understand it both defuses the conflict in the immediate situation and gives me an opportunity to test it against where my values are. If nothing else, that is the one thing I have found to be useful – both for reassuring myself that I’m not just a terrible person all the time, and for reaffirming the commitment to do good work: ultimately it’s by my behavior over the long term that I will assess my integrity and value, rather than by the in-the-moment “Did I make the wrong call?”
Geoff Mitelman: Thank you.
Cailin O’Connor: Yeah. Applying this kind of thinking to misinformation and fake news, I guess I just assume that I will continue to fall for it sometimes, and indeed I have in the past, and do continue to, and when that happens, I try to likewise take a sort of moment and be like “Okay, it doesn’t mean there’s anything necessarily wrong with me, that’s normal.” And what I’ve got to do is think about “Okay, what would I do differently next time?” And also go ahead and announce, like, “I was wrong, people are wrong. I’m amending my post, turns out this thing was false.”
I also try to confront myself a lot with “How are my social identity markers changing who I trust, and why?” You know, am I rejecting information because it was shared by someone I know is a conservative, whereas I’m a liberal? Or am I ignoring information from someone who just seems culturally very different from me? I never think I’m going to get to a point where I’ll be perfect with respect to that, but by continually trying to assess when my biases are coming into play, hopefully I will continue to get a little better.
Geoff Mitelman: Well, that’s getting better without getting “good.” There’s actually a wonderful book by Dolly Chugh, The Person You Mean to Be, about how trying to be a good person can actually be kind of counterproductive – because then, well, what happens if I fail? But if I’m trying to be a better person, to do a little bit better this time, I’m not quite as self-judging, and it’s like, “Right, I’m human!”
So that’s a wonderful way to close. First, I want to thank our presenters, Professor Brian Nosek and Professor Cailin O’Connor – thank you both for your thoughtfulness, your insights, and, I think even more importantly, for all the work you’ve been doing leading up to today, because you spent so many years researching and disseminating this. So thank you for the insights and the wisdom. Thank you to everyone who participated here; we’re going to be sending out a short post-program survey, probably later this afternoon or first thing tomorrow morning – it will be very short, because we’d love your feedback. And we’re hoping to do our third webinar in August, about how COVID-19 is impacting our society as a whole. We’re finalizing all of that right now, so we’ll let you know when it’s locked in. And again, just a reminder that Sinai and Synapses has an open grant opportunity for our community to explore Judaism and science with $3,600, with some mentorship and guidance from both us and the AAAS. If there are other questions that come up, feel free to reach out – I think you have our email address – and I will be able to relay them to Professor O’Connor and Professor Nosek. So thank you everyone, and thank you Brian, thank you Cailin, for an absolutely fascinating conversation.