New ways of meeting and keeping in contact with each other, such as social media, present us with a whole new set of information on which we can base our judgments of others. How have social rituals, like remembering birthdays, been changed by the enhanced prosthetic memory that digital communication affords us? How do we decide who is and isn’t trustworthy in this environment? And finally, what happens when the object of our judgment is a computer or robot – how do we assess trustworthiness then?
We spoke with Judith Donath of Harvard University's Berkman-Klein Center for Internet and Society about trust, signaling and deception, and their ability both to build and to break flourishing communities.
This program is part of the project "Science Education for Jewish Professionals," a series of webinars run in partnership with the American Association for the Advancement of Science's program of Dialogue on Science, Ethics and Religion and with Sinai and Synapses, hosted by Clal – The National Jewish Center for Learning and Leadership, and funded by the John Templeton Foundation.
Rabbi Geoff Mitelman: Welcome, everyone, to the second of two webinars – a webinar series run by Sinai and Synapses, the AAAS Dialogue on Science, Ethics, and Religion, and Clal, the National Jewish Center for Learning and Leadership, about using science as a Jewish professional. My name is Rabbi Geoff Mitelman, and I'm the founding director of Sinai and Synapses, and with me here today is Judith Donath, someone I have had the great pleasure of learning from and with about the interface of humans and technology. We'll be talking this afternoon about "Costly Truths and Valuable Deceptions: How Communication Evolves in a Rapidly Changing World." Before I officially introduce Judith, I want to give a few words of thanks and then turn it over to our partner here, the AAAS. We have to thank the AAAS, Clal and the John Templeton Foundation for funding this wonderful series of webinars and this whole initiative about science education for Jewish professionals.
So, Judith Donath synthesizes knowledge from urban design, evolutionary biology and cognitive science to design innovative interfaces for online communities and virtual identities. She's the author of The Social Machine: Designs for Living Online (MIT Press) and is known for her writing on identity, interface design and social communication. Formerly director of the MIT Media Lab's Sociable Media Group, she's the creator of many pioneering online social applications; currently she's an advisor at Harvard's Berkman-Klein Center and is working on a book about technology, trust and deception. She received her doctoral and master's degrees in Media Arts and Sciences from MIT, and a bachelor's degree in history from Yale University.
This presentation will focus on the importance of these topics in human flourishing, which is the focus of this series of webinars. So Dr. Donath, I'm going to turn it over to you, and we're excited to learn from you.
Judith Donath: All right, well thank you very much. So we’ll see at the end whether you think it’s about enhancing human flourishing or diminishing it.
Prosthetic Memory
So, let's start with this. This is a fairly old technology – the type of hand-written date book that was very common until recently. And one of the things I just want to note here is towards the bottom, where it says "birthday party." This is from, I think, sometime in the '80s. I remember, year after year, getting a new calendar and writing in all the birthdays. It took a lot of effort, but one of the things it rewarded was that if somebody said "happy birthday" to you, you knew that they had actually made a fair amount of effort to remember your birthday.
Several years ago, when I was still teaching at the Media Lab, one of my students made a project that was a little bit like Twitter, in that you would make these different entries and people would follow you, but instead of using text, it used graphics. You could make a graphic about anything and share it with people, and you'd follow different people. What I want to point out here is this one on the upper left that says "birthday." It was made by one of the girls who was using this technology, and what it shows is how people found out about her birthday – this is in 2008. Two people found out about her birthday through her, a couple through another friend, two from her calendar, two actually remembered, and the rest all got it from Facebook.
So this is a fairly big change in how people remember something like birthdays. It doesn't necessarily sound like a big deal, but it's an example of the ways that a technology can have much bigger repercussions for how we communicate than one would necessarily recognize. Today, for those of us who are on Facebook at least – which is now, you know, a fairly large percentage of the world's population – you get a reminder every day of which of your friends has a birthday. There's a good likelihood that you've had the experience of having tons of people send you a happy birthday greeting.
And one of the things this has done is that it has certainly made it more efficient to remember things like birthdays, but at the same time it has removed a lot of the meaning from it, because back in the day, when you had to remember the birthday, it was a signal that you had cared enough to find out someone's birthday, to remember it, and to get in touch with them.
Today, for instance, there are programs people can sign up for that say "send all my friends happy birthday greetings on their birthdays." You certainly never miss a birthday again, but in the end, the actual meaning has been eroded. It's very easy, if I asked you "is this a good thing or a bad thing," given the way I just told that story about the technology, to say "well, this is just another example of technology ruining things. All these birthday greetings are meaningless. Technology has made this sort of thing empty." And that certainly is one way of looking at it.
On the other hand, if you look at the ways people actually use this – some people do use it automatically, but many, many people say that they are in touch with hundreds of people they would otherwise have lost touch with, because of technology such as Facebook. And one of the functions of these things is that it provides a catalyst for people to talk to each other. If I haven't been in touch with someone for several years, and they've written "happy birthday" to me, it would be hard for me to start a conversation out of the blue, but I might think, "oh, you know, I should get in touch with them." It gives me that excuse to do it.
So even as a social intervention, it has changed how the ritual of the birthday functions. It has changed the meaning of remembering. It has made it easier to be in touch, while making remembering less important. So we can look at the technology as a change, but it is not necessarily for the better or for the worse. And you could tell that story either way, but the reality lies in the somewhat more complicated middle.
How do people make sense of each other?
So I’d like to talk a little bit about how I got into this kind of work, and the big picture that I’m interested in is in this question of “How does technology change society?”. And as Rabbi Mitelman mentioned, my background originally is in history, but I’ve also worked for many years as a designer, doing experimental social interfaces, social visualizations, etc.
And the big questions that I’m interested in with new technologies, the underlying ones, are: what do people really want to do? What are the things they want to know about each other? How do they want to interact? What are they trying to achieve? What makes society function well? And in terms of the technologies specifically, how do they actually use the technologies? How does it affect their interaction?
And for me, in my work, the key underlying question is around identity. How do people make sense of each other? When you meet someone, as you get to know somebody, what are the ways – when you think about it, we don’t really see that much of each other. You know, when you meet someone, you see something of their parents, we’ve known them for a little while, you know a little bit about what they do, but we somehow manage to form a much bigger, more well-rounded impression of each other through things we fill in, inferences that we make. We try and control how others see us by trying to manage that impression.
And I'm interested both in what that process of filling out the view we have of each other is, and in how it affects how we behave with each other – in particular, how technology changes this. Because if you think about the ways we meet or interact with people online or through different technologies, the cues we get to see of each other are quite different. So while the people themselves are the same, the cues that we are able to perceive are quite different. And so in my design work, I've worked a lot on figuring out what are some interesting ways to add to that information, and to understand the balance of how much people should be able to control their impression, versus when that veers into the realm of deception.
And that tension, between making an impression and assessing it, I think, underlies a great deal of how we communicate, in ways that aren’t always well understood. In general, we want to make the most advantageous impression that we can. Now, by advantageous, that doesn’t always mean a good impression. There are times when somebody might want to seem intimidating or needy. It doesn’t necessarily mean you actually seem “at your best,” but there is some advantageous impression one is trying to make, and the others are trying to figure out what is actually going on, or what they can expect.
And this tension exists wherever there are different goals among people. It can be very, very subtle – you know, just the tensions between friends with slightly different goals – or it can be a matter of life and death. On the subtle side, when you see a friend, you may greet them quickly, say "how are you," and your friend says, you know, "everything's great." And you look at them and think to yourself, "hmm, you look a little bit tired." That's a simple, everyday example – we have encounters like this many, many times a day, somewhere between "maybe they're just a little tired, it's not a big deal" [and] "maybe there's something bothering them that they've chosen to hide."
Costly Signaling
But we do have a sort of tension between the impression we want to make and how others perceive us. On the other hand, this kind of tension between impression and perception can make even a life-and-death difference. Another seemingly mundane example: your doorbell rings, and it's someone who says, "Hi, I'm from the gas company, and we need to check the meter readings in your basement." Do you let this person in?
You know, most of the time, this is a completely innocuous interaction. But a couple of years ago, I know, in Queens, New York, there were cases of people coming in to rob houses using the excuse that they were meter readers. So there, you have a false impression where mistakenly believing it could have very dire consequences.
In the world of most communication research, these issues around impression-making and assessment – and particularly the deception element of it – haven't been all that central. If you look at fields such as semiotics and linguistics, it really doesn't come into play at all.
But one field where it has been very central is evolutionary biology. And there, in the world of animals, there are certainly very similar tensions. Here's an example, and it's quite an interesting one. This is a gazelle, and it's doing something called "stotting." This is the reaction that Thomson's gazelles and some other gazelles have when they see a predator. It turns out that the strongest and fastest gazelles do not do what you would expect them to do when they spot, say, a pack of dogs that prey on them approaching. They don't run off as fast as possible. The fastest ones stay in place and jump up and down for quite a while.
And what's interesting is that most of the predators don't run after the ones that are doing that, even though you would think, "this is a rather tantalizing meal sitting here – other gazelles are running away, and here's one that's just staying in place." The way biologists understand this now is that stotting is what's called a "costly signal," which means that the gazelle is signaling that it is among the strongest and the fastest, and it is doing that by spending – wasting – a resource that it has.
In this case, the resource is time. If you are a slower gazelle, you don't have that time to stay in one place; you have to take off and run. And it turns out that the predators do seem to understand this – it's not something inborn, but something they've learned from experience – and they don't go after the ones that are stotting. They know that once they get close enough, those gazelles will run off, and they are so fast that they can escape.
What's also interesting is that it's beneficial to all the gazelles, because it also works very well as an alarm call. Now, if you're a slower gazelle, you can't afford to do that. And so something like that – where you signal that you have a resource by wasting some of it, to show how abundant your supply is – is one example of a form of communication that is inherently reliable.
And so, what I'm interested in doing is looking at how we can take this model of communication – which is effectively an economic model of what keeps communication honest enough to function – and apply it to a larger picture of human communication. I'll give you a very quick introduction, because I think we don't have time for a full introduction to the theory of signaling – that's why I'm writing a book. But I'll give you some of the basics, and then we're going to look at some examples around ethics and technology.
So first, if we think about "what is a signal?" – most of what people or animals want to know about each other are qualities that are not immediately apparent. In this case, it's how fast you are. It could be how kind you are, how smart you are, or any kind of intention that you have. We rely instead on perceivable signals of these otherwise hidden qualities. And signaling theory is interested in the relationship between the signal and the quality – in particular, why some signals are reliable indicators of a quality, whereas others may still be used but are less reliable. Basically, a signal is reliable if it is affordable to give for someone who honestly has the quality, and unaffordable for those who don't actually have it.
And there are all kinds of other elements that go into this economics, because sometimes we use signals where there isn't such a strict, hard cutoff, and so some signals are less reliable; they rely on the receivers to punish those who turn out not to be as honest as the receivers would like. And some deception may simply be accepted, because it may not be so bad to be deceived. While this theory has been used a lot in biology, it's starting to come into the world of technology as well.
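One schematic way to write down this economics of honesty – my own illustrative formulation, not one given in the talk – is as a pair of cost-benefit inequalities: a signal stays reliable when senders who really have the quality can afford to give it and senders who lack it cannot.

```latex
% Schematic costly-signaling condition (illustrative sketch only).
% B   = benefit of being believed to have the quality
% C_H = cost of giving the signal for a sender who has the quality
%       (e.g. a fast gazelle stotting)
% C_L = cost of giving the signal for a sender who lacks it
%       (e.g. a slow gazelle that stots and gets caught)
%
% The signal is reliable when it pays for the honest sender but not for the bluffer:
\[
  B - C_H > 0 \qquad \text{and} \qquad B - C_L < 0 ,
\]
% i.e. the cost must separate the two types: C_H < B < C_L.
% Stotting fits this pattern because standing in place costs a slow gazelle
% its life, while it costs a fast gazelle only a little time.
```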
I think one of the more interesting examples comes from a researcher named Richard Sosis, who has looked into religious communities – especially Jewish communities, but a whole number of different religions – and whose work is on the evolution of religion. He's interested in the question, "What is the value of the costly signal?"
One of the things he looks at is why particular religious groups have what are called "costly badges." So, for instance, here's an example of an Orthodox community where people dress very differently from everyone around them. If you're in Israel on a swelteringly hot day and there are a number of people wearing fur hats around you, it's clearly some kind of costly signal of commitment to a particular way of identifying yourself.
And he looks at how the particular costliness varies with the amount of trust within these different groups. He has a paper with probably one of my favorite titles ever: "Why are Jewish synagogue services so long?" His explanation, among other things, is that the length of the service seems very long if you are an outsider. If you're not learned in Jewish theology, and you walk into a synagogue on a Saturday, and it's five hours in Hebrew, much of it seemingly mumbled, it will be five hours in which you're sitting there fairly confused. Even if you understand the Hebrew but don't really know the material in depth, it will still feel like it's going on for a very long time.
If, on the other hand, you have spent your life studying the minutiae of the Mishna and the Torah, and you know every week's reading very well, and you know the slight differences in the prayers that are said from one holiday to another, those five hours go much faster, because there's so much more of interest to you. And so one of his theories is that, as a costly signal, this is one of the ways the service differentiates insiders from outsiders, because the time spent is perceived very differently by those who are actually insiders and those who are not.
And I think you can see similar examples outside the world of religion. If you go to, say, an avant-garde music performance – at BAM or somewhere – and there are five hours of fairly atonal opera, then if you have studied this material and really know it, and you're familiar with the story behind it, that can make the five hours interesting. For somebody who doesn't particularly like it, but, say, wants to appear to be part of that particular subculture, it's a much costlier experience in terms of how the time is spent.
The Demand for Lies
And so what I want to talk about for the rest of our time is a question I've gotten very interested in over the past several years, and particularly in the last year, though some of the technology issues have been going on for quite some time. If you look at this kind of economics – what I've been calling the economics of honesty – a lot of the assumptions, certainly in the world of biology, but even in anthropology, and among most people who look at issues around honesty, are that being deceptive can be beneficial for the deceiver. There are all kinds of reasons why you might want to lie; it could be very advantageous to convince other people that you're nicer, better, smarter than you are, that your intentions are good, all kinds of things. But we generally assume that it is not to someone's advantage to be deceived, and that the recipients of any kind of dishonesty are not happy about it.
But looking at a lot of things in human behavior, I've come to feel that that's a fairly naive view of how human society actually operates and how people work. And so I've become interested in looking at cases where it appears that there is almost a demand for being deceived, and asking what the benefit of that can be. So, we'll start with our contemporary politics, and then we're going to move on to robots.
I’m sure many of you may remember that, last January, Sean Spicer said “this was the largest audience to ever witness an inauguration, period.” And the picture on the left is the inauguration he is speaking of, and the picture on the right is the same place four years previously, at Obama’s inauguration.
Clearly, this is not the largest audience ever to witness an inauguration. We can understand why one might want others to think their inauguration was the largest ever – it says they're important, they're big, they're popular. But given the clear evidence that this simply was not true, why would they not only say it but then double down on it? And why would many of Trump's followers agree with that assessment and repeat the story that this was the largest inauguration ever, when it clearly wasn't true?
Or another example is Kellyanne Conway, who, defending some of the immigration rules, cited "the Bowling Green Massacre." It turned out there was never any Bowling Green Massacre – it was completely made up – and when she was finally forced to admit that it didn't happen, she said it was an "alternative fact." And the whole concept of "alternative facts" is a fairly interesting one. In the world of factual things, there aren't really those kinds of alternatives. It's not an alternative fact which of these crowds was bigger.
But it turns out that making these kinds of statements worked quite well. It got a lot of people very fired up in defending them, and it didn't seem to budge very many people, certainly within Trump's base, even though from another perspective it looks like they had been lied to. So why would that be – why would it still be popular, and why would they continue to pass these stories on?
And I think that signaling theory gives a very interesting way of interpreting this, which is to think about how it functions as a way of bonding people to a particular group. If I say "the sky is blue," and you look up and say "yep, the sky is blue," we're in agreement – we've agreed on a fact – but it doesn't really bind us, because it doesn't say anything about us as a group, or about having something in common, to agree on something that is plainly true.
But on the other hand, if on a day like today I look up and say "the sky is purple with yellow stripes," and you look up and say, "why yes, it is purple with yellow stripes," then we're a group – we're kind of like a cult. We have a belief that other people do not share. And if I can convince you to go along with my story – even if on some level you know it's not true, but we all decide to agree on it, and we find some way to get past what's often called the cognitive dissonance between our perception and the facts as the group leader tells us to believe them – it's a very, very strong bond. Because you now have a world of beliefs that you share and outsiders do not. Those beliefs bind you together and tend to repel outsiders, so you create a very close-knit group. It also functions as a sort of loyalty test: the leader or leaders of the group can come up with different tales that are at odds with your perception, and you will go along with what your leader tells you to perceive, as opposed to what your senses tell you.
And I think one of the other pieces that makes this very relevant to a lot of the issues around fake stories that we have been seeing in the last year is the difference in how things like news are used by people. When you're reading a newspaper by yourself, it's not a social experience; you're reading for information. You may be talking to people about the stories later, but it's primarily a personal activity, about gaining information.
As news has moved online, and sharing news online has become a more important activity, it changes the types of stories that are useful to people. When you are looking to have this kind of group bonding, and there are fake news stories that fit a particular subgroup's beliefs, and the members of that group post those stories, it's actually more valuable from a group-formation standpoint that the stories are fake than real – because if it's a real story, it doesn't really bond the group together; all kinds of people might agree with it or discuss it. It's not a binding piece. But if you post something that's at odds with what anyone outside that group believes, you mark yourself as a member of that group, and it's a way of publicly displaying your affiliation.
And so, there are a number of issues with the way we deal with information online, and how that has changed news and news perception and led to this kind of epidemic of fake news. But one issue that I think hasn't gotten enough attention is this particular phenomenon of social sharing and the value of fake news for that purpose.
Robot “Minds”
I'd like to go on from here and look at a somewhat different example of our appetite for deception, which is the world of robots and whether robots have minds. Again, this is something I've been interested in for quite some time, though it's become increasingly newsworthy in the last couple of years – you're seeing a lot of stories now about AI, and "when will AIs become intelligent enough to actually have feelings? Is that possible?", and we're also seeing a lot of robot-like things in the home. So it's an issue that has been a big philosophical question for a long time – you can look back to the story of the Golem.
But in 1950, Alan Turing – who was sort of the father of computer science – wrote a somewhat more speculative paper, in a philosophy journal called Mind, on the question of whether computers could ever be intelligent. I'm sure many of you have heard of the Turing Test, which asks whether you can tell that you're talking to a computer rather than a human. That is a test he devised because, pretty quickly into his paper, he said, "well, the question of whether machines can think is meaningless." We really can't see whether they're thinking – you can't look into another's mind – so we can replace the question with a behavioral test: you're just seeing typed answers, which might be coming from a computer or from a person, and you ask it questions. Once there is a computer that can fool you into thinking it's human – when you're not sure whether your answers are being produced by a computer or a human – then we have to say that computers can think.
And some years after he wrote this, a computer scientist at MIT named Joseph Weizenbaum wrote a program called ELIZA. ELIZA was a chatbot, and this is an example of how ELIZA functions – there are versions of ELIZA online today, and you can try it yourself. ELIZA described herself as a psychotherapist: you type things at her, and you get some kind of reaction back.
Now, this was made in the mid-1960s, and Weizenbaum did not make it in order to fool people. He made it in order to show that the Turing Test was not a good substitute for the question "Can machines think?". The type of computer he was using to get this sort of dialogue going probably has less power than your phone – probably less than your watch today. What he wanted to show was that something very simple (it's essentially just a sentence parser) could give the illusion of being a person very, very easily, particularly if you give it the right framing. This one introduced itself as a psychotherapist, and that gave an excuse for what was basically a parser that took your sentences and applied a few sets of rules for turning your statement into a question, or your answer into another question. But coming from a so-called psychotherapist, that might seem believable.
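To give a sense of how little machinery can sustain that illusion, here is a minimal ELIZA-style sketch in Python – an illustrative toy of my own, not Weizenbaum's actual program – that does nothing but match a few patterns and reflect the user's words back as a question.

```python
import random
import re

# A toy ELIZA-style "therapist" (an illustrative sketch, not Weizenbaum's code):
# a handful of regex rules that turn the user's statement back into a question,
# plus a tiny pronoun swap so "my job" comes back as "your job".

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "i", "your": "my", "are": "am"}

RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"my (.*)",     ["Tell me more about your {0}.", "Why do you say your {0}?"]),
    (r"(.*)",        ["Please go on.", "Can you elaborate on that?"]),
]

def reflect(fragment: str) -> str:
    # Swap first- and second-person words in the captured fragment.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(sentence: str) -> str:
    text = sentence.lower().strip(" .!?")
    for pattern, responses in RULES:
        match = re.match(pattern, text)
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(responses).format(*groups)
    return "I see."  # unreachable because of the catch-all rule, kept as a safety net

if __name__ == "__main__":
    print(respond("I feel anxious about my job"))
    # e.g. "Why do you feel anxious about your job?"
```

The point is the same one Weizenbaum was making: a dozen string-matching rules, with no understanding at all, are enough to produce replies that feel attentive.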
And so his intention was that people would see this and say, "Oh, you're right, this is not a good way to judge machine intelligence – forget about that." That was not at all the reaction he got. One of the things he wrote afterwards was that his secretary, who had watched him program it and knew how it worked, asked him to leave the room when she tried it, because she wanted to talk to it.
And then other people took the program up and said, "This is wonderful, we can now use machines as psychotherapists. This is great – everyone can have their own robot therapist." Weizenbaum was so dismayed by this reaction that he stopped working in the field of computer science entirely – he was a full professor at MIT – and devoted the rest of his career there to writing about, basically, the dangers of dehumanization that technology could cause. This was so disturbing to him because it was an example of us being so willing to turn ourselves over to something that was clearly a machine, to see our humanity as so machine-like, and to discard the value of anything such as empathy in a relationship, just for the experience of getting the words right.
This is part of the program behind ELIZA. What's interesting is that very similar work is going on today. You can add avatars and so on, but systems are now being built that are, in many ways, essentially more sophisticated parsing systems – they certainly aren't empathic in any way – to make, in this case, therapists for veterans. There are so many veterans who need psychotherapy, and there's research work asking, "well, if people can be treated by a computer that can appear to be empathic, it's much cheaper, they can afford it – is it better than nothing?"
And so part of what I want to raise here is this: we can take Weizenbaum's stand and say, "this is terrible, it's unethical, actual empathy is really important, and trying to fool people into thinking they're talking to something that cares about them is morally wrong." Or we can take the view of the people who are making these systems, many of whom feel that they are serving humanity very well. They're saying, "well, therapy is very, very important, a lot of people can't afford it, there aren't enough therapists to go around; if we can make a system that helps people – even if on some level they know they're talking to a machine, it's easy to get caught up in relating to it and to feel that it is really paying attention to you – then if it's helpful, it's just helpful." And that's where the underlying ethics come in: on that view, we should be doing this, we should be making this sort of thing available to people.
And then there are a number of other questions to ask about what the role of empathy is in this type of experience. I once gave a talk about this to an audience of psychotherapists, which was very interesting, and one of the questions was: what does the patient really know about what is going on in the mind of the therapist? What is our sense of the truth and falseness in that relationship? Because in the ideal case, the patient is sitting and talking to somebody who is deeply empathic and thinking about them; but on the other hand, what if the patient is actually sitting there talking to somebody who's thinking to themselves, "oh my god, do I have to listen to this person whine about this again and again and again? I'm so tired of listening to them – when will this hour be over?"
So even the question of empathy doesn't simply stop with saying, you know, "there is a human here." Somebody who finds that a very self-conscious or excruciating experience, or who is very insecure and always afraid that that's the reality, might feel much more comfortable talking to a robot. For them, that would be a much better solution.
So there are a number of questions here about the ethics of the different technologies that induce us to have some kind of personal relationship with something that can talk to us in a humanlike way.