Renowned bioethicist and TED speaker Paul Root Wolpe, PhD, gave a presentation at TEDxAtlanta. In his talk, he explores the ethical complexities associated with advances in AI, bioengineering, neuroscience and gene manipulation, and makes the case for why AI will also help us create a new ethical framework to address them. Dr. Wolpe is also a member of Congregation Shearith Israel in Atlanta, a participant in Sinai and Synapses’ project Scientists in Synagogues.
For 35 years I’ve studied the ethics of medicine, science and technology. And right now, we’re at a time when these questions are as important and as crucial for us to engage as at any time in my career. Things like neurotechnology, genetics and biotechnology, and artificial intelligence are not only going to change society as a whole, they are actually going to challenge what it means to be human.
And what I want to do today is talk a little bit about how technology challenges us, how it changes us, how it changes our ethics, and how, ultimately, technology itself is going to give us new tools to deal with those very ethical questions through what I call “deep ethics.”
So these questions of the ethics of technology are in the news every day. You’ve seen them in your news feed, you’ve seen them in the newspapers. Perhaps you read about He Jiankui, the Chinese scientist who genetically altered human embryos; a woman then agreed to have those embryos implanted and gave birth to twin girls, the first human beings ever born whose genomes had been altered by medical science. Or maybe you’ve heard of George Church at Harvard and MIT, probably the greatest living molecular biologist, who has decided to bring back, to reintroduce, an extinct species by cloning the woolly mammoth. He has already recovered woolly mammoth DNA from remains in the Siberian permafrost and spliced those genes into elephant cells, and the idea is eventually to clone the woolly mammoth, gestate it in an elephant, and repopulate the tundra with this extinct species.
Or there was a lot of publicity about artificial intelligence and its bias: facial recognition software that performs differently across races, misidentifying Black women, Asians and others; mortgage AI programs that discriminate against particular zip codes; or an Amazon human resources AI program that was designed to pick the best candidates from the company’s applicant database and ended up disproportionately choosing men. Or consider automated cars: these vehicles are going to be going down the road, and in a crisis they’re going to have to make decisions about what to do – “do I crash into the wall, endangering my passengers, or do I turn left and hit those pedestrians?”
For the first time, we are going to have to create ethical algorithms. That is, we have to teach a vehicle how to make ethical decisions. For the first time, machines will be making ethical decisions that will have a profound impact on human beings.
Or Facebook, which is in the news all the time over privacy issues and the selling of data, and which has had to deal with the fact that misinformation was used to try to sway elections – or just the very issue of our psychological dependence on our PDAs and on social media.
These issues of technology have been with us for a very long time, and they’re only going to get more and more important. We talk about them all the time. You probably talked about many of these kinds of issues over lunch today. We discuss them at the dinner table, over the water cooler, through our Facebook feeds, through Twitter. These kinds of issues are adjudicated in our courtrooms and in our legislatures. In fact, I’m a social scientist, and one of the things we understand in my field is that a society evolves ethically through those very conversations – the millions of conversations we have every day among ourselves, in the courts, in the media. It’s that ongoing, constant ethical conversation that drives societies forward.
So what I want to do now is take a step back and say a word about ethics, because I think we are done a disservice by how we are taught what ethics is and how to think about it. So what is ethics? Ethics is a discussion of values. I teach medical students all the time, and I give them a case and say, “What about this case is ethical?” What they have to extract is which values are at play in this particular situation. However, we are taught the wrong thing about this. What do I mean by that? Everyone in the audience can fill in this sentence: “Ethics is about what is…” right and wrong. That’s what we’re taught. And in fact, it’s true when we’re taught it in 1st grade, 2nd grade – don’t steal, don’t lie, don’t hit your classmates.
But adult ethical decisions are not about right and wrong. That may be a shocking thing to hear. But in fact, if you think about the ethical challenges you’ve had in your life, or ethical conversations you have with your friends, it is almost always about two “right” things in conflict, two values, both of which can’t be honored. When I give the medical students those cases, they sound something like – “a patient wants to discharge himself against medical advice – you don’t think it’s a good idea, what should you do? Should you honor patient autonomy, a patient’s right to make his own decisions, or your obligation to do what’s in the patient’s best interest?”
That’s what adult ethical problems look like – two positive values in conflict. Should a judge show mercy or justice? Your friend cheats on a test – loyalty to a friend or obligation to the institution? Your partner comes out wearing an outfit that’s one size too small (laughter) – honesty or compassion… maybe just self-preservation. (laughter)
These are the kinds of issues that adults face all the time, and not just as individuals – they arise on a broader scale, too. Does a society value individual autonomy or community need? Of course, we argue about that in our political system. How we weigh these values, one against another, is part of what ethics is. And it differentiates one society from another, because there are many, many different ethical values – justice, honesty, duty, transparency – and different societies create different calculi of ethical values. That’s what differs between American society and Japanese society and Apache society: that entire constellation of how we weigh our values.
Now let’s turn back to technology, because my argument is that technology has a profound impact, as we move through time, on how we weigh values. And I want to give you a couple of examples. The first example: plagiarism. We all know what plagiarism is: taking someone else’s intellectual property. What we may not realize is that plagiarism, as we think about it today, was a product of the 17th century. There really wasn’t plagiarism before that. Aristotle, for example, wrote that imitation was exactly what he was hoping for. He hoped that someone would take a sentence of his and say it as if it were their own. He wanted influence, not intellectual property, and that was the way a lot of the products of great minds were thought of before the 17th century.
So what changed? Well, there were a lot of social changes that led to it, but there was also an important technological change: the printing press. Because now, for the first time, you could lay out all of your intellectual ideas in a volume that could be mass-distributed and monetized. So now you had an investment in the ownership of the things you had to say. And that’s how it went for three centuries. That’s what I was taught, and it’s what I started out teaching my students: “Don’t plagiarize – here’s what plagiarism is.”
But then, something changed. (laughter) The modern digital age changed the nature of plagiarism because it changed the technology of plagiarism. In my childhood, or in my young adulthood, I would have to prop a book up and copy it down with a pen, or type it on a typewriter. It was a very intentional act. Now you can copy and paste in a second, or take a stream out of one piece of music, put it into another, and create a mash-up easily with tools all of us have on our computers. It changes how we think about plagiarism. Plagiarism 30 years from now will be very different from what it was 30 years ago, and we in the academy have to begin to think about how we are going to change our lessons about plagiarism.
So that’s an example from history. Now I’m going to give you an example in the future. These are automated weapons. That is, these devices have the capacity to roll into a war theater, choose an enemy combatant, and kill him. Those who advocate for these devices say “look, they’re going to decrease civilian casualties because these devices aren’t scared of being shot, they don’t get nervous, they have really accurate detectors, and they have really accurate shots, and so we will significantly decrease collateral damage.”
But what it also means is that we’re removing human agency from the decision to kill another human being. We are taking the human out of that decision. And automated weapons are just one very specific example – this is an even broader issue. Because if you read the literature in AI, we are moving toward what most AI experts believe will be an era of superintelligent artificial intelligence, where, because of its extraordinary ability to gather information from hundreds of millions, billions, trillions of data points, AI is going to exceed human intelligence. It’s going to be able to understand things we can’t understand, and because we can’t understand them, it can’t explain them to us. And it will begin to make decisions, and we are going to have to decide: do we simply accept those decisions, even though there’s no human agency in them? Or do we fight against that trend and assert what may be inferior decisions, because AI may understand the issue better than we do?
Back in January, Thomas Friedman of The New York Times wrote an article “Everything is Going Deep.” And even though it was only January of 2019, he suggested that the word of 2019 should be “deep,” because we talk about deep AI, and deep machine learning, and deep facial recognition. So what does this word “deep” imply? What it means is the ability to abstract complexity in new ways, to do just what I said AI does, to look at an extraordinary breadth of data and pull new insights from it. We didn’t have the tools to do that before. It used to be that complexity was a problem. Now complexity is a resource, and AI is going to be able to do that in new ways.
What I want to do is argue or suggest that AI is also going to help us with our problems through what I call “deep ethics.” How have we made ethical decisions in the past? Through human insight, through collaboration, through experts writing books and talking – this has been going on for 3,000 years. We still have that, and we’ll always have that, but I want to suggest that there are two new resources we have that might actually help us solve some of the very problems that technology is raising.
The first is what we might think of as collective intelligence. For the first time in human history, we can have conversations across vast numbers of human beings, each one a node, like a neuron in the social mind. People from different cultures, from different backgrounds, through social media, can create what we often call the “hive mind” – the ability to use the resources of human intelligence across the globe to solve problems in ways we never could before.
And our second new resource is artificial intelligence itself. After all, it can look through millions, billions, trillions of items and find commonalities among them that we don’t have the capacity to see. Well, ethical decisions are data. So AI will be able to look through the entire history of ethical conversation. It can look at hundreds, thousands, billions of ethical decisions. What will it find? Will it discover new insights into ethics? I don’t know – that’s the point. It will understand them in a new way. And so we face a technological future of profound ethical challenges. But together, human insight, our collective intelligence, and the new tool of AI can help us all negotiate that technological and ethical future. Thank you.