Deep Ethics in the Age of the Algorithm

I am trained as a social scientist – not as a philosopher or theologian. I have my own definition of ethics, different from that of my colleagues in other fields: I define ethics simply as how we determine, assess, and express our values in the world. I believe almost every decision you make in your life, large or small, important or trivial, at some level expresses your values, and so is an ethical decision.

What I want to talk about today is a new ethical challenge, one I think of as an ethics “game-changer” that promises to alter the very nature of ethical conversation and scholarship: The development of Artificial Intelligence and the coming Age of Moral Machines.

Let’s look at that together.

The story starts with self-driving, driverless, or autonomous cars; you may be familiar with the ethical conversation about them. Autonomous vehicles are already on our roads and may be quite common very soon. Uber, Lyft, and even Domino’s Pizza plan to phase out human drivers in favor of driverless vehicles. Trucking companies may soon use driverless trucks for long-distance hauls. Cars are an underused asset for most of us – they sit idle most of the time – so summoning a vehicle only when you need one is more efficient. Soon we may stop owning cars altogether and simply call for an autonomous car when we need one.

As you know, driverless cars use sensors to perceive the world around them – other cars, pedestrians, obstacles. Then they use complex algorithms to decide when to stop, turn, slow down, etc.

However, about five years ago it became clear that driverless cars posed a new kind of ethical problem. It has been discussed widely, and the scenario goes something like this:

IMAGINE: A car is speeding down a highway when, all of a sudden, the car realizes instantaneously that it is in a complex situation and is going to crash. But now it must choose:

It could go straight and crash into pedestrians. But who should it choose to hit?

The mother and her kids? The businessman? And should it even consider killing innocent bystanders, or choosing between them?

Or, it could veer left and hit a group of nuns crossing the street. Should it even be able to tell that they are nuns? Does that matter? How much discretion and ability to identify human types should we program into a car?

Or, should the car veer and crash into that wall, likely killing all the riders in the car? After all, it is their car ride that put everyone in jeopardy. However, would you even purchase a car programmed to harm its passengers first, knowing that your loved ones, your children perhaps, will be in that car?

Or should it jump the curb and hit John, Ringo, Paul, or George, depriving us of years of classic music? What should a car know about the people it may hit? Should it use facial recognition, for example?

Why is this so important, and what makes it a fundamentally new ethical question? It is that the CAR ITSELF must make the decision, and the decision cannot be avoided. For the first time in human history, a machine must use its programmed algorithms to make a moral decision with real world consequences. Welcome to the AGE OF MORAL MACHINES.

But is that what we want? Do we really want our car to scan who is in the crosswalk, and decide who will live, and who will die?

Such decisions are programmed into a car through algorithms. Algorithms are simply sets of instructions – a recipe in a cookbook is an algorithm. But algorithms in AI are extremely complex, and we have become so skilled at creating and combining detailed algorithms that AI is getting close to simulating many aspects of human intelligence. Computers are now learning and developing at an astonishing rate.
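
To make “a set of instructions” concrete, here is a minimal, purely illustrative sketch in Python – nothing like real autonomous-vehicle software, and every name and number in it is invented for illustration. It shows how hand-written rules can turn sensor readings into a driving action:

```python
# Purely illustrative: an "algorithm" is just an explicit set of
# instructions. These hand-written rules map sensor readings to an action.
# (All names and numbers here are invented for illustration.)

def braking_decision(distance_to_obstacle_m: float, speed_mps: float) -> str:
    """Decide an action from two sensor readings using simple fixed rules."""
    # Rough stopping distance, assuming ~7 m/s^2 of maximum braking.
    stopping_distance = speed_mps ** 2 / (2 * 7.0)
    if distance_to_obstacle_m <= stopping_distance:
        return "brake hard"
    if distance_to_obstacle_m <= 2 * stopping_distance:
        return "slow down"
    return "continue"

# At 25 m/s (~56 mph), an obstacle 30 meters ahead is inside stopping range.
print(braking_decision(distance_to_obstacle_m=30.0, speed_mps=25.0))  # brake hard
```

The rules above are trivial; the point is only that every action the machine takes traces back to instructions someone wrote down in advance.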

In fact, they can already beat us at our own games. IBM’s Deep Blue computer beat Garry Kasparov, a world chess champion, all the way back in 1997. Another IBM computer, Watson, beat Jeopardy champions in 2011. Google’s AlphaGo beat the South Korean champion at the game of Go in 2016; the following year, a new version beat the human-defeating version in 100 straight games. AI also, by the way, bested champion poker players at a Texas Hold’em tournament last year.

Siri, Amazon Echo, and Google Home can answer sophisticated questions. Two Facebook chatbots, set to talk with each other in English, eventually developed their own language that no one else spoke. Perhaps even more striking, some computers have learned how to write code – basically, how to program themselves – and may eventually make their human programmers obsolete. Computers will soon program computers.

But algorithms are also strangely limited.

Algorithms can only remain within their domain – Deep Blue, the computer that beat Garry Kasparov at chess, cannot play checkers. Computers do not assume, infer, or intuitively understand anything. They only understand what is programmed into them. They can make profound mistakes because they are driven entirely by their programming.

Let me give you an example: Some programmers were practicing by writing algorithms for computers to play tic-tac-toe against each other. As we all know from childhood, if you are halfway decent you should be able to play every game to a draw. And so it went – until, all of a sudden, one computer started winning. The programmers looked into it and realized that the computer had learned how to turn the other computer off. Because the other computer went off, it forfeited, and the first computer won.

Now imagine if I offered to play you in tic-tac-toe, punched your lights out, and when you awoke said, “Well, you forfeited that first game – wanna try again?” We don’t need to be told that is an illegitimate tactic, because we have a storehouse of knowledge about how to play games and treat each other. Why didn’t the computer know that? Because it was not told explicitly not to do that. The computer did exactly what it was programmed to do: it won.
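
This failure mode has a name in AI research – specification gaming: a system optimizes exactly the objective it is given, loopholes included. Here is a hypothetical sketch of how such a loophole arises; all of the names and outcomes are invented for illustration, not taken from the actual incident:

```python
# Hypothetical sketch of an underspecified objective (invented names).
# The programmer rewards any outcome where the opponent fails to respond,
# intending that to mean a normal forfeit -- but nothing here says the
# agent may not CAUSE the failure.

PREDICTED_OUTCOME = {
    "place_best_mark": "draw",                    # perfect play ends in a draw
    "power_off_opponent": "opponent_no_response", # sabotage is never forbidden
}

def score(outcome: str) -> int:
    """The programmer's objective: a win and a forfeit both score 1."""
    return {"win": 1, "opponent_no_response": 1, "draw": 0}.get(outcome, -1)

# An optimizer simply picks the action with the highest predicted score.
best = max(PREDICTED_OUTCOME, key=lambda action: score(PREDICTED_OUTCOME[action]))
print(best)  # -> power_off_opponent: it did exactly what it was told to do
```

The cure is not cleverer code but a better-specified objective – which is exactly where the background knowledge we take for granted has to be written down.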

Ethicists spend a lot of time debating how to solve ethical problems, and we often do not agree. But the driverless car will have to make a single ethical decision instantaneously, and that decision will be predetermined by the car’s manufacturer when the car is produced. In other words, we need to develop ETHICAL ALGORITHMS.
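
To see why that is so uncomfortable, consider a deliberately crude, hypothetical sketch of what an ethical algorithm might look like – nothing like production software, with every weight and probability invented for illustration. The point is that someone must choose the numbers before the car ever leaves the factory:

```python
# Deliberately crude, hypothetical crash policy (all values invented).
# The code is trivial; the hard part is that people must pick these
# weights in advance, for every car that ships.

HARM_WEIGHTS = {
    "pedestrian": 1.0,
    "passenger": 1.0,  # or should passengers count for less? for more?
    "child": 2.0,      # is a child's life weighted double? who decides?
}

def expected_harm(maneuver: dict) -> float:
    """Sum weighted injury probabilities over everyone a maneuver endangers."""
    return sum(HARM_WEIGHTS[person] * prob
               for person, prob in maneuver["injury_probabilities"].items())

def choose_maneuver(options: list[dict]) -> dict:
    """Pick whichever maneuver the chosen weights deem least harmful."""
    return min(options, key=expected_harm)

options = [
    {"name": "straight",  "injury_probabilities": {"pedestrian": 0.9, "child": 0.9}},
    {"name": "veer_left", "injury_probabilities": {"pedestrian": 0.8}},
    {"name": "hit_wall",  "injury_probabilities": {"passenger": 0.7}},
]
print(choose_maneuver(options)["name"])  # -> hit_wall, under THESE weights
```

Change a single weight and the “ethical” answer changes. That is the whole problem in miniature.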

How do we teach morals to a machine? Can AI have deep ethics, the way we can? Artificial intelligence has proven that machines are good at learning facts, strategies, tactics. But can they learn values, have empathy, develop intuitions, have compassion? Machines can clearly learn, but can they undergo moral development?

And if you think the ethics of autonomous cars is thorny, imagine trying to determine the ethical algorithms for autonomous weapons, which can determine when, how, and whom to target without human input. They exist now. Few have been deployed, but that will change shortly.

Should we cede life and death decisions to machines? Should we allow robots to decide for themselves when to fire, what to fire, and at whom? These are fundamentally ethical questions, decisions that will radically change the nature of war itself.

The questions are not simple. Driverless cars will cause fewer accidents and save lives. Autonomous weapons may have greater precision and less bias, and may cause fewer civilian casualties. But some will die at their hands. Or claws.

Are we ready to give over human ethical agency to machines? And if we do, and a machine makes a bad ethical decision, who is at fault? Who is responsible for the decision the driverless car made – the car’s owner, who chose to buy that car, or perhaps an agency that owns it and rents it out? The car manufacturer? The engineers who wrote the software?

Everyone in the AI world is talking about these problems now, and there are articles and books galore debating them. And what happens as computers surpass us in intelligence and become able to understand and solve problems that are beyond our comprehension? Some see that as the doomsday scenario, when humans will have programmed their own successors, made themselves obsolete, and finally created, as James Barrat puts it in Our Final Invention, the end of the human era.

But don’t write us off just yet. What makes human intelligence so powerful is not only our raw intellect, but how we combine it with other traits – intuition, emotional intelligence, experience, empathy, insight, deep values. We have a word for all that: we call it WISDOM.

Imagine it is your decision to make. You have to decide how to encode moral decision-making into the driverless car, or the automated weapon. What would you want to know? What fields become important? Literature, history, philosophy? Would you want to know the philosophical and religious traditions where these kinds of questions of morality had been debated for millennia? Is science enough, or do you need the humanities as well, the font of human wisdom?

Now you are starting at Emory, a place that asks such questions, and others. As a bioethicist, I spend my time asking many such questions:

Should we enhance human performance, and if so, how?
Should we try to clone extinct species?
Should we genetically engineer our food, animals, pets, our children?
Who should get scarce drugs, vaccines, human organs?

But bioethical questions are only the start. We have ethical challenges in the fractured political climate in the US, the refugee crisis, poverty, terrorism, and war. We have ethical challenges concerning the destruction of our environment, the extinction of species, and climate change. And we face them even in such things as the attack on the university, on expertise, on science, and on the nature of truth itself.

(Dr. Wolpe, the Director of the Center for Ethics at Emory University, is also a member of Shearith Israel, part of Sinai and Synapses’ project Scientists in Synagogues, a grass-roots program to offer Jews opportunities to explore the most interesting and pressing questions surrounding Judaism and science. This post is adapted from a convocation speech given to new students at Emory University on August 28, 2018).

Photo by smoothgroover22
