Artificial intelligence has become an increasingly prominent topic in any conversation about technology and society. Despite the pushback against it in many fields, it has already become an essential part of using a digital device. We are right, however, to slow down and wonder why it is advancing so fast, who has a stake in it, and what we can do to correct its course in the cases where it poses a threat. Most importantly, what happens when AI leaves our computers and digital devices, and plays a role in everyday decisions that for some people could mean life or death?
Dr. Julia Stoyanovich, who is an Associate Professor in the Department of Computer Science & Engineering and at the Center for Data Science, and Director of the Center for Responsible AI at New York University, has been exploring the questions of socially responsible AI and its relationship to human ethics and morality. Her publications include titles such as The Many Facets of Data Equity, Supporting Hard Queries over Probabilistic Preferences, and Taming Technical Bias in Machine Learning Pipelines. In addition to her academic work, research and scholarship, she also teaches courses on responsible data science and the principles of database systems, and has produced a number of works for a general audience, including an animated textbook on AI. She has also held public service positions in New York City on the responsible use of artificial intelligence, algorithms, and public policy.
(This post is part of Sinai and Synapses’ project Scientists in Synagogues, a grass-roots program to offer Jews opportunities to explore the most interesting and pressing questions surrounding Judaism and science. Dr. Stoyanovich’s talk was the opening event of Midway Jewish Center’s series developed with Scientists in Synagogues, Living In The Tensions: Artificial Intelligence and Human Free Will: The Gifts, The Challenges, and Learning to Navigate The In-Between.)
Joel Levenson: So Erev Tov everybody, welcome. Glad to have you all here, thank you for coming out on this cold and rainy night. But the Yankees lost yesterday, so you’re here, right? Oh, applause from the Mets fans. Nonetheless, we’re glad to have you all here with us this evening, for what is a big piece of our adult education program. This fall, Midway Jewish Center received a grant from the Sinai and Synapses initiative, and their program called Scientists in Synagogues, partially funded through the Templeton Foundation, and this is a multi-series initiative we have this fall that you’ll all hear more about later this evening. We’re glad to have you all here with us for this kickoff event. I want to thank two people in particular who helped make this evening come to fruition, and that would be Keith Shafritz and Karen Reiss Medwed.
I’m going to turn it over to Keith in just a moment to introduce our guest speaker Dr. Stoyanovich from the NYU Center for Responsible Artificial Intelligence, but first you should know that in order to put together the grant for the Scientists in Synagogues initiative, I turned to two people who know far more about science than I do. I just got a bachelor’s degree in psychology as my undergraduate degree, but Karen Reiss Medwed was ordained as a rabbi, and then went and got a doctorate of education and is now a teaching professor at Northeastern University. Keith Shafritz is a Professor of Psychology at Hofstra University. So you guys know much more about this kind of stuff than I do. Like I said, I just got my little bachelor’s degree in psychology, which is a science too, for the record. We put a lot of time and a lot of thought into this series, thinking about what we can bring to the community as we re-engage in person in deeper ways.
This past week in synagogue we read Parshat Beresheit, the beginning of the Torah. At another time, we can reflect on the question of the Big Bang: what happened and how did we get here? These deep, big questions reflect on religious life, not only for us but for the whole world, and thinking about the future, thinking about artificial intelligence, thinking about how technology will influence religious life, Jewish life, really demands some deep questions. So thank you, Keith. And thank you, Karen, for helping us to imagine the big questions that we’re going to engage in this evening. In the next part of this series, on Tuesday night, November 15, we’ll have an opportunity to study some big questions, and then through a Shabbat dinner experience on the night of December 9th.
But now let me turn it over to Keith, who’s going to introduce our speaker.
Keith Shafritz: Thank you, Rabbi Levenson. Thank you for encouraging us to write this particular grant. It was a pleasure working with you, and I look forward to working with you and with Karen on future events that we’re going to have with this program.
So for now, it gives me great pleasure to introduce to you all our keynote speaker for this seminar series on artificial intelligence. And that would be Dr. Julia Stoyanovich, who comes to us this evening from New York University (NYU), where she actually holds two academic appointments, first as an Institute Associate Professor in Computer Science at the School of Engineering at NYU, and then also as an Associate Professor of Data Science at the Center for Data Science at NYU.
In addition to those two academic appointments, she is also the Director of the Center for Responsible AI. Her academic work began during her doctoral career, where she earned her doctorate in computer science at Columbia University, and then went on to do a number of academic research fellowships, where her work was sponsored by the National Science Foundation. Her prior and current work is also sponsored by corporate institutions and other private funders. She has a prolific, very prolific curriculum vitae – I’m holding only half of it in my hand. Her vitae includes over 100 academic publications; nearly two dozen of those are gold-standard peer-reviewed publications, which is about twice what I have. So kudos to Dr. Stoyanovich.
She has done, again, work on the responsible use of AI. That has been her research area. Her publications include titles such as The Many Facets of Data Equity, Supporting Hard Queries over Probabilistic Preferences, and Taming Technical Bias in Machine Learning Pipelines, and she’ll be talking about taming some of those biases this evening.
In addition to her academic work, research and scholarship, she also teaches courses, including a course on responsible data science and one on principles of database systems. And in addition to all that academic work, she also has a number of works for a general audience, including an animated textbook on AI, which I think she’ll share part of with us this evening.
She has held public service positions in New York City on the responsible use of artificial intelligence, algorithms, and city public policy. And in addition to that, she has written pieces for The New York Times, The Wall Street Journal, and other newspapers and scholarly outlets. So you can see this is one prolific author who we have invited to give our keynote address. So it gives me great pleasure to turn the podium over to Dr. Stoyanovich for this evening’s keynote address.
Julia Stoyanovich: Welcome, and I’m truly humbled to be here.
And I must say that I feel right at home, and it’s really, really a pleasure to connect with the community. So I’m a Jew from the former Soviet Union, from Russia in particular, and today is actually the birthday of my grandfather, who passed away in 2007. And I consider myself his intellectual descendant. My grandfather was actually a programmer, a computer scientist, back in the ’50s, and very few people of my generation can say that their grandfather was a programmer. He was working on the space program in the Soviet military, not by choice, but I had the privilege to be mentored by him as a child, and this is how I developed my love for math and, subsequently, for computer science. So I just wanted to mention him here, and to remember him and to say his name.
So, I will speak about AI or, as some of us call it, “Al,” right. So today, I hope that we can find out who or what Al actually is.
So the materials that I will use today are based in part on a public education course that I developed together with colleagues called We Are AI: Taking Control of Technology. And we offered this course in person for the first time just this past spring at the Queens Public Library, and hope to have many more offerings now that in-person adult education is possible. And I invite all of you to take a look at this website, github.io/DataResponsibly/WeAreAI, where all course materials are available if you want to learn a little bit more about this.
Right, so what is AI? There does not exist a definition of AI that is universally accepted today. And there are a number of reasons for this. One of the reasons is that this technology really is universal, right – it permeates every aspect of our lives, and how we define it, how we think about it, depends very much on what we use it for. So for our conversation today, I would like to propose that when we say AI, we mean, or refer to, a system in which algorithms use data to make decisions on our behalf, or help us humans make decisions.
And I’m illustrating a couple of AI systems here. I think that you can guess or figure out, based on these pictures, what they are, right. On the top left, I’m showing a smart vacuum – a Roomba, right. So this is an AI that many of us have in our homes. It helps us do tasks that are tedious. And essentially, this is one of the first success stories of artificial intelligence. When I was a college student studying computer science, there was an entire chapter in my AI textbook devoted to vacuum-cleaning planning systems, right. So these systems have been around, and they’re very successful.
Another system that you see here is a chess-playing AI that beat the legendary chess grandmaster Garry Kasparov at his own game, right. And so this is one of the holy grails of AI, really: to have a system that can learn the rules of the game by playing that game, by observing the strategy of an opponent, training itself to become competitive to such an extent as these chess-playing systems have done.
And then there are two other systems that I’m showing here. On the bottom left, an automated, an autonomous, car is depicted, right – an AI that is going to drive a car on our behalf, or maybe not, at some point in the future. And then on the bottom right, I am showing an AI-based hiring system. And I will talk about this a little bit more. I’m very interested personally in this domain of the use of technology, and in particular AI, in hiring – both in how we should regulate these systems and in how we should or should not use them to hire. So we’ll talk about that quite a bit.
Also, before I dive in all the way, I want to say that of course, as a professor, I have way too many slides, but it’s also not my goal to go over all of the slides that I prepared. I would love for us to have a conversation and an interactive session, right, discussing all of these topics that I hope are of interest to all of you as well. So please feel free to speak up, to interrupt me, especially if I forget to ask if there are any questions or any discussion points that you would like to raise.
So are there in fact any questions or discussion points now? Is this how you would think of an AI, or do you have another definition in mind?
So I would like to now step through this definition that I gave. I said that systems use algorithms to process data, and help us make decisions or make decisions on our behalf. So the first component that I will discuss is algorithms. So, what’s an algorithm? An algorithm is a sequence of steps that transforms the input into the output. And I like to explain this concept using a very simple, intuitive algorithm that we all know how to use at home. And that is a recipe for baking bread.
So what’s my algorithm for baking bread? There are five steps here that I am listing on this slide. The first is “prep”: I buy ingredients and measure them. The second is “mix”: I combine yeast, flour and water. The third is “cover”: wait for the dough to rise. Then knead, shape, wait some more, repeat as needed. And then, finally, I will bake my loaf. And of course, this is a schematic recipe, right? I told it to you very, very quickly. But essentially there is a sequence of steps that you follow. And this is a simple example of an algorithm. Ingredients come in, and a loaf of bread comes out.
So the algorithm that I just showed you is a rule-based algorithm. Such algorithms are going to follow the recipe directly, very prescriptively. I will tell my algorithm exactly what ingredients to get, how much to get of which ingredient, how to mix them in what proportion, and also at what temperature to bake, and how long to wait for the loaf of bread to bake, and when to take it out.
So these algorithms are, like I said, rule-based. I know the rules ahead of time, and they’re specified before my algorithm ever runs. And if I know the rules for baking a good loaf of bread, then I can specify them and I can expect to always get the same, or almost the same, result.
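To make the rule-based idea concrete for readers following along, the recipe above can be sketched as a short program. The ingredient amounts, temperature, and timing below are made-up illustrative values, not part of the talk:

```python
# A rule-based "bake bread" algorithm: every step and every parameter
# is fixed ahead of time, so the same input always produces
# (almost) the same output.

def bake_bread(flour_g=500, water_g=350, yeast_g=7,
               oven_temp_c=230, bake_minutes=35):
    steps = []
    steps.append(f"prep: measure {flour_g}g flour, {water_g}g water, {yeast_g}g yeast")
    steps.append("mix: combine yeast, flour and water")
    steps.append("cover: wait for the dough to rise")
    steps.append("knead, shape, wait some more, repeat as needed")
    steps.append(f"bake: {bake_minutes} min at {oven_temp_c}°C")
    return steps

for step in bake_bread():
    print(step)
```

Because the rules are specified before the algorithm ever runs, calling `bake_bread()` twice yields the identical sequence of steps, which is exactly the "same loaf every time" property described above.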
But sometimes we want more variety in the types of bread that come out. Or other times, we may not even know what the exact recipe is, what the steps to follow are. And in that case, we are going to ask our algorithms to learn how to bake bread – so these are learning algorithms, based on trying multiple times and then asking people: in your opinion, did this loaf of bread come out well or not? Should I try to adjust something? What do you think I should adjust?
So these learning algorithms are going to use our, people’s, everyday intelligence to start artificially, based on data, learning the process that we need to undertake to transform the ingredients into a loaf. So, algorithms I discussed already, right. Now, what about data? Data is important to both kinds of algorithms, both rule-based ones and those that learn based on data. And it comes in multiple forms. The first form of data is the input, the ingredients that my algorithm is going to take in, that it will consume. The second type of data is the parameters of the algorithm – how much to take of each ingredient, for example, at what temperature to bake, and how long to wait for the bread to be baked.
And then there is some data that describes the output. These are going to be the objectively measurable properties of the loaf that I have prepared. What is its weight? What is its nutritional value? Can I somehow quantify the degree to which it’s chewy on the inside and crusty on the outside? These are objectively measurable factors.
And then the fourth kind of data is human judgment. That, again, is going to apply to the final result. Human judgment is the type of data that is perhaps the most important here. It’s more important, very often, than the objectively measurable factors about the outcome. It will be something like: does it taste good? Does it look Instagram-worthy? Should I give it a thumbs up or a thumbs down?
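As a toy sketch of how such a learning algorithm differs from a rule-based one, here is a version in which one parameter (bake time) is not fixed in advance but adjusted from thumbs-up/thumbs-down feedback. The tasters and their preferences are entirely invented for illustration:

```python
# A toy "learning" baker: it does not know the right bake time,
# so it tries, collects human judgment (thumbs up / thumbs down),
# and adjusts the parameter until the feedback turns positive.

def taster_feedback(bake_minutes):
    # Hypothetical human judgment: these tasters happen to like
    # loaves baked for roughly 30-40 minutes.
    return "up" if 30 <= bake_minutes <= 40 else "down"

def learn_bake_time(start_minutes=10, attempts=20):
    minutes = start_minutes
    for _ in range(attempts):
        if taster_feedback(minutes) == "down":
            minutes += 3   # loaf rejected: try baking a bit longer
        else:
            break          # loaf approved: keep this setting
    return minutes

print(learn_bake_time())   # settles somewhere the tasters approve of
```

The point of the sketch is the division of labor: the objectively measurable part (the current bake time) lives in the parameters, while the subjective part, whether the loaf is good enough, comes only from human judgment fed back into the loop.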
Audience member: We spoke about algorithms already. So you are speaking about human judgment now, right? Will computers be making those judgments? Is that the question?

Julia Stoyanovich: This is where I’m going, actually.
So here the example that I am using is when a human is baking bread, right. I am not a machine, I am a person; I am executing my algorithm and judging whether I like the result. But the exact same thing is going to apply to a situation where machines are baking. And I will talk about this in a moment. But the main point here is that some things are objectively measurable, while others are not. Do we trust machines, for example, to express judgment on things that are not objectively measurable? I don’t think it can be achieved – I’ll tell you right away. And even if it could be, I don’t think that we should be asking machines to do this, right. And again, you’re already kind of seeing where this is going, right?
This is going to be the magic of AI that we will unpack together today, that not everything is objective. Lots of things should be left to humans to decide, such as whether my loaf is good enough, right. It depends for whom.
Computational thinking is one of these really popular methodologies or frameworks, right, that people have been speaking about. To explain, to support us […] to learn to think like machines. But what I’m talking about here is much more general-purpose. This is your grandmother baking bread, right? Has she ever been taught computational thinking? No, but still all of us understand what an algorithm is. What’s another very simple algorithm? When you get dressed in the morning, you know that you first need to put on your socks and then your shoes, right, not in the opposite order. So this is a rule that we all know intuitively.
Computers do very often make decisions on our behalf. And for us to take control of this technology that is AI, or computers, or whatever we want to call it – machines, we need to agree individually and collectively which types of decisions we’re comfortable leaving up to them. And where do we want to intervene, right, and how can we intervene?
So really, our underlying kind of main idea here is that we need to learn about technology so that we can control it together, so that we can understand where the harms may be or where we lose agency, and how do we control this technology.
But to do this, we need to understand what it is. We need to take the magic out of AI, right. So the connection that I’m wanting to make here is that artificial intelligence sounds amazing, right? What’s not to like about artificial intelligence? But it’s really not that different from the everyday intelligence that we all have. It just generalizes from the everyday intelligence of lots and lots and lots of people – some of them you know, others you don’t know. But ultimately, it’s down to us people, right, to shape it and then to decide what happens with this.
Okay, so algorithms, I think we understand, right? Just a sequence of steps – either it’s prescribed fully or the machine somehow figures out what steps to take depending on what type of bread we like. Different types of data – some of it comes as input, some of it transforms or changes the way that the algorithm executes, and other data characterizes the output, subjectively or objectively.
And then the most interesting piece, and we already started thinking about this and hinting at this, is the decisions, right. So what are the kinds of decisions that our AI is making? In the very first example that I gave here, depicted on the top left, we are baking bread, and we’re asking a human: Do you like the result? Do you like what came out? Are you going to give it a thumbs up? Or thumbs down?
And now we want to start using machines, right, maybe a smart oven, to help us bake. And the question becomes, are we going to give enough data, enough of our experience together, with expressions of, “I like what came out” or “I did not like what came out” to a machine, and then ask that machine to bake on our behalf?
Will we trust the machine to bake something different? For example, I only was baking sourdough before, and now I want this machine to bake baguettes. Do I think that it will do well? Would I trust this machine to also make judgments on my behalf, to decide whether a loaf that it just baked is good enough or not, and then to continue teaching itself? So in other words, what types of decisions are we comfortable leaving up to a machine, to an algorithm, to an AI? This is the biggest question of all here. And this is the question that connects algorithms, data and the overall system of operation.
So returning to my definition here, I already discussed a couple of examples, right, at the start. There is this Roomba – the smart vacuum. And Roomba does planning. It’s a rule-based system. So these rules are given ahead of time to the algorithm, and then it finds itself in a room that it maybe hasn’t seen before, and it needs to decide, using these rules, how to cover the room strategically. Should it go left or right or turn around? What’s a good strategy to take?
The other example we discussed was the chess-playing AI, right. And here also we were talking about a strategy. Part of it may be given in the form of rules at design time, but part of it the system will learn by observing how an opponent plays, and then formulating what a good winning strategy may be. So there is a similarity between the chess playing AI and the Roomba, in that both of these systems are acting strategically.
Now this system – an algorithmic hiring system, right? This is something that we already started to think about, and that I will return to. It has a very different task than the Roomba, the vacuum, or a chess-playing AI. The task that such a system gets is to try and predict, based on a person’s resume or on a conversation that you have with them during their job interview, whether they will, in fact, do well on the job should you hire them, right. So what we’re asking this system to do is to predict the future behavior of a human, and ultimately to predict social outcomes.
And the question here is, first of all, whether or not social outcomes are even possible to predict, and if they are possible to predict, should we be asking machines to make such predictions? Is this ethical, or is this something where we give up agency as people, and is it worthwhile to give up agency in this case?
What do people think about algorithmic hiring? Have you been exposed to such systems? Do you know whether you’ve been exposed to them? That’s a big question, right? There’s actually very little disclosure at the moment. There are lots of companies like this. So they use AI for this, right? They process data.
I’m skeptical, actually, about the system being able to encode our values very precisely. What if the person doesn’t themselves know what they’re looking for? What did they just tell you? “I want somebody who will do really well on the job, and I can’t quite pinpoint it. It’s maybe similar to how Sue does her job, right, but I can’t quite know what it is.”
We will talk about bias. Bias is a very difficult term to define also, right, and bias is unavoidable, ultimately. Bias is not always negative either, right? I mean, there’s statistical bias that tells us how to differentiate the red squares from the green squares, because the red squares usually appear on the right side of the fence or something, right.
So this continuum, from bias that is essentially signal in the data, to bias that is harmful somehow because it reflects something that has been wrong with the world and now reflects itself in the data – this is very difficult to navigate, for people and for machines. And we will discuss this as well.
So AI, perhaps, can be asked to narrow down the pool of candidates based on some characteristics that appear on their resumes, like a bachelor’s degree and a certain grade point average. But it would be very difficult for an AI to actually foresee – maybe impossible – how somebody would perform if they were hired by a company. Because – and I will add – it doesn’t only depend on what is on your resume. It also depends on whether you feel welcomed on the team, for example, right? It depends on the entire environment at the company, the cultural fit. And that also is a difficult term to live with, because usually companies use it to say, “Please hire for me people who are exactly like the people that I’ve hired before.” And these are going to be white men with Ivy League degrees, usually, because that’s the overrepresented demographic in the workforce, right. So when we say “cultural fit,” we are a little bit worried about what that signals. But we can take that as a broader concept as well, right? Are your values aligned with the values of the organization? That’s not necessarily something a resume will say.
A resume is not an objective document. Maybe we can collect a lot of other data to try and relate it with whatever there is on the resume, to then somehow get closer to the truth, right. The thing is, sometimes the truth is not even something we can find, right. For some questions like “What is the birthdate of Albert Einstein, or of my grandfather?”, there is an objective answer. Some questions don’t have objective answers. So this is part of the difficulty here. It’s a limitation of this, right? The data only knows what it knows. And it doesn’t know which part of what it knows is correct, and which isn’t. And it doesn’t know what’s knowable, right? We all are limited to the system within which we operate.
And this actually is going to be a segue to the portion into which I will now jump: whose responsibility is it to make sure that our machines work, and that they work to our benefit, individually and societally?
So what we will start with here, actually, is a positive example. Because otherwise, if it’s all gloom and doom, right, we should just throw out the AI and not use it ever. But this is not the message that I want to send. I am an engineer, I am a technologist, right. I want to build systems that are good for us, individually and collectively. And such systems can be built, right. So our job, or my job as a technologist, is to understand when and how to build such systems, and when and how to use them.
So let me give you a positive example here that concerns the domain of medical imaging, where the use of AI is all the rage, as in many other domains of science and practice. And the example that I’m giving here is of a system called fastMRI, Fast Magnetic Resonance Imaging, that was developed by researchers at NYU Langone Health and Facebook AI. And I like to bring up Facebook here, because Facebook is a platform. They do lots of things that are at odds with our legal regime, with our ethics, but they are not a monolithic entity, right. They do a lot of things that are good also. And this is one example of a project that has been very beneficial that came out of Facebook.
So what is fastMRI? These AI systems take as input some portion of a person’s MRI scan, something that can be collected about 10 times faster than a full MRI. And they extrapolate from that, and they create a semi-synthetic scan that has been shown to be diagnostically interchangeable – just as good as a regular, traditional MRI scan would be – for a range of tasks.
And this is really spectacular, because it allows somebody who, for example, is claustrophobic, to spend a lot less time inside an MRI machine. And it can make the difference between getting an MRI and not getting one. It also helps in situations where MRI machines are in short supply, right. You can serve 10 times more people in this region. And so this is a very, very important breakthrough, right, that people have achieved with the help of data collection and data curation – so checking the data, controlling it, right, making sure it’s clean and reliable – with the help of advanced algorithmic techniques.
And another example that is in this realm as well: there are machines that have been developed, these algorithms, right, that can diagnose particular types of cancers – lung and breast cancers, in particular – just as well, based on the images, as a highly skilled, or the most highly skilled, pathologist would. And the most wonderful thing here is that when you have a team of a human pathologist and an AI, they outperform both humans alone and AI alone in the accuracy of their diagnosis.
So again, these are both tremendous examples, right, where the use of AI can help. What are some of the positive factors that we can glean from these examples? The first is that there is a clear need to improve on the status quo. We need to be able to diagnose diseases better, faster, with less data, more cheaply. And so having this technology at our disposal and not using it is actually unethical, right? Because you are then not helping somebody that you could be helping.
An important factor here, a second important factor, is that we can validate predictions. A person does or does not have a particular type of cancer, and if you spend enough time and enough resources, right – enough attention from a human pathologist or a group of human pathologists – you can actually understand what the ground truth is, what the correct diagnosis should be. And then you can compare what your AI gave you as a diagnosis with that ground truth, so you can validate predictions.
The third positive factor is technical readiness, as I call it. That means that we have sufficient data, we have sophisticated algorithms, and we have sufficient computing power to be able to develop this technology. This is not a pie-in-the-sky kind of a thing. We can do this, given the state of the art in data collection and in technical developments.
Last but not least, we have decision-maker readiness. This goes to the question about who should take responsibility, right. So in these teams, where a human pathologist is taking a suggestion from an AI under advisement, they still understand that the responsibility for the diagnosis is ultimately with them. Right? They are trained in medical ethics. They have been trained in predictive analytics, in data analysis, to a sufficient extent to understand when to take the prediction from an AI and when to question it, when to challenge it and when to override it, right. And of course, this person has very deep domain expertise, right, so they know what they’re talking about. They can challenge the machine. And they understand that the responsibility is theirs.
Now, from here, we are going to look at a couple of examples where things actually go wrong, to try and see what went wrong, exactly, and what we can do to control the possible harms.
So the first example of things going wrong that I’m giving is harmless. Machines are trying to predict the future very often. This is what we’re asking them to do – like to predict whether somebody will do well on the job, right. And the future is difficult to predict, and sometimes even impossible. “Prediction is difficult, especially of the future.” You all know that phrase, right? Actually, I usually poll people: “Who said this – do you know?” Depending on the audience, you can get different answers.
Yogi Berra is one; I usually hear this in the US. Mark Twain is another option. And the third is Niels Bohr, the physicist, Nobel Prize winner. Yep. So presumably all three of them said it; I don’t know who said it first. (laughs)
So machines will make mistakes, because prediction of the future is very difficult. And so the first example here is that you have a customer service AI at your favorite online shoe store, and it somehow misunderstands your order. It mis-predicts what pair of shoes you wanted. And then it ships the wrong pair of shoes to you. Annoying as this may be, the consequences of such a mistake are not severe, and they are reversible. You return the shoes, you get the refund.
There are however, cases at the other end of this spectrum where mistakes can lead to catastrophic, irreversible harms, even to the loss of human life. And for this we will consider an autonomous car. And this is an example that you brought up, right.
So this is an AI that sits inside my vehicle, and it’s going to direct my vehicle to either stop or cross an intersection. And suppose that now it spots a person riding a bicycle, but it does not recognize them as one of the types of objects that it would expect to see on the street, because maybe the data on which it was trained did not contain many people riding bicycles. It can recognize trees, it can recognize cars, but not cyclists. And then it might direct the car to just run the person over. And this is not a hypothetical example. Something very similar actually happened: a person died after being hit by an autonomous car.
And this harm is irreversible, right? The person is dead. And now we need to think about who takes responsibility, and how to prevent such mistakes from happening.
This is another example, where a person in a wheelchair maybe won’t be recognized as one of the types of objects that an autonomous system expects to see on the road.
And here, the example that I’m giving is from a really, really scary domain. And this is lethal autonomous weapons. These are autonomous systems that are AI-controlled, and whether we like it or not, they are already being deployed in combat situations. And there is a big push on the part of the United Nations to start overseeing and regulating this space, to come up with some international treaties that would allow us to control the proliferation of these systems. And one of the issues with combat situations is that for an autonomous weapon to “work,” it needs to be able to tell apart very well who is a combatant and who is a civilian. And if it sees a person in a wheelchair, it’s very likely that these systems will mistake them for a combatant, because they look motorized, right. So this is actually one of the concerns that arises in this very, very scary domain. We should not have autonomous weapons at all, but they do exist. And so how do we deal with that situation?
This is a point of view that many people raise, right, and it’s a really good point of view to use in our discussion. So the reason that I think that we should hold machines to a higher standard is the sheer scale of damage that they can produce, right? If there is a systematic mistake in the way that a lethal autonomous weapon, or an autonomous car, operates, then it’s going to impact many, many more people, many more pedestrians being hit, before we detect that there is such an issue, than a single stoned driver would. So this is one point, right?
Another point – and this is actually where lethal autonomous weapons are a good example, unfortunately – is that if a human makes a mistake, you as a human, I think, have a sense of how to grieve, whom to blame, right, whom to put in jail for that mistake. But if a machine makes a mistake, how do you grieve?
I mean, this to me was always personally the biggest issue. We can discuss this further. I do agree with you that there certainly are situations where the use of automation is beneficial. I think that our job is not to dismiss autonomous cars because sometimes they would hit a pedestrian. Rather, it’s to figure out a regime where we can negotiate these trade-offs, where we can decide under what conditions it’s safe to use these cars, and who takes responsibility for their mistakes. We can’t just kind of unleash them and then say “Oh, because fewer people are killed we are okay.” Because, how many fewer? Is it 2% fewer? Is it, you know, none at all? These are very, very important questions to ponder.
And this is what they’re worried about most: they want to prevent civilian casualties, right. But the issue that they’re facing in this particular example is that a civilian in a wheelchair looks like a combatant, because they are motorized. That’s just the issue here. And this is essentially about the data that these machines have been trained on, that they learned from, having omissions of people in wheelchairs who are civilians, not combatants. So this, in itself, is actually one of the easier issues to fix. We just give it more data. Or, if we rely on somebody somewhere watching and intervening, then it wouldn’t really be an autonomous weapon anymore, right.
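To make the training-data gap concrete, here is a minimal sketch using a toy nearest-neighbor classifier. The features (width, height), labels, and numbers are all invented for illustration; real perception systems are vastly more complex, but the failure mode is the same: an object class absent from training data gets forced into the nearest known class.

```python
import math

# Toy object recognizer: 1-nearest-neighbor over made-up (width, height)
# features in meters. All labels and numbers are hypothetical.
def classify(example, training_data):
    """Return the label of the nearest training example."""
    return min(training_data, key=lambda item: math.dist(item[0], example))[1]

# Training set with no cyclists in it, mirroring the talk's example.
incomplete = [((2.0, 1.5), "car"), ((0.5, 4.0), "tree")]

# A cyclist (narrow, person-height) is forced into the nearest known class.
cyclist = (0.6, 1.8)
print(classify(cyclist, incomplete))   # → car  (wrong: no cyclists were seen)

# The "easier fix" from the talk: add representative cyclist data.
complete = incomplete + [((0.6, 1.8), "cyclist"), ((0.7, 1.7), "cyclist")]
print(classify(cyclist, complete))     # → cyclist
```

The point of the sketch is only that the system cannot output a label it has never been given, which is why representative data matters so much.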
So the main trend in oversight of AI, autonomous or not, lethal or not, is to realize that we all have to be responsible for taking control of these systems, that it cannot be just one person or one stakeholder group, that we need all the tools at our disposal: quality control, laws and regulation, and, you know, ethical norms. And most importantly, we need people to understand, you and me, and regulators, and the people building these systems, that the responsibility is on everybody.
And so the main thing that we can do is exactly what you are doing today: sitting here and thinking about these issues, right? Teaching ourselves to think about them, to deliberate about the appropriate place for technology. There isn’t going to be an automatic way to control the harms that are due to automatic and autonomous systems. It’s always going to involve a human. And this is why I brought up that MRI example: because there was a really meaningful human-AI cooperation there, right, where the human ultimately takes responsibility. This is where we want to be.
Right, so in hiring, how does that fit in here? The harms are not catastrophic. If you don’t get selected for a job interview based on your resume, no one dies, right? This is not as bad as, you know, getting hit by an autonomous car. But the issue here, and this is back to bias, is that these harms can be systematic, and they can be cumulative. It’s always the same types of people that don’t get selected. And this amplifies some of the really ugly and undesirable effects that we have in our society today, in access to work, to credit, to housing, to education, etc.
So we will talk about this a little bit more, but let me talk about the trolley problem, right, because this is where we started. So what is the trolley problem? This is a thought experiment that illustrates a particular way in which you can start thinking about how to embed values and beliefs into the design of technical systems.
So what is this problem? We have a trolley car that is moving at a relatively high speed, let’s say, and then there is a person here who can control whether the trolley continues on its path, thus running over five people on this side of the tracks, or that person can redirect the trolley car to then go to the other track, and only kill one person, right.
So the question is, how should we be controlling such a trolley car, right? What is the right way to decide whether to let the trolley go and continue on its path, or to redirect it? And it turns out that when you ask people about what they would prefer, they don’t always tell you that they should move the trolley car to run over the one person and spare the five that are on the other side of the tracks. What people will tell you depends very much on the context in which you’re asking them. It depends on their specific personalities, on how you pose this problem – for example, is it somebody they know on this side of the tracks? Is it them on that side of the tracks, versus five people that they don’t know? The decision, ultimately, is going to be based on context, on values, on beliefs, and is not something that is universal. There is no universally accepted answer, even for something as simple as this.
The trolley problem has been criticized as being so outrageous as to be unrealistic, right? This is just a thought experiment. Nobody’s actually going to go and, you know, run the trolley and then ask you whether you would redirect or not. There is, for example, also a difference that has to do with whether you need to actively redirect, or if you just allow the trolley to continue. Even with that choice, people will give you different answers. But even if the trolley problem as phrased here is unrealistic, we are in fact seeing this problem occur and be applicable directly to situations like autonomous vehicles, right?
So one simplification of the trolley problem was that there is just the one trolley and the tracks, and you get to decide whether to let it continue or to redirect it. But in reality, again, thinking about autonomous vehicles, you have lots of cars on the road, lots of pedestrians, there are different road conditions, right. So how do you reason about the interactions between all of these trolley problems, in order to embed your notion of what is a more or less desirable outcome here?
Another issue is that we have to, where autonomous vehicles are deployed, deal with a high degree of uncertainty. In the original formulation, we knew that there was one person on the left side of the tracks and five people on the right. What if you don’t know how many people are on what side of the tracks?
What if you don’t know even if there are people, right? There is a lot of uncertainty that these machines face. How do you reason in the face of this uncertainty?
The trolley problem illustrates a particular doctrine of moral philosophy known as utilitarianism. And here, I am describing this doctrine by giving a quote from Jeremy Bentham, who is one of the fathers of utilitarianism: “It is the greatest happiness of the greatest number that is the measure of right and wrong.” So Jeremy Bentham, if he had known how many people were on which side of the tracks, would have redirected the trolley car to run over fewer people, so that more people stay alive and hopefully thrive, right?
So this sounds great in theory, but utilitarianism opens a can of worms in our setup, and that can of worms has a name. Its name is algorithmic morality. So what is algorithmic morality? It’s the act of attributing moral reasoning to algorithmic systems. And this is problematic for a couple of reasons. One of them is that it will require us to be able to come up with a formula of what’s good and what’s bad, and encode that formula, encode our values and our beliefs, in an algorithm. But there is, unfortunately, no way to measure happiness or unhappiness. There’s simply no formula for values, right. So this becomes an impossible task.
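To see how thin the “formula” really is, here is a hypothetical sketch of what literally encoding Bentham’s rule would look like, first with known counts, then with the uncertainty an autonomous vehicle actually faces. Every number and probability below is invented; the point is that the rule reduces morality to arithmetic over quantities we cannot actually measure.

```python
# A naive "utilitarian" trolley rule: the greatest happiness of the
# greatest number, reduced to comparing death counts. All values are
# hypothetical, for illustration only.

def bentham_rule(deaths_if_straight, deaths_if_diverted):
    """Pick whichever track is expected to kill fewer people."""
    return "divert" if deaths_if_diverted < deaths_if_straight else "straight"

print(bentham_rule(5, 1))  # → divert

# Under uncertainty (the autonomous-vehicle case), even the counts are
# only probability estimates, so the rule must compare expected values:
def expected_deaths(outcomes):
    """outcomes: list of (probability, deaths) pairs."""
    return sum(p * d for p, d in outcomes)

straight = expected_deaths([(0.5, 5), (0.5, 0)])   # 2.5 expected deaths
diverted = expected_deaths([(0.9, 1), (0.1, 0)])   # 0.9 expected deaths
print("divert" if diverted < straight else "straight")  # → divert
```

Notice what the sketch quietly assumes: that we can assign probabilities, that every life counts equally, and that “happiness” is countable. Those assumptions are exactly where algorithmic morality breaks down.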
And the second issue is that if an algorithm makes a mistake, a car ends up running over a person, or more people than it should have, then that algorithm has to be held accountable for the mistake. And that makes no sense, right, because an algorithm does not make a deliberate choice that leads to this mistake and to the particular harm.
It does not have agency, so we cannot hold it accountable in any meaningful sense of the word.
And this is the biggest issue with literally taking utilitarianism and attempting to embed it into the operation of algorithmic systems. Who decides what right and wrong is? And whose right and whose wrong do we prioritize over someone else’s?
If that can happen, and you knew that this behavior was possible for your system, then you should not allow such a system to operate autonomously. That’s the answer, right. So the answer of how to do responsible technology does not start with “I have a piece of technology built, how do I operate that responsibly?” It starts with, “Should I even build this technology? What am I expecting it to do? How will it help me? How do I measure whether in fact it’s helpful?”, right?
And we need to be prepared to say no, this should not be deployed, because it’s utter nonsense, or it’s harmful, or because when it fails it’s not one person who dies but a million, and no one is responsible but an algorithm. And that’s unacceptable.
I believe that autonomous cars will become a reality when all cars on the road are autonomous, when there are no human-driven cars on the road. So in one of the failure modes here, the example where a person on a bicycle was killed, there was also a driver in the car. But they were told, essentially, not to pay attention. At the same time, they were expected to somehow stop the car if something went wrong. Of course they weren’t paying attention, right? So you cannot tell a person both that it’s their responsibility and that it’s not, right.
So if we have really advanced communication systems between the cars, point-to-point and coordinated through the road as well, so that we can really prevent things like this from happening, and all the cars are autonomous (there is not a mix of autonomy and human error), then I think that we can have autonomous cars running.
But the way that we’re testing and deploying them now, I just don’t see how that’s going to work. Not because of technology, but because it’s not clear whose responsibility it is when someone dies, right? And so here we go into the situation where, really, this conversation is about who has the power to decide in society. Who is interested in deploying these systems, and who speaks loudest?
I can give you an example from my own experience right now. We are trying in New York City to regulate the use of AI in hiring. This is why I keep bringing up these examples. This is a domain that I’ve been living and breathing for years at this point, three or four years. New York is trying to regulate the use of these tools in New York, by New York employers, impacting New York City residents and employees, right, so very limited scope.
The only stakeholders, the only groups or individuals that you hear at public hearings – about this law, about how it should be enacted, what the rules should be – are companies that produce and sell algorithmic hiring systems like resume screeners, and companies that want to charge money by running bias audits of these tools, right? A bias audit is already going to be part of this law, and there is a commercial opportunity there.
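One common metric such a bias audit might report is the impact ratio: each group’s selection rate divided by the selection rate of the most-selected group. Here is a small illustrative sketch; the applicant counts are invented, and the 0.8 threshold echoes the well-known four-fifths rule of thumb rather than anything mandated by the source.

```python
# Sketch of one metric a bias audit of a resume screener might compute:
# the impact ratio of each group's selection rate against the best-off
# group's rate. All counts are hypothetical; the 0.8 cutoff follows the
# common four-fifths rule of thumb.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total_applicants)} -> {group: rate}"""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def impact_ratios(outcomes):
    """Each group's selection rate relative to the highest group rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

screener_results = {"group_a": (40, 100), "group_b": (18, 100)}
for group, ratio in impact_ratios(screener_results).items():
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Here group_b’s rate (0.18) is only 45% of group_a’s (0.40), so a naive audit would flag it, which is precisely the kind of systematic, cumulative skew described above.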
You hear sometimes also from employers who tell you, “Oh, it’s already really hard to find people to hire. So why don’t you just let us do our thing, because all of this is going to slow us down?” You don’t hear from job seekers at all. They are not at all represented in this conversation, although this law is supposed to protect them first and foremost. So this is a really good example of how these types of things are negotiated today, and it’s just not right. No matter what side of the political spectrum you are, it’s not right. You need to have a conversation with all of those impacted.
I actually think that when we think about the use of AI today, it allows us to understand ways in which we should be fixing our society generally, because whether or not we use AI in a particular process ultimately is immaterial. Right? It could be a really dumb AI, like a very simple rule. No learning from data, nothing like that. Or it could be like a very simple brake system in your car that fails for some mechanical reason. Who is responsible for people dying? Whether it’s an AI-based brake system or a mechanical brake system, the answer should be the same, right? To some extent, to a large extent, it’s the company that produced the faulty system, if this is a systematic thing, right? But of course, you, operating the vehicle, will certainly feel bad, right? Whether this is truly your responsibility is another question.
But really, technology or not, decision-making processes, even if they’re done with pen and paper, can unfold in a particular way: they don’t work, or they’re biased against a particular population, or they are completely arbitrary, right. Whether or not AI is part of the picture is immaterial, I think. So this is our opportunity to debug society.
So this is my algorithmic morality, and maybe I will actually end on this. Where we are today, the idea of making this embedding of values and beliefs into AI systems automatic takes us into the world that is shown on the left here.
And this is a world where we are controlled by technology. The world that we should live in, instead, is this one, right, where we are the controllers.
And I will just end on a quote here. We didn’t get to talk about bias, unfortunately.
But you can read about this in the comics, right.
So, just a couple of punch-line slides here, right. The first one is that we need to think in a nuanced way about the role of technology in society. There are two dangerous extremes that we have to understand and avoid. The one depicted on the left is techno-optimism. It’s a belief that deep-seated societal problems, like structural discrimination in hiring, will be solved just like that by an AI system or by the use of technology. This is unrealistic; it cannot happen.
The other harmful belief is that any attempt to embed values and beliefs into technology, and to have productive uses of technology in society, is essentially bound to fail, right, and so we should just dismiss the use of technology outright. This is not right either – it’s techno-bashing. So we need to converge to a nuanced understanding that is going to be somewhere around the nose of this person here, right, where we understand the proper place for tech, [while] also thinking about how we as people can control it. We all are responsible for making sure that technology is used in our society in the way that it serves us, right.
And here I’m showing technology leaders, I’m showing platforms, I’m showing scientists and just members of the public. We all have a role to play in creating these distributed accountability structures for the design, operation and oversight of tech. And our goal should be to root technology in people. And this, for somebody like me, an engineer, is actually a very scary thought. Because when I went to college and to grad school, what was I taught? There is an algorithm. This algorithm is faster than this other one. Data is true. The result is either correct or incorrect. But when we think about these social settings for tech, we have to think about values and beliefs and stakeholders and benefits and harms, right? And this is very, very difficult. It’s a completely different methodological toolkit. Engineers are reductionist. We like to decompose systems into small boxes, with as few wires connecting the boxes as possible, right? Society, you cannot decompose it that way. It’s a kind of holistic structure. So how do you negotiate between the holistic and the reductionist? Again, very scary for all of us in tech. But if we succeed in this even to some extent, it makes it easier for us to explain to our children what we do and why it matters, and therefore it’s worthwhile.
I’ll end on this. This is a recent book that a dear friend of mine, Serge Abiteboul, wrote together with his colleague Gilles Dowek. Serge is a computer scientist who has been thinking about ethics in technology for a long time. So this is a quote from their book, The Age of Algorithms:
“Creations of the human spirit, algorithms and AI” – and I added the “AI” – “are what we make them. And they will be what we want them to be: It’s up to us to choose the world we want to live in.” Thank you very much. And these are comics. And this is my amazing colleague and now my PhD student, Falaah Arif Khan, who is the talented artist who made all these images.