In the last couple of years, advances in artificial intelligence have made another great jump forward. Particularly as ChatGPT and visualizers like Craiyon and Midjourney have personified AI and have allowed anyone in the world to play with it, there has been a fresh wave of excitement, but also fear. AI has immense potential to help humans overcome the most tedious and painful aspects of life, but we are alternately enchanted, creeped out and worried about the newest models’ grasp of that elusively human thing, creativity. As AI generates uncannily human-like artwork, video, poetry and even sermons, even while appropriating human-generated content, we wonder: will humans be replaced by their own creation?
Beyond the story of the Golem, Judaism has a surprising amount to say about how we might regard AI, and how we can set boundaries for it – both for AI itself as well as for the more craven impulses of the humans who may use it.
Judy Spencer: Welcome, everyone, to another evening of science and religion here at Temple Adas Israel – our ongoing series of dialogues on this subject. And we have to, of course, say a big “thank you” to Dr. Steve Rosen and Celia Paul for helping to make this program a possibility. It’s really wonderful when people in the congregation really step up and make something happen. So tonight’s topic is AI. And this has probably been in the news and been on everybody’s mind. I think there was an article in the Times just this morning. But the reality is that, you know, I’m not sure I know exactly what it is. I mean, am I thinking R2D2, or is it HAL 9000 from 2001: A Space Odyssey? I mean, is this something that’s going to make our lives better, or something maybe not so beneficial? So we’re very, very lucky, because we have today to help us understand AI and figure out how we’re going to fit into its life and how it’s going to fit into our life – we have Josh Gruenstein, who’s going to come up and join us in a second. I’m going to tell you a little about him. So Josh is the co-founder and CEO of Tutor Intelligence, an MIT spin-off developing artificial intelligence for factory robots. So I’m going to make him tell us a little bit about that, because it is very fascinating. So prior to starting this business, Josh was a graduate researcher and lecturer at MIT, where he received his Bachelor’s degree in science and a Master’s in Engineering and Artificial Intelligence. So we are in really good shape. We are going to learn some very fascinating things today. Okay, so I think, first thing we want to do is, tell us about: what really is AI? I mean, what, when people say “AI,” what exactly are they talking about?

Josh Gruenstein: Yeah, so it’s meant a lot of different things over the past – I don’t know – 60 years or I guess 70 years now. People have been working on AI basically as long as there have been computers, which is since after World War II, but most recently, starting in about 2012, it has meant, basically, these things called artificial neural networks that are trained over big sources of data. And to explain what that means – is anybody here familiar with, like, basic statistical models? You know, the idea is like, okay, I open a bag of M&M’s, I pour out five M&M’s, I see there are three green M&M’s, and I can sort of reasonably infer that in the rest of the bag – say another 20 M&M’s – roughly every three out of five is going to be a green M&M. This is basically how AI works today, but at a really, really, really big scale, where instead of M&M’s you have all of the internet, and instead of counting them by hand, you have these very fancy algorithms, which are able to detect patterns across lots – gigabytes, terabytes, petabytes – of data. And very, very, very recently, over the past two to three years, this has enabled people to create really amazing sorts of creations using AI. In text – basically, people have figured out how to use this predictive power, statistical power, to predict what the next word or letter would be in a sequence of text. And this allows AI to almost speak language. You can converse with an AI, you can say “hello, how are you doing?” And it knows, based on all of the patterns and experience and all the data it’s looked at, to be able to respond and say, “Oh, I’m doing well, it’s sunny today. I’m having a great day.”
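A minimal sketch of the kind of next-word prediction Josh describes – counting which word tends to follow which in a pile of text, then predicting by sampling from those counts. The tiny corpus and helper names here are illustrative placeholders, not any particular system’s API; real models learn these patterns over vastly more data with neural networks rather than a simple count table.

```python
# Toy next-word prediction: count which word follows which, then sample
# the next word in proportion to those counts.
import random
from collections import Counter, defaultdict

corpus = ("hello how are you doing . i am doing well . "
          "it is sunny today . i am having a great day .")

# Table of "word -> counts of the words that have followed it".
follow_counts = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follow_counts[current][nxt] += 1

def predict_next(word):
    """Pick a next word, weighted by how often it followed `word` in the corpus."""
    counts = follow_counts[word]
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

# Generate a short continuation, one predicted word at a time.
word = "i"
generated = [word]
for _ in range(6):
    word = predict_next(word)
    generated.append(word)
print(" ".join(generated))
```

The same statistical idea – predict the next item from patterns in the data you have already seen – is what the large models do, just with billions of learned parameters instead of a count table.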
And then in other areas, right, in images, the same sort of pattern recognition and analysis lets an AI look at – oh, you know – there are all these images on the internet, where if you type in the word “synagogue,” that’s associated with stained glass windows and columns and pews and stuff like that. So an AI might be able to detect that pattern, and when you ask it “Hey, give me a picture of a synagogue,” it actually knows what that looks like. It’s distilled that pattern, it’s understood that. And that’s, I think, the thing that has gotten people pretty scared and excited, because in these last two years AI has gotten to this point where the behavior is scary. But it’s really over the past 10 years that AI has started to become part of our daily lives, whether you want it or not. I assume everybody here has a cell phone. If you don’t, you probably have a computer. And even if you don’t, you’re interacting with people and organizations who do have these things. Most applications and systems from major companies today use this technology in most features, usually in a way that you’re not so actively aware of, like you would be if you were to talk to your phone with something like Siri, or something even more advanced. Okay, so that’s a brief introduction to what we’re talking about here.

Judy Spencer: So when we’re talking with Siri, we’re interacting with AI on a daily basis. When we’re doing a search, we’re probably interacting with AI. And is that why, let’s say, I’m looking at a cat toy for my daughter’s cat, and all of a sudden, you know, everything’s coming up cats in my phone? That’s just – I’m sorry, I don’t have a cat. But you know, I like cats.

Josh Gruenstein: Exactly. That’s pattern recognition based off of – Google gets to watch you across the whole internet. So they get a lot of data about you and see, oh, you know, you on this website looked up something about a cat, or you Googled something, and they can use this to predict, “Okay, people who like cats maybe want to buy” – I don’t know – “litter boxes,” and they’re going to make more money selling litter boxes if they do that. So yeah, that’s a great example of how it’s sort of everywhere. You’re not really realizing it.

Judy Spencer: So would you say it’s sort of morally neutral in a sense? It depends upon how we’re using it, right?

Josh Gruenstein: You could definitely say that. I think it’s morally neutral – that’s an interesting framing. It is certainly a tool, right – this is not a living thing that is existing, that you can go and interact with – and certainly you can interact with it in different ways and get different results. The way in which I’d say it’s not morally neutral is sort of this approach, at its foundations, of looking at lots of data about the world – all of the text that anybody has ever written, all the images everybody’s ever taken – [it] turns out if you distill all of that, you end up with sort of the same biases that people have, and that sort of manifests in different and interesting ways. So in that sense, it’s very much not neutral. In most of these AI systems, you can sort of see the shadows of all of these finicky rough edges that we people have in our behavior.

Judy Spencer: So Dan, I think I was talking about this with my husband at home. How do the rabbis look at something that there’s no guidance for in the Talmud and the Torah – something that, you know, for me in childhood was science fiction? It was Rosie the robot housekeeper on The Jetsons.
And now we’re living sort of, like – in that future. Not quite, exactly, but you can get the robot vacuum, you know. So what do the rabbis do? Where do you find guidance?

Dan Geffen: Look, not just the robot vacuum. On Route 114, on the way to dropping my daughter off at camp, there’s a robot lawn mower that just goes back and forth happily every day, just doing its thing. So it’s an interesting question, because you’re asking, obviously, a pointed one, but a general one. Because in the end, of course, the rabbis who were writing, for example, the Mishnah and the Gemara – you’re talking about the second century CE. The world of the second century CE was full of all sorts of things that did not exist in the world when, for example, the Exodus takes place. And there certainly are a whole lot of situations that come up 2000 years later, more or less, which we’re living in, which the rabbis could not possibly have conceived of. The idea of the computer, let alone what we’re all doing right now – inconceivable to the rabbis of the Talmud. So how do you then deal with [this], right? How does it all work? How can 2000 years go by and all of these new technological things come along, and how does it not just fall apart? It’s because the rabbis tend to think more categorically than specifically in most cases. They try to say to themselves, in a situation in which something comes into existence that has no precedent, you look and compare and try to find elements of that thing that do have precedent, that can be compared to something that does have an example in previous iterations. So in the case of the tool, right – because this is really actually a great example of it – you know, a hammer is a tool. I use this analogy all the time. A hammer is a tool: it can build a house, and it can take a life. How it is used is really the ethical consideration, right. The entity that is created, at least at this point, unless I’m misunderstanding it, is not a sentient being in a way in which it can then determine its own fate. It’s not going to walk off and become the Terminator tomorrow. It’s still a creation, at least for the time being, in which there is a certain amount of control. What happens beyond that is a whole different story. But for example – and I had to ask Shelly this earlier, because it’s a word I hadn’t come across in Hebrew before – what is the word for “artificial”? The word for “artificial,” of course, doesn’t exist in the Torah. Like a lot of words in modern Hebrew, they go back to the Torah to try to figure out how we can connect to it. So the word itself in Hebrew is “melakhuti,” unless I’m mistaken, and maybe David is the only person in the room who might recognize its other connected piece – the word “melacha” is one of the words for “creative work.” And it’s the main thing the rabbis are concerned about when it comes to, for example, Shabbat. And what’s prohibited to do on Shabbat is, basically, creative acts – creating new kinds of things. So they put certain kinds of limitations on [that], but at the same time, it’s like the word “artificial” in English. Its origin, in Latin, if I’m not mistaken, is the same one that actually gives us the word “art,” and the same one that ultimately gives us the word “artifice.” All of these things are sort of related in a concept, which is: when you have something that is not God-made, how do you respond to it? Is it bad or is it good? And the Torah would probably say, of something of this nature, “bad,” right.
If I were just going to guess, I would assume bad. Because it, in essence, is so far beyond what the average Israelite living in the desert could have conceived of. It would have been closer to comparing it to God, basically, right – what the AI of today can do. Just predicting the weather of the next day would have seemed god-like to them. So in the end, what are the rabbis, as I said, sort of concerned with when it comes to “is it right, is it wrong” – it’s hard to know, exactly, but the one thing I can tell you with absolute certainty is if the rabbis get involved in this conversation at all, it’s going to be to create certain boundaries, certain protective elements that prevent us from, for example, using the hammer to take a life, as opposed to using the hammer to create something positive. So it’s a great, wonderful opening question, which I think we’ll circle back around to as we continue to talk. But it’s given me a lot to think about, and I think, actually, it’s given most Jewish theologians, philosophers and rabbis a tremendous amount to think about, because in the end, the progression from this point on is really the question, right. Wherever AI is today, it’s in its infancy. Where is it a hundred years from now, you know? There are going to be some real ethical considerations, for sure.

Josh Gruenstein: It’s interesting. You mentioned the rabbis would go “bad,” right. And I [think] that’s absolutely right, and I think there actually is precedent for this, right. There’s the story of the Golem, which is, you know – the rabbis, in some parable, crafted a man of mud, and they did some inscription and gave it consciousness – bad outcome, not what was intended. “Don’t do this” is the message. But yeah, there are definitely hairy questions there.

Audience member: We’ll find out. I guess we should probably pose the question to the AI and see what they think. That would be the next thing to do.

Josh Gruenstein: Yeah. I also found it interesting – it sounds like there’s sort of a critical question here, which is: is the AI intrinsic, or is it reactive? Does it have its own motivation, or is it just a tool? And I think it feels easy to sort of draw these as entirely distinct categories, but in practice, I think, especially as this technology improves, those lines are going to blur, and you’re going to end up with people on different sides of that opinion. So the original idea for how you could sort of tell the difference between “Okay, what is an intrinsically intelligent being and what is just a better computer system?” is this test devised by Alan Turing – very famous computer scientist. There’s a movie with Benedict Cumberbatch; you should watch it. After World War II, [Turing] basically invents theoretical computer science, and after doing that, devises this thing called the Turing Test, which is: imagine you are conversing with someone – let’s say via letter – and you have no idea, you know, whether they’re a person or a computer or an artificial intelligence. Would you be able to tell the difference? And for the 50 or 60 years after that, that was the really big question, and sort of the agreed-upon test for, like, “What is an AI that has really intrinsic, human-level intelligence?” And it’s really interesting, because in the last two years, we’ve more than crossed that line.
Like, I guarantee you, each and every one of you could probably hold a 20-minute conversation with an AI, and unless you have a little bit of a technical eye to know what to look for, you could not tell the difference. You could engage the AI on any range of topics – I guarantee you that it probably knows more than you do, because, again, it has seen everything. And yet, you know, I know how it works. I know that it’s just a statistical prediction – you’re just feeding all of this data and your input in, and it’s predicting a response. So I think it really raises the question: if something appears to be intrinsically motivated, where is that line? Is it? Is it not? Does it come down to the technical details? What if it professes to be intrinsically motivated? You know, how do you handle that? It’s unclear.

Judy Spencer: But the AI is still, at the moment – it’s still being fed information, right, and then it synthesizes that and it comes up with something new. Is that sort of the idea? Like a child learns?

Josh Gruenstein: Yeah, exactly. I am reactive; I receive light to my eyeballs that translates to electrical signals to my brain, and then in response to that input and to all of the data that I’ve accumulated over the course of my lifetime, I produce an output. And people believe me to be an intrinsically motivated human being. And you know, I don’t know – why is that not the case for something that works pretty differently under the hood?

Judy Spencer: And that line keeps getting blurrier and blurrier.

Dan Geffen: So – by the way, we’ll have a chance for question and answer, I think, towards the tail end of things. But you know, I was thinking about the Hebrew word that I mentioned before, and I think – just to close the loop on it – we think about the word “artificial,” and when I looked it up, for example, it’s one of those words that you use all the time, but you use it in context, so you don’t really ever think about its actual definition. You think about Splenda, right – you think about an artificial sweetener. Or you think about something along those lines. And what the definition pointed out is that it is indeed referencing a creation of something that wouldn’t occur naturally on its own. But it always has the sort of human-creativity element embedded within it. And I think that that actually is the part that the rabbis are most generally focused on when they have something to say: when somebody is creating something that the natural world would not otherwise have created had a human being not been involved in it. And one might think, in a vacuum, that a religious-minded person would be against that concept, because to do so would be, in effect, again, taking a position against God. If God is the greatest thing ever and provided all this stuff, why do we need to create any of these other additional pieces? But this is where, again, the genius of the rabbis is. That’s not their perspective, right. When they’re concerned about the bad, they’re concerned about abuse. They’re not necessarily concerned about the idea that human beings create something that otherwise would not have existed. One of the most important things in Judaism – we say it every Friday night – is, we do what? I’m holding my hand up. We do kiddush, all right. What is the kiddush blessing? “Borei p’ri hagafen,” right – the fruit of the vine. But what are we actually having? What are we actually consuming? We’re consuming either grape juice or wine. That’s not a naturally occurring thing.
It’s something human beings have to be involved in, right. The blessing we say over bread, hamotzi lechem min ha’aretz, is not talking about the loaf of bread; it’s talking about who brings forth bread from the earth. Bread doesn’t just appear from the earth. It’s not fruit, right. It involves human beings tilling the land, growing the wheat, all the things that are necessary. So all of those sort of creative acts, those things that we as human beings do, are not inherently bad. The question is, again, about usage, and about limitations on that usage, and having perspective on that usage. And I think that, in the end, part of what Josh is talking about is that at this stage, so much of this is still theoretical – not on the technical side, but in its usage. We haven’t even begun to scratch the surface of it. My guess is that in the first 24 hours after OpenAI’s chatbot became available to a lot of people, you saw it being used in ways that even the people who wrote the algorithm wouldn’t have ever conceived of. And I bet you one of the things on that list was rabbis writing to it, asking it to write a sermon – I guarantee you, right. In all the predictive models, that’s not their number one concern. But the concern, then, becomes: if rabbis are doing that, then what does that mean for us as the Jewish people? If rabbis are now relying upon a predictive algorithm to write a sermon that we think, predictively, people will like, and therefore the rabbi will be liked because of it, you can see the ethical slippery slope there. All of that stuff – it’s not even just about saving time. It’s that if something that is computerized can produce a better thing than I can, and nobody else needs to know that I am using that thing, that’s what the rabbis are concerned about. What do you do when nobody else knows you’re doing something ethically wrong? And how do you judge upon that before you do the act, so to speak? So again, all of this is really just scratching the surface of the ethical conundrums that will come, I think, in ways we can’t even really predict.

Josh Gruenstein: Yeah, I think that’s really fascinating. In my head, I think I kind of break it down into sort of two categories: there’s abuse in the usage, and then there’s use of AI today that is improper, or maybe I would say not really thoughtful. There’s a fun story, which is: you have a bunch of sparrows in the nest, and one of the sparrows has a great idea. “Oh, you know, wouldn’t it be amazing if we were to go find an owl egg and raise an owl? And it would be so smart, and we can ask it for advice, and it would go out and hunt for us and care for us, and wouldn’t that be an amazing idea?” And everybody gets super excited about the idea of, you know, having a little owl servant, which would soon become a big owl servant, given that they’re sparrows. So all of the sparrows go out and leave the nest and start looking for an owl egg, and then a few sparrows remain in the nest. And they start to realize, “Well, hold on a second. Have we considered how difficult it might be to train the owl? And have we considered that the owl is a lot bigger than us, and it might do things we don’t expect? And how much time do we actually really have until the other sparrows come back with the owl egg to figure this out?” And again, this sort of puts people on two sides of the aisle.
You have people who were absolutely major figures – you know, the developers of this original AI technology – who are of the belief that we should actually just stop working on it entirely; that until we can be more thoughtful in the way we use the technology today, you know, sort of at the state where it is, we should not be trying to build something where the risks are even higher, and possibly higher to a level which would be much more melodramatic than the risks that exist today, which already are quite real. The sort of example that comes to mind is everyone’s phone. Today’s phones work mostly, actually, computationally. Like, if you notice that your phone camera has gotten, like, crazy good over the last 10 years – you used to have to schlep around a big huge professional camera, and now your phone is better? That’s not because the hardware is better – there’s actually this AI technology that basically takes relatively bad photos from a small sensor on the camera and upscales them. And the problem there ends up being that, again, because these are data-defined systems, all of these approaches are biased. So right now, most photos floating around that are generated by phones are actually sort of racially biased, in the sense that they will take much better pictures of white people than of most other people, because those are the people who built them, and also that’s where the training data came from. Similarly, if you search on your phone for “gorilla” in your album, it will not show results, even if you just went to the zoo and took pictures of a gorilla, because none of the major phone manufacturers could figure out how to stop an AI from responding with photos of African-Americans when you search for “gorilla” – because, again, the internet is not a very friendly place, and there are unfortunately lots of examples of hate speech and hate media online. Facial recognition is now something that is used by basically every major police department. Accuracy rates with facial recognition are in many cases 50% lower for minorities with the accessible technology, in a system where minorities are already targeted by police in the United States. And then also in China, you have whole companies that are spun up to sell facial recognition software that identifies racial minorities that are being oppressed. And then, let’s say you’ve been convicted: the software that helps people today – helps judges figure out recidivism policies and predict the likelihood of recidivism for an inmate – is also tremendously racially biased, and actually mathematically provably racially biased. It’s impossible to predict that and not be racially biased, because of some fun theorems – well, not very fun theorems. So already, throughout your daily life, pretty major and impactful decisions are being made sort of for you, using AI that is broken and biased in a way that doesn’t bother the people who set these systems up so much. And that’s today, with technology that is very new and has had very little time to be adopted, and is about to get radically better. So you know, I don’t know if I necessarily agree that the owl is going to come back and kill everybody, but I definitely agree that it’s a mistake to not be thoughtful about what the impact is going to be.

Judy Spencer: So maybe the sparrows need a plan, a better plan than they have right now? (laughter) Because what you’re talking about with the facial-recognition software – I did not know this before you said it.
But I also did not know about the Tulsa race riot until I saw it on a TV show, on Watchmen. And I was not alone, including my daughter, who obviously is younger than me because I gave birth to her. And that is because everything gets filtered through bias. What gets on TV, what goes in a book, what gets read in a school. So it feels as though the thing that I personally struggle with with AI – because I love technology, and I love gadgets, and I like new things – is that I’m finding it harder to know what is real and what is true, because you get into an echo chamber sometimes, so that the people who are interacting with the AI and sort of teaching it, much as one would teach a child, are teaching it the same bias, the same lack of knowing about the Tulsa race riot, the same lack of understanding of the difference between a gorilla and a human being. If that keeps perpetuating, then all the good that AI could do – and I’m essentially hopeful, and feel it is a tool, and tools are amazing – I mean, what AI can do for people who are disabled, people with spinal injuries – there are applications for this that are amazingly beneficial. So what’s the fix – religiously, ethically, spiritually and technologically – to get the sparrows to have a better plan? You’re on the hot seat now.

Josh Gruenstein: It’s not easy. I think because of the way the technology is built – again, sort of pulling from the data that it sees around itself – there’s this obvious natural tendency to reinforce the structure of the system that it lives in. It’s going to replicate and amplify any patterns that it picks up on, for better or worse – and in the context of this conversation, for worse. And one of the challenges is: let’s say you have an AI, let’s say it’s a text-generating AI, somebody you can talk to. And it’s biased in some sense. Let’s say it’s racially biased. How do you then take that system and make it not racially biased? That’s obviously the goal of all of the very smart people working on this project. And it turns out it’s hard in a lot of ways. So the primary way that’s being used today is basically human feedback, which is [that] the major companies spend billions of dollars a year to hire contractors who will engage with the AI and basically rate it on how well it is following a set of social norms. You know, did it say something offensive? Did it say something partisan? That’s not the role that the company training it wants it to play. And this seems all fine and dandy, but then the question becomes, “Okay, who are the people who are actually training it?” It’s not actually the people in this room. You wouldn’t want it to be the people in this room, because why should we get to say what AI can do? In practice, it’s the lowest bidder. You know, the biggest AI company today pays Kenyan contractors two dollars an hour to label data, and they get to decide what the social and moral and ethical norms are that AI should follow, for better or worse. There are other approaches that people are taking. You could sort of try to build a set of rules to follow – and this is really interesting to me – but it falls into a similar trap, which is, like, “Okay, what are the rules? You know, who gets to decide? Where is the list of moral and ethical and social principles that are universally agreed upon and correct, that an intelligent system should follow?” There are a few, like Judaism, right – religions are a good source of these frameworks. You have laws that are good sources of these frameworks.
But frankly, I think you would run into trouble if you tried to make the all-powerful AI follow Jewish law. It wouldn’t get a great public response. And even if you did, it’s all sort of nonsensical and mathematically impossible, even at a base level. Going back to the recidivism prediction system, because that’s a relatively simple mathematical model: the response of the company that developed that algorithm was “Oh, our algorithm isn’t biased,” even though it predicts that Black people are going to be repeat offenders at a 30 or 40% higher rate, just on the basis of race. It’s actually exactly as accurate for Black people as it is for white people. And unfortunately, that’s because if you actually look at the underlying recidivism data that that model was trained on, it is true that there was a statistical difference – even if you were to control for all the other variables, just by analyzing the values, that is what you see in the data. And if you were to try to correct for that, let’s say, you would actually end up having a greater error in classifying whether white people would be repeat offenders than Black people. And there are mathematical theorems which sort of show that it is actually impossible to be unbiased in a statistical estimator. You know, you are always going to be biased towards something and someone. So the choice ends up being, sort of, what is that bias, and who gets to decide? Which I think people have not yet figured out. They’re, again, using the lowest bidder.

Dan Geffen: I mean, I have many thoughts now, but I think about how I grew up watching The Simpsons, which may be less relevant to some of the people in the room, and there was a show by the same creator called Futurama, and it was sort of a Rip Van Winkle scenario in which somebody from our era ends up far in the future. And at that point, the robots are sentient and they create their own religion that basically looks like the religion of today, but with a bunch of robots worshiping a robot God. So even in this scenario, you’re playing all the way out the question of whether religion has anything to say in helping to constrain, perhaps, the worst potential usages, or to address some of the biases. But I think, actually, everything that you just said about computers is also true to a large degree of halacha. Halacha has all sorts of things that are built upon a certain kind of idea or a certain kind of bias, largely from an era that is not our own. And so the question is how you, over generations, can filter that out – either by limiting the application of something or by explaining it away in a different kind of way. You essentially can’t erase the piece of law, but you can do all sorts of things that make it so that it basically never actually gets used for that purpose. And I think that if there is one thing to be learned from Judaism that does apply in this scenario, it’s not something specific, necessarily, but more of a general way of thinking, which is, you know, as you pointed out – the people who are saying “Just stop working on it altogether for now,” what are they saying, as you said? They’re saying “Stop working on it for now because you don’t yet know what to do with it,” right. You’re not thinking deeply enough about this thing you’ve created before you start using it, and therefore the outcomes are very unpredictable. And more likely than not, in an unpredictable scenario, they tend to err more towards bad than good.
So again, that’s why there’s a general concept in halacha which is called siyag l’torah. It means, basically, to create a fence around the Torah, so that a lot of the laws that we are familiar with in Jewish law – practicing kashrut, Shabbat, all these things – are not actually the first line of law. They’re a protective layer put around that first-line law, because the assumption of the rabbis was that the average person was not going to be educated enough in the law itself, and they might inadvertently violate something. And so they create a barrier that prevents that potential from happening. So of course, the challenge is: in halacha, you have a closed system with a fairly limited number of people, usually, in our history, living in a close geographic area to each other. You can create a situation where a rabbi creates a teshuva, a determination of a practice, and everybody gets in line and follows it. And sometimes there are disagreements. This rabbi says this, this rabbi says that. But when it comes to the most substantive things, there’s usually a certain continuity. In the world of the internet, there is no possibility of that, right. Even the attempt to create some kind of governing bodies always butts up against the thing that created the internet in the first place, which is the idea that sharing information is good for humanity. And so you come into this conflict between not necessarily all bad or all good, but how do you filter out the bad as much as possible? Identifying the bias, in order to try as best as you can to limit its impact, if not erase it altogether, is usually going to be the better route than simply saying “Don’t do it at all.” There are certainly plenty of things in the Torah and the Talmud that are just “Don’t go anywhere near it,” right – “Stay away from it and you’ll be better off.” But the vast majority of it is not that. It’s really mostly the rabbis trying to figure out, in the average day-to-day life of the Jewish person, when you come into contact with things that present an ethical conundrum, here is the sort of road map of ideas that will help you to determine right behavior versus wrong behavior, generative behavior versus destructive behavior. The reality is that the rabbis were actually doing a lot of this work long before there were computers. The Talmud itself is a giant hyper-linked document. To study the Talmud before the era of computers, you needed to actually know multiple different sources and documents, and you had to know how to look from one to the next and back again. And that was the constant practice. So the rabbis really have been doing this for generations. And so when I think on it, and I think on the question of what rabbis would think if there is something life-changing – so Judy gives the example of people in the medical world. Think about how much has changed in medicine because of the capacity of AI, and what will change in the future. It’s impossible to even begin to imagine. But it doesn’t always have to be that. Think about just Jewish learning, for example. If you were to talk to any of the great rabbis of history and ask them about one of the great goals of what it means to be a Jewish teacher, they would tell you, right: it is to teach every child, every Jewish person, to know what Judaism is, to live according to its practices and its rules.
And if you have a computer, for example, that can do that extremely effectively – if you have an AI that can help a teacher, for example, to teach the children in a better fashion – then theoretically that is a good thing, right. That helps the world to be better. It helps little Jewish kids to know more than they would just relying on their rabbi’s [availability]. So I think that rabbis, in general, when they look at these things – it’s not like a scale weighing, “if it’s good enough, then we’ll deal with the bad.” It’s really much more, I think, of what Josh was talking about: what’s actually happening on the ground right now? Or, in the internet world, it’s a lot of people thinking really deeply about “What is this thing that we have created, and what is it meant to do, and what can it do?” And I haven’t yet seen Oppenheimer, but I imagine a lot of you probably have. So in a similar vein, right, when you’re thinking about the people who created a nuclear weapon, whatever reasons they may have had at the time, who then look upon the experience after those bombs are dropped – the perspective they have is obviously very different than the perspective that they had at the beginning. And even again, with all the caveats, all the balances, there’s always going to need to be an ethical conversation that goes alongside the scientific conversation. And so long as those two things are in conversation with each other, I think that’s the thing we need more than anything else. But I think the world of Judaism will be changed as a result of this, in ways that we can’t necessarily even predict. But I like to, again, take the more optimistic route and say we’ll find better things to use it for than not. But at the same time, let’s take it seriously, and let’s think deeply about it.

Josh Gruenstein: Yeah, it’s fascinating hearing you talk about all these strategies the rabbis think about to get at the halacha, because it’s almost like every one you mentioned maps to an algorithm that people are using. Like, okay, [to] dumb it down – you know, “what can you say standing on one foot?” That’s, you know, the three laws of robotics, or prompt engineering, to make things shorter. Having feedback and, you know, “don’t dictate the behavior, respond to it” – that’s an “actor-critic,” it’s an algorithm in reinforcement learning. It’s funny – there are no new ideas. But I wanted to reiterate – I think I’ve been a little bit doom and gloom here, and I do want to say, I am working mostly – I don’t work on AI safety, I don’t work on AI ethics; I work, in a pretty direct way, to accelerate AI, at least in a narrow context, which is in manufacturing and in factories. And I would say I am generally a believer that there is a lot of upside to be had here. Like, obviously there are downsides, and we’ve discussed them, but you know, the world of, like, Star Trek – of no more hunger, where people can just explore, and that can be, you know, the motivation for the things that we do – that is something that really only happens in a world where you can produce and manufacture at basically zero cost, and sort of the intellectual, information-technology work – all of the thought work that goes into running an economy and a society – can also be automated, so there can be a hundred-times leap in productivity and in sort of the human experience. We just have to sort of do a reasonable job and make sure that it gets shaped in a way that we believe in.

Judy Spencer: So I know, in your business – I read on your website about the collaborative robots, co-bots.
So you know how people often panic about “Oh my god, the robot’s going to take my job” – and usually I always feel like the job that the robot gets is the part of the job you don’t want to do, which would then possibly free you up to do a more finely tuned or more creative aspect of that job. Are you finding that in your work, when you go into the factory?

Josh Gruenstein: Certainly. The technology is just at such a point that when I go into a factory, and I’m looking for jobs that our robots can do, it is always “What is the absolute most mind-numbing job that is in this factory, the least skilled work?” And I had a customer describe this to me as “sleepy jobs.” They were actually a worker in the factory. And yeah, I don’t know. I think with all technological progress, there is change, right. And that’s certainly true here. But I think in a few ways, it’s changed what I believe in. One way is: this is something that’s sort of not new. We have sort of increased the level of industrialization in American factories, in American industry, pretty continuously since we’ve had industry. And it has not been catastrophic. So we understand, roughly, actually, how that works, and what the societal impacts are. And society is set up to deal with that – maybe contrasting a little bit with some of these social systems where we actually don’t really understand how they work or how they compound together. I think there’s also the angle of – we are at a point where, I don’t know if people know this, but the unemployment rate in United States plastics manufacturing, I believe, is 0.1% today. So today, actually, the impact of a lack of ability to automate is less, you know, “okay, good for human workers.” It is good for human workers. It’s undeniably good for human workers. But it’s also really bad for American businesses. There are a lot of really good things going for American factories – most American factories. 98% of American factories are small businesses. They’re mostly family-owned. And they’ve had a really rough time for most of the past 50 – or maybe not 50, but 30 years or so. And the tides are starting to shift in their favor because of a bunch of geopolitical factors, and most of them find it sort of impossible to hire people. The average age of a worker in factories is maybe 60 years old or something like that. Basically, everybody is about to retire. And we’re about to have a pretty major crisis where, on the one hand, we are re-shoring – we’re starting to move more production back to the United States, and it’s becoming a political priority, surprisingly, sort of across the board, to go do that – and at the same time, very few factories in the United States have automation, and basically the entire workforce is about to retire. So I think what I’m doing is generally positive. I believe in it. I interact a lot with the line workers, the factory workers, in these factories, and I don’t know, we have good relationships.

Judy Spencer: So if you want to look at the future for a moment – there’s a lot of bad press about AI. Everybody’s waiting for – it really is like HAL 9000, where everybody’s going, like, “What is going to happen to us as people? Are we going to be replaced? Are we going to – you know – are the robots going to get sentient and take over?” Is that a real fear? Is there a doomsday scenario, or is there another way to look at it completely?

Josh Gruenstein: They are real fears. I don’t want to minimize anything.
It is sort of true, inevitably, that with new technology, people will be, in some sense, replaced. You know, we don’t really have – I don’t know – typists so much today, as a dumb example. You know, that’s creative destruction. That’s the primary mechanism by which our economy changes and is able to grow over time. I think the other risks are much more interesting, but also much more speculative. Like the bias risk, right – these AI systems have all these failure modes which most people aren’t aware of. They exist today, they’re being used today. How is that going to play out? I think we’re going to be fine. People are paying attention, and most people in my line of work care a lot about it, but we still don’t exactly know what’s going to happen. And then the absolute doomsday scenario of, you know, Terminator – I guess I’m in a unique position to talk about that. That one’s probably not going to happen. I can attest – I think we have probably some of the most advanced artificial intelligence robotics that exist in the world. I know all of the researchers who are also working on this, and trust me, we’re not there yet. It’s going to be another, I don’t know, 10, 20 years until we could even be close to there. The thing that could actually happen is: could you have an artificial intelligence that is smarter than a person? Forget arms and legs, because it turns out those things are actually a lot harder. Could you have something which could just outmatch you? Forget chess – could it do a better job than you at any intellectual work that you could possibly imagine? And that’s an interesting question, because that carries its own sort of family of scenarios. The classic example is: you have a super powerful AI, you put it in charge of your paper clip factory, and you say, “Your job is to maximize production of the paper clip factory.” And that sounds innocuous. And then you realize that, you know, wait a second, humans are made out of atoms, and atoms could be used to make paper clips, so wouldn’t it be efficient if we were to convert all the humans into paper clips? Or, you know, “Hmm, what is the biggest thing going for paper-clip manufacturing right now? Well, it’s probably me, the super powerful AI. And the biggest risk, probably, to that is that some human comes along and decides to unplug me. So let’s make sure that that doesn’t happen.” So it’s very easy, very quickly, for incentives to become misaligned if you are not precise in the way that you calibrate them. And this is interesting because it applies to the technology as it exists today. You know, it matters a lot if Google Search is aligned to our social and moral and ethical values. And that’s something that people use today. And it’s not science fiction, but sort of that exact same technology is going to scale to, like – what’s going to make sure that the paperclip maximizer doesn’t, you know, take us all out? You know, Blades of Glory? So I think that’s going to be really interesting to see play out. But I think the actual risk of something going wrong is quite low.

Dan Geffen: Now I’m sort of terrified of paper clips. (laughter) It’s fascinating also because, you know, I’m thinking again about how important words are, and what each of these words means. And you think about the concept of intelligence, and it’s on the level of knowing more, right, or being smarter than – I feel like that, to me, if things continue apace, that’s an inevitability, right. Like, that’s a guaranteed reality at some point or another.
Just based upon the nature of things – especially quantum physics and quantum computing – it really is very conceivable. But there is a distinction, at least in Judaism, between intelligence and wisdom, right. You can meet somebody who’s unbelievably intelligent but is not wise, and vice versa. So that’s, to me, the fundamental question that inevitably comes up. As we were talking about, right, in every profession, everyone looks at this thing and says, “As an idea, is this going to be better than me at what I do?” And I think that the reality is that the rabbi has a lot of parts of the job that [AI] probably would be at least as good at, if not better than, in certain cases. And I know, as we were discussing before, that there are many rabbis from my generation who I think come from technology, and love technology, and embrace technology, and look for ways in which technology can help us – the same way as in the factory, right – to focus on the higher-level question or function or thing that we ought to do, as opposed to the lower-level pieces of it. So you know, again, it always comes back then to the sort of ethical question of “Is the value of intelligence greater than the value of wisdom? And is the value of wisdom greater than the value of intelligence?” And I think, at least from my study of Judaism, they never come back and say one or the other. And it’s really always actually about understanding that, on a surface level, what we define as intelligence or what we define as wisdom is oftentimes not that. And that there is something beyond the statistical table, as it were – that thing that we call life, that way in which everyone in this room, I’m sure, has experienced the power of a computer doing for us what we thought was impossible. The Jetsons – I grew up watching reruns of it, but that was my image of the future also. But at the same time, we’ve all been to – I mean, I’ve never been to the Grand Canyon, right, but I’ve been to the equivalent place in the world in which you stand in front of the majesty of the universe and you realize, no matter how brilliant the computer is, there’s always going to be something just beyond. But I’m also someone who plays video games. It’s my thing, that’s my pastime, as it were. And speaking of a technology that has grown from something childish and ridiculous to something where literally one of the newer games that will come out has 10,000 procedural worlds that are created depending on where you go, including the characters that involve themselves in it. So even if the physical world can’t create that, in the digital world people are creating entire universes just from ones and zeros. So in the end, while we can talk about the future and the problems of the future and the questions of the future, the reality is it’s very much right now. Because if we don’t talk about it now, then whatever it is in 50 years or 20 years – I mean, if you’re talking 20 years until Terminator, now I’m really freaked out; I thought at least, you know, maybe my kids’ kids would have that issue – but this is the reason I think we’re all so engaged in the conversation right now: it really is unprecedented in so many ways. But I think if human history has taught us one thing, we are constantly affected by things that in a generation feel unprecedented – something we’ve never seen before, something inconceivable, right. To talk to somebody a hundred years ago and say, “Hey, by the way, you see that thing up in the sky? We’re going to send people up there someday,” they would have told you you were crazy.
They would put you, you know, in a cell somewhere. So a lot of this is also a reality of humans – I think that our challenge is, we are finite, right. Our time ends at some point or another. Passing things on to the next generation is a requirement for us. If we don’t do it, it doesn’t happen. In the computerized world, right – and I think, Josh, this is what you’re saying – so much of the learning takes place based upon that which has happened. That’s the other challenge of the bias, right: if that which has happened has been fairly problematic (our last thousands of years of existence haven’t exactly been peachy the whole time) – to draw on another relevant conversation of the day – it’s what happens to a child who is given a particular kind of textbook without any access to any other thing. They’re going to believe that textbook is the truth. So the question I was going to pose to you, Josh, was, like, if we think on that model, right – you’re the 18-year-old kid who goes off to college now, and learns all these other things that counter what you were taught as a kid – is there an equivalent for that for AI? In other words, can you create the learning in a gated system of a sort, but then let it loose to compare what it’s learned within the structure with what it finds in the outside world, and then have to do what most of us do, which is reconcile what we were taught as kids by our authority figures with that which we’ve experienced with our own eyes? So I wonder if that is a thing that exists. And if not, I’d like to trademark it right now on the camera for everyone to see.

Josh Gruenstein: Yeah, that’s quite interesting. The closest concept I can think of is something in the field of reinforcement learning called curriculum. So my sort of research focus – what I did research on, and what I lectured on – was basically this family of algorithms where you are giving positive and negative feedback. So it’s not “Okay, we’re looking at all of the data in the world”; it’s, you know, “you did a good job or you did a bad job,” and this is sort of an interesting pathway that could get you out of this trap, right – of, like, “Okay, the things that have happened are not necessarily the things that we want to continue to happen.” So the idea of curriculum is: let’s say I want to train a robot to walk. And I want to train it to walk in a simulator at first, because maybe I don’t have that robot. The robot might learn to exploit the physics of the simulator – clip through the ground and sort of wiggle upside down to its target. And it’s not a hypothetical; it’s happened to me hundreds of times. This is a really big problem in my research. And one of the tricks that you can do to get around an AI learning a thing that you don’t really want it to learn is you come up with sort of a schedule of what you want it to learn when. And you say, “Okay, first I want you to learn how to sort of lean forward, and then I want you to learn that you want to move your legs forward, and then I want you to learn that, you know, maybe you’re pulling yourself forward.” And through maybe five or six steps of curriculum, you’ve sort of pushed the algorithm in a direction without actually having to tackle the really hard thing upfront. And I think this is something that actually people are not yet doing for this family of models, because it’s technically challenging in a few ways.
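To make the curriculum idea concrete, here is a minimal sketch of the schedule Josh describes – a learner practices a sequence of progressively harder stages and only advances once it clears the current one. The stage names, the `train_on` routine and the pass thresholds are made-up placeholders standing in for a real simulator and reinforcement-learning update, not any specific library:

```python
# Toy curriculum: practice easy stages first, advance only when good enough.
import random

# Hypothetical walking curriculum, ordered from easiest to hardest.
curriculum = ["lean forward", "lift one leg", "step forward",
              "take two steps", "walk to the target"]

def train_on(task, skill):
    """Placeholder for one round of training on `task`.

    A real setup would run the simulator and update the policy here;
    this sketch just nudges a single "skill" number upward.
    """
    return skill + random.uniform(0.05, 0.15)

skill = 0.0
for stage, task in enumerate(curriculum, start=1):
    threshold = 0.3 * stage              # later stages demand more skill
    while skill < threshold:             # keep practicing until this stage is passed
        skill = train_on(task, skill)
    print(f"Stage {stage} passed: {task} (skill {skill:.2f})")
```

The point of the schedule is the one Josh makes: the learner is steered through intermediate goals instead of being thrown at the hard problem – or the whole messy world – all at once.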
But I imagine it’s something that people will start to do, to be like, “Okay, here are the core concepts that you really should know – this is what you should know, if you have to learn it standing on one foot.” “This is the stuff that, okay, is a little bit more advanced, and this is the stuff that’s slightly less important.” Maybe if you teach an AI that way, it can be better. I don’t know, that’s not something people have tried really.

Dan Geffen: I’ll just say you know the failure rate for parenting is very high (laughter), so I don’t know that we have anything greater.

Josh Gruenstein: Yeah. I have devoted probably millions of years in simulated time training robots, so I’ve failed as a parent as well. (laughter)

Judy Spencer: You haven’t potty trained a child.

Dan Geffen: Like, there’s the idea for your title for your book: “Potty Training Your Computer.” (laughter)

Dan Geffen: So do you have more questions?

Judy Spencer: I think we should open it up to some Q&A.

Audience member (Stephen Rosen): Okay, hi, this is really wonderful, you guys. All of you are terrific. Thank you very much. So I’ve often thought that there are two kinds of genius: the kind that’s just like us, only a lot smarter, on the one hand, and the kind of genius that’s from another planet, like Mozart and Einstein. Is AI more like Mozart and Einstein, or more like us?

Josh Gruenstein: I think that’s a really fascinating question, because it’s absolutely both in a sense, because it is sort of almost an amalgamation of everybody and everything. It is completely us, right. It is not otherworldly. It is sort of the most perfect encapsulation of our world that you could, that an algorithm could, find, right. And in that sense it’s relatable. In another sense, it’s not us, right. It is some other thing. You cannot really reason about what it’s doing. And again, sort of, this line between “is it intrinsically motivated? Does it just look like that?” If I’m holding a conversation with someone, I might sort of infer and assume, “Okay, this is how this person is thinking.” And you know, I believe there’s something going on there. And when I’m interacting with an AI, that’s just not the case. It’s thinking about it in a completely different way. And I think that’s scary, in sort of a really foundational way, because it could do something completely unexpected, because it’s just not following the same rules that we are.

Audience member: So the question is, could someday – a hundred, a thousand years from now – AI become a Mozart or an Einstein?

Josh Gruenstein: I would argue that in some ways AI is already a Mozart or an Einstein in that certainly you can come up with tasks where AI is, far and beyond, better than a person, and probably you could define tasks that are creative where that’s the case as well.

Audience member: Is that an optimistic view? Is that shared?

Josh Gruenstein: I think the trouble is, there has not historically been really great collaboration between the social sciences and this field. It has always been very sort of niche. And actually, there’s a little bit of a rift right now, where you have all of these very core AI people who are starting to ring the alarm and be like “Okay, this could be a problem.” And all the social scientists are being like, “Well, come on we’ve been trying to tell you guys about this for, like, years, and you ignored us, and you fired us from your companies because we were talking about ethical risks and problems.
And now suddenly, when there’s a potential that it could affect you, we’re talking about it?” So I wish that there was more work done that was collaborative, to answer those sorts of questions. I don’t think there’s consensus, and I also think, frankly, there are very few people who are qualified today to really think deeply about that question, which is a problem.

Audience member (Stephen Rosen): I have another question: haven’t we been here before, when we had to address the question of the hydrogen bomb and the atom bomb? And I’ve just seen the movie, which I recommend to everyone. But I think there are a lot of issues. Let me ask you: to what extent are these parallel issues? We faced political, ethical, moral, religious questions about developing the atomic bomb, and then we faced them again when we debated whether or not to do the hydrogen bomb. So what parallels do you see, and what parallels are there not?

Josh Gruenstein: Yeah, I think those are all real – I think in general, building a technology that we don’t fully understand, and sort of being pushed to put it out into the world, is certainly, I think, the case here. I think a lot of things are very different. First of all, you know, AI is not a bomb. I think if you look at it in a probability [sense], there’s this notion of expected value. You know, what are the possible outcomes, multiplied by how likely they are? And I think the expected value for AI is truly enormous, right. That, to me – I think of Star Trek. And that’s a good thing. And when I think of the expected value of hydrogen bombs – probably some people disagree, but I bet the consensus is that most people think hydrogen bombs are bad, and we would be better off if nobody had them. So that’s, I think, a pretty major difference. On the flip side, it’s very hard to make a hydrogen bomb, and it’s pretty – you know, it’s been challenging, but it’s reasonably feasible – to control who can make a hydrogen bomb. The United States has been reasonably successful at making sure that in most cases, only people we want to have the bomb have the bomb. This is certainly not going to be the case with AI, in that certainly any country could develop as powerful an AI as any AI app that exists, and probably really any entity could. You have some companies today that have taken 10 billion dollars of funding, and even then, I’m quite confident that in three years, a gifted teenager with commodity hardware will be able to do the same thing. It’s, again, all the data. It’s all out there, it’s all public. Nobody has any special data. The computers are becoming faster every day, the algorithms are becoming faster every day, and now you can converse with something that seems like a human on a reasonably fancy laptop. So there is sort of no, I think, feasible option of non-proliferation, which there has mostly been with nuclear bombs.

Audience member: A few quick things: first of all, when you comment on the hydrogen bomb – I mean, fusion could solve our energy problems forever. So it’s not all a negative. But the question I have with AI, which leaves me a little cynical, is that ultimately it’s a computer program. It’s coming off of zero-one decisions more rapidly than anything else. And isn’t it really that anything that AI is analyzing is something that a human has created or that has been created before?
Or it’s analyzing vast amounts of data and figuring out a way around it, or a new way to look at it, but it’s not creating in that sense, because someone else created the data that it’s analyzing? Josh Gruenstein: Yeah, to my knowledge, there isn’t really a rigorous definition of what it means to be creative or to create something. I think you could find people who would argue that creation is roughly the amalgamation of things that you’ve observed, or of multiple concepts, with introduced randomness. I don’t know if there are scholarly works that pose ideas there, but under that framework, I would say that AI certainly already is creative, and there isn’t really a distinction. I’d also say that, you know, AI is bits and boops and whatever, ones and zeros – but to get a little bit abstract, so are you. You are following the laws of physics and biology. Some would argue that the laws of physics are just purely computational, right. There’s a list of equations which dictates sort of how the world plays out, and you are just playing your role in computing the results of those equations at the next time step. So I think, to answer these sorts of questions, it becomes about how you really rigorously define the terms, right? What does it mean to be creative? What does it mean to be even a computer? Again, from a mathematical, purely theoretical perspective, I think we are all computers. Dan Geffen: So just to jump on that, I think, you know, Judaism thinks in a similar kind of way about a lot of these questions as well. You know, think of the greatest artists in the world – the painter who paints, the sculptor who sculpts, the place we’re sitting in here, right – at some point or another, someone takes disparate items and brings them together to create something that otherwise would not have existed. That’s the human agency part of the scenario. So the ones and the zeros obviously are created by the laws, as you say, of physics, right, and fundamentally the laws of mathematics – all of those things are at play. But one of the things that Judaism does is acknowledge that fact. Someone said it in a different frame, for example, in Ecclesiastes: “Ein kol chadash tachat hashemesh,” right – “There’s nothing new under the sun.” So obviously he’s speaking in very hyperbolic terms. What is the statement about? It seems to us as if nothing is ever new, right. Nothing has ever really been created, because it all exists as atoms and molecules reformulated into some form or other. And randomness has played a significant part in a lot of that. But consciousness has played a significant role in a lot of that as well. You’re sitting in it right now. Every detail in this space was thought out and conceived of by multitudes of people to construct not just the idea in the mind, but all of the physical objects necessary to bring it into reality. And in Judaism, the traditional response to all of these things – whether it’s seeing that great piece of art, or sitting in a beautiful sanctuary, or eating a delicious piece of fruit – is that you always couch it within the logic of Baruch Hashem, right – “Blessed is God.” So even the most religious person, sitting right now in the southern part of Israel in an IBM factory, working on the most complicated computational models of whatever it might be, they still say Baruch Hashem at the end of creating whatever it is.
And they will say to you, “I created whatever it is.” They’ll say – right, take Jonas Salk: you talk to him, and at some point or another in his lineage is this idea that every idea that comes to a human being has its origin point in God. That’s the difference, in the end, theologically speaking, between us and somebody who is maybe not a religious-minded person. But in the end, Mark, your question is about, sort of, at what point a creation is the creation of a creation, right? It’s a creation of something that facilitates somebody else creating. And that’s what’s happening right now, for example, in the strike in Hollywood. That’s the concern of pretty much every music artist that I know of; this was the issue when hip-hop started in the ’90s and they were using sampling of other people’s music, right. Everybody sort of tries to figure out at what point somebody created something that was necessary for the next creation, and how to monetize that. But for most of technological history and advancement, that was not the way that things tended to work. The proprietary nature of advancement wasn’t really what most scientific-minded people focused on. They were trying first to understand the world they lived in and to manipulate it to the greatest effect. But I don’t know that you take away the creativity of the computer creating – the AI-generated art that it’s creating. I think the question is, on an ethical level, how do you make sure the person who did create the thing that’s teaching that art project to create a new piece of art – how do they get what they deserve? How does the artist who creates a song that’s sampled by somebody else years later – how does that original person get recognized for their contribution? And those, in the end, I think, are again serious ethical considerations. Alongside all of the potential of exploding ourselves or creating the Terminator, there is also the question of: when AI creates something, did it create it, or did it simply amalgamate the creations of other people who were not getting either recognition or money for it? And that’s a question for philosophers, it’s a question for those in finance, it’s a question for the people, the artists themselves. And that’s going to be, I’m sure, a whole other avenue of discussion going forward, probably forever. I’m not sure there’s an easy answer to that either. Audience member: Thank you for a wonderful talk. You were talking a few minutes ago about the nuts and bolts of artificial intelligence. My question is: is it in any way, or to what degree is it, based on the way the brain works? Josh Gruenstein: Good question. So, sort of. The models that are being used are called artificial neural networks, so there’s clearly some degree of inspiration. It’s in bits and spurts, I would say. Nobody took a scan of a human brain and said, “Okay, we’re going to build that.” It’s more like, “Okay, we have some problem; here’s maybe roughly how nature solves it,” mostly. There was this guy in, I think, the 1950s, called Marvin Minsky, who realized that basically the way a neuron works, if you sort of simplify the model, is very similar to a fundamental mathematical operation called matrix multiplication. It’s a really foundational primitive for building sort of all machine learning systems, all AI systems today. And then over the past few years, people have gotten bits and pieces of ideas from other places to help them get out of jams. But it’s, I would say, a pretty tenuous connection.
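To make the matrix-multiplication point a little more concrete, here is a minimal sketch of the idea: one layer of simplified artificial “neurons” computed as a single matrix multiplication. This is an illustrative example only – not code from the talk or from Tutor Intelligence – and the function name, weights, and input numbers are made up purely to show the shapes involved.

```python
import numpy as np

def neuron_layer(x, W, b):
    """One layer of simplified artificial neurons.

    Each neuron takes a weighted sum of its inputs (the matrix
    multiplication W @ x), adds a bias, and passes the result through
    a simple nonlinearity (here ReLU: negative values become zero).
    """
    return np.maximum(0.0, W @ x + b)

# Hypothetical numbers, chosen only for illustration.
x = np.array([0.5, -1.2, 3.0])       # 3 input signals
W = np.array([[0.2, -0.4, 0.1],      # 4 neurons, each with 3 input weights
              [0.7,  0.0, 0.3],
              [-0.5, 0.9, 0.2],
              [0.1,  0.1, -0.6]])
b = np.zeros(4)                       # one bias per neuron

print(neuron_layer(x, W, b))          # 4 output signals, one per neuron
```

Stacking many such layers, and adjusting the numbers in W so the outputs fit large amounts of data, is roughly what “training” means here – which is part of why the underlying mathematics stays so basic even as the systems get very large.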
And I think throughout AI, people have always tried to build it by looking at how humans work. And there have been a lot of very, very, very different-looking attempts to do that that have not worked so well. Audience member: We all grew up without AI, but there’s a generation that is now growing up with AI – eight-year-olds, 10-year-olds, 12-year-olds who have it write their term papers for them and things like that. How do we stop a generation from – oh, I’ll use the word – being lazy, having AI do all the work for them, not necessarily thinking on their own? Josh Gruenstein: Great question. I think it’s two things. So one thing is, you know, they’re going to use it, and you’re not really going to know if they used it. It’s, right now, an unsolved technical problem. If I had to guess, it’s a pretty hard technical problem to figure out. And it’s a cat-and-mouse game, and the cat’s probably always going to stay ahead, because it has a couple billion more dollars of funding. I’m not an educator, but definitely curriculum design matters – maybe this is not my original idea, but you could have a student grade a paper generated by an AI. Instead of writing a paper, you can have in-class assessments. I don’t know, but you’re not going to be able to ban it, right. That’s not going to work. You have to change the structure. On the other side, I think it’s going to become more and more important to focus on how we are educating students about AI, and about the backing technologies – computer science, mathematics, physics, neuroscience, and all these things that go into it. And I know there are really excellent efforts to make computer science education more universal, and I really hope that’s going to pan out in the next five or ten years. Dan Geffen: So you know, again, as a parent, and as a person who’s straddling the generations, right – because I’m what’s called a geriatric Millennial, I believe that was the terminology; I was born in 1981 – I grew up in and around both schools and computers. Having grown up in and around schools – my dad started school in the city – I remember that a lot of the same conversation came about when the Apple IIc came around, right, and they stopped teaching, for example, cursive script, and there was a whole generation of people who, you know, saw that as the end of society, because we weren’t teaching that anymore. And as I was saying before, the same thing happened with the abacus and the calculator. And I remember, when I was in school, what you could use that calculator for and what you couldn’t. And they were still attempting to make you memorize the long formula, because that, at the time, was considered the highest curricular goal, right – the mathematician needs to understand the formula and memorize it so they can use it at any time, in any way. And I’m not arguing whether that’s right or wrong. But what tends to happen, as Josh said, is that education tends to evolve alongside changes in technology, for better or for worse. And so one of the questions I would ask back about, for example, term papers, right: if the computer can write the term paper and you can’t tell whether the student wrote it or not, then that assignment no longer has the value it had before.
So now your question as an educator isn’t “What can I do to make the term paper work again?” It’s “What can I do to make sure that I have a proper assessment for my student, to make sure that they understand the concepts that the term paper was meant to be written about?” So, for example, instead of spending more time memorizing the exact date of the Magna Carta and the names of every single king who lived in England in such-and-such time periods – which my guess is all of you endured at some point or another in your life – you say, “Okay, so now we’re not spending that hour on those particular details, but we sure can draw conclusions about what was going on – the wars, for example – that gave the Magna Carta its reason to exist.” In other words, analysis, as opposed to simply working in data. And I find that when it comes to education, it’s always a question of this: something comes along and disrupts it, and then the educators ultimately find some way to respond to it. But education is usually more a question of, “Are you successfully teaching what needs to be taught?” And when it comes to assessments, those have to change with time, because if we still assessed kids the way we did 100 years ago, we’d have a whole lot of other problems to be dealing with, for sure. But all I can tell you is that my two-year-olds – two-and-a-half-year-olds – can already reasonably use an iPad, so there you go. Audience member: I propose this last point: one of my colleagues at Brooklyn College has had an online discussion on this very issue, and I think his wife fed in some of the different kinds of questions he could pose, and finally he admitted that the WhatsApp, whatever it’s called, is too smart, and he can’t really control it. So he’s going to go back to written exams in class. And of course, the conversation, and the critical conversations […] being developed is analytical skill. So that’s sort of pessimistic, if you will. I am a surgeon myself, and I’ve noticed that when I’m writing a letter, for example, the program keeps telling me what the next word is going to be, the next set of words. In a certain sense, all the clichés are there for me. In that sense, the originality of my correspondence is being preempted by what’s generally done. And so I become just the average communicator, whatever that proves, anyway. That’s kind of the insidious thing that I’ve seen happening much more recently. Audience member: So I’m also a professor, and very concerned about the impact of AI, but I’m curious to hear from you: all of these algorithms so far, what degree of factual accuracy are they spitting out, based on what’s available – and how inaccurate even is Google, and, say, ChatGPT? And do we have any way of checking that? And how is this going to have huge political implications? It already does. Josh Gruenstein: Yes. You’re not going to like this answer, but we don’t really know. And the problem is that normally in science, there are agreed-upon, established benchmarks that you can use to measure the performance of a system, because it’s been in the public circle of discussion or whatever. Because this field is moving so fast, and because technically it ends up being tricky to evaluate, there are no good standardized benchmarks for what the accuracy of these systems is.
Companies maintain their own statistics internally, but they’re sort of known to be flawed, and a lot of the evaluation ends up being qualitative. And then again, the question ends up being, “Okay, who is doing the qualitative assessments to decide how good a job the AI is doing?” And you know, you end up with a problem there. But to answer that qualitatively: they have known failure modes. So an AI is going to do really well on the beaten path, because again, it is just doing pattern recognition. It’s learning sort of the average existence of the universe. And if you ask it, again, what word comes after “happy,” it could predict “birthday.” You know, that’s pretty innocuous. But if you’re trying to engage it in something which it hasn’t really thought about before – and when I say “it hasn’t thought about it,” what I really mean is it hasn’t seen a public discussion of it before – that’s when it can start to go off the rails. And if you take this to an extreme – okay, let’s say you try talking to the AI with some random code that you just made up, you know, switch every third letter with every fourth letter of every word, right. Something you think it’s really never seen before. You have no idea how it’s going to react, because it’s never seen that before, and the behavior and response are completely undefined. It’s not engaging in reasoning, like, “Okay, let’s figure this out.” It’s just, “If I have seen it, I will respond as I’ve seen others respond. If I haven’t seen it, well, I don’t have a behavior.” You know, we don’t really know what’s going to happen. So it ends up being extremely contextual, and you end up with very specific problems – again, bias. Racism ends up being a pretty major problem. Political affiliation, political correctness, ends up being a pretty major problem. In these areas where we as humans don’t have really great uniform norms, the AI tends either to have no idea what to do, or to fall into patterns which, you know, are sort of the lowest common denominator. But I recommend, if you really want to know how accurate AI is in the areas you’re interested in, the best way to do that is to talk to it, ask it about things, and get a sense for it. Audience member: When you were talking about AI responding to something that it hasn’t ever seen before and giving us unorthodox or whatever responses – is that hallucination? Are they hallucinating? Is it hallucinating? Josh Gruenstein: That’s, I think, one of those words, again – this field, or this new technology, is really new, and all these terms are not yet rigorously defined. I think people use hallucination to refer to giving wrong answers which are plausible. I think they also use it to refer to giving answers which are not plausible by any human definition. Really, one thing that’s becoming obvious to me, looking at these AI systems, is that there are a lot of layers to the media that humans generate. Whether that’s text or spoken word or images, it is possible for an AI system to get the rough structure of the world correct and not understand the facts. And it is also possible for an AI to understand the facts and not really understand the rough structure of the world. And we don’t yet have a good way of distinguishing when an AI is going wrong in each of these different ways. Judy Spencer: Okay, so what I’m hearing is that if we abdicate our personal responsibility in how we use this tool, we’re kind of screwed.
Josh Gruenstein: Yeah. Judy Spencer: But that’s also true with every new invention, every new thought process, everything that we do. If we ingest political messages without thinking for ourselves and taking personal responsibility for our actions and even for our thoughts, we are lost. Josh Gruenstein: Exactly. We all drive around in 2,000-pound murder machines, but we take personal responsibility, and it helps us as a society that we’re able to do that. So I very much agree with that sentiment. Audience member (Stephen Rosen): So a very smart physicist wrote an article back in the ’50s, and the title was “On the Unreasonable Effectiveness of Mathematics in Describing Physical Reality.” And his point was: why should our brain be able to use some theoretical construct in our mind, otherwise known as mathematics, to describe an external world that we’re not connected to? And he concluded that it was a gift. And my question to you is: is artificial intelligence a gift in that sense? Josh Gruenstein: Yeah, that’s very interesting, because the analogy I immediately think of is – so I work in robotics, and I specifically work on “How do I make robots smarter and better at interacting with the world?” And when you work on that, you quickly realize that the world as it exists today is built by and defined for humans. You’re all sitting in pews that are roughly designed for the human geometry. I’m holding a microphone, which can be grasped really easily with the design of my hands. And that’s sort of a natural construction: you have one system, which is humanity and society, and that builds this other system, which people live in. And my feeling, maybe, about the more abstract concepts you’re talking about, is that I think of them very similarly, where you have mathematics – a structure and a set of rules – and that’s going to generate more systems that follow roughly the same structure. And I think, if you play that out over time, you have expanding fractal patterns based off of a very simple fundamental reality. Again, these AI systems are tremendously complex in what they are able to do, but they are actually astonishingly simple in how they work. Really, it is the most mind-numbingly basic mathematics. By the first year of, you know, a math undergraduate’s curriculum, you have more than enough knowledge to understand how these systems work. It’s really not reaching for the advanced stuff. Which kind of feels to me like a little bit of a signal – I don’t know if it’s meant to be, I don’t know if it is, again, a reflection of how people work, how mathematics works – but we’re all, at the end of the day, sort of following the same basic set of rules, the same basic physics. And you know, whether that’s a gift or not, I don’t know. It should not work as well as it does, given how simply it works. It’s crazy. Dan Geffen: Okay, well, first of all, thank you, everyone, for coming tonight. Thank you, Judy, for moderating. Judy Spencer: Absolutely. Dan Geffen: I think there’ll be many more parts to this, Josh. Thank you so much for being here. I’ll just say as a last word – so I met Josh when I came here about nine years ago, and I don’t know exactly how old you were when I first met you, Josh, but I knew when – how old do we think [you were]? […] I’ll just say this: I knew, at that age, the kind of person I was talking to, and how excited I was to see where he was going to go.
So Josh, to be able to share the bimah with you and to talk in depth about things that you truly care about – this is actually the thing that gives me hope and comfort. If people like you, Josh, are working on this, then I feel much better. So I thank you very much for being here with us.
(This post is part of Sinai and Synapses’ project Scientists in Synagogues, a grass-roots program to offer Jews opportunities to explore the most interesting and pressing questions surrounding Judaism and science. Josh Gruenstein is the co-founder and CEO of Tutor Intelligence, which develops artificial intelligence for factory robots. Prior to that, he was a graduate researcher and lecturer at MIT, where he received his Bachelor’s degree in science and a Master’s in Engineering and Artificial Intelligence. Dan Geffen is Rabbi at Temple Adas Israel in Sag Harbor, NY. “AI: Ally or Adversary? Understanding the History, Present, and Future of Artificial Intelligence” was an event held there on the evening of August 3rd.)