Are We Too Fearful of Artificial Intelligence?

Is artificial intelligence really a new concept? Dr. Z. H. Rappaport’s article “Robotics and artificial intelligence: Jewish ethical perspectives” likens artificial intelligence to the “golem” – a mythical figure from rabbinic lore, created by a great 16th-century rabbi to protect the Jewish community in Prague. The rabbi soon lost control of the creature he had created and had to destroy it, lest it hurt more people. Based on the parable of the golem, Dr. Rappaport concludes: “Ethically, not-harming is viewed as taking precedence over promoting good. Jewish ethical thinking approaches these novel technological possibilities with a cautious optimism that mankind will derive their benefits without coming to harm.”

The Golem certainly merits further consideration as a concept that can guide Jewish approaches to machine learning. The story derives from a passage in the Babylonian Talmud (Tractate Sanhedrin 38b) about the stages by which Adam came to life. In it, Rabbi Yohanan bar Hanina opines:

Daytime is twelve hours long, and the day Adam the first man was created was divided as follows: In the first hour of the day, his dust was gathered. In the second, an undefined figure was fashioned. In the third, his limbs were extended. In the fourth, a soul was cast into him. In the fifth, he stood on his legs. In the sixth, he called the creatures by the names he gave them. In the seventh, Eve was paired with him. In the eighth, they arose to the bed two, and descended four, i.e., Cain and Abel were immediately born. In the ninth, he was commanded not to eat of the Tree of Knowledge. In the tenth, he sinned. In the eleventh, he was judged. In the twelfth, he was expelled and left the Garden of Eden…. Adam did not abide, i.e., sleep, in a place of honor for even one night.

With remarkable subtlety, this passage advances the notion that Adam did not come to life in an instant, but in gradual stages. Thus, between the third and fourth hours, Adam was a being without a soul – a notion that persisted in the rabbinic imagination, ultimately inspiring the myth of the golem.

Lest one imagine that the golem is merely a Jewish concept, it prefigured far more widespread notions of a being without empathy or other humane qualities – in Mary Shelley’s Frankenstein, for example. Yet these works of fiction, as well as the underlying text in the Talmud or the story of the Golem of Prague, do not simply point to the idea that powerful, humanoid creatures are inherently problematic. The Golem broke the cycle of violence that Christians had set in motion against the Jewish community. The main problem with Frankenstein’s monster is that it lacked appropriate modeling of empathy and love. Even Superman derives from the figure of the golem as a protector and savior in times of need. If Adam was expelled from the Garden of Eden by the twelfth “hour” of his life – and continued forth as the originator of all humankind – doesn’t this imply that even with great flaws, it can be worth venturing forth to make creatures with human (and even superhuman) intellectual might? If humans should exist despite the harm that they cause, perhaps artificial intelligence should too.

Dr. Natalie Rudolph, a scientist in the Boston area and a team member of Congregation B’nai Shalom’s Scientists in Synagogues project, reflected that many scientific discoveries feel risky or controversial until they prove indispensable: “Our lives are much better than they were a hundred or a thousand years ago,” she says, because of scientific discoveries such as modern plumbing and sanitation systems, vaccines, and robotic surgery techniques. She suggested that much the same is true of artificial intelligence – a term that she believes is intentionally used as a more frightening turn of phrase than the more technical term “machine learning.”

Perhaps the real risk, then, is not artificial intelligence itself, but our relationship to it as human beings. Like other scientific advances, it brings with it the potential for human misuse. One need not read a science fiction book to imagine the destruction artificial intelligence could bring about in war and invasions of privacy – or how it might magnify our existing biases if we train it to see the world as problematically as we ourselves might. But artificial intelligence also promises mental feats that the human mind alone could never accomplish: driving vehicles much more safely, rapidly researching medicines, and keeping people out of harm’s way in dangerous and difficult tasks.

Maybe science is only as problematic as the people who use it – and we know that, on the whole, people’s inherent worth as beings far outweighs the risks they pose.

(This post is part of Sinai and Synapses’ project Scientists in Synagogues, a grass-roots program to offer Jews opportunities to explore the most interesting and pressing questions surrounding Judaism and science. Rabbi Stanton is the rabbi of East End Temple in New York, NY, and a Sinai and Synapses Fellowship alumnus).
Photo by Enrico Martina

