Several centuries before Mary Shelley’s Frankenstein came to life in the human imagination, we had the Golem.
Our sages relate the story of a 16th century rabbi who brought to life a powerful being made of clay to protect the Jewish community of Prague. The rabbi soon lost control of the golem and had to destroy it, lest the golem destroy humanity (https://sinaiandsynapses.org/content/are-we-too-fearful-of-artificial-intelligence/).
The myth of the golem lives on as a symbol of Jewish resistance. More recently, it has also come to frame the approach that Jewish ethicists take to the dilemmas of machine learning and artificial intelligence. The result is an emphasis on downside risk, framed by this cautionary tale of a new creation that causes more harm than good.
Scholar Z. H. Rappaport concludes from the parable of the golem:
Ethically, not-harming is viewed as taking precedence over promoting good. Jewish ethical thinking approaches these novel technological possibilities with a cautious optimism that mankind will derive their benefits without coming to harm.
This may well be so – but only in part. Dr. Rappaport and other scholars may be drawing the wrong Jewish parallel when it comes to the challenges and opportunities presented by artificial intelligence. It is not to the golem that we should look, but to the angels.
In the Torah, angels are depicted as simple robots who serve as God’s messengers (https://www.myjewishlearning.com/article/angels/). They give Hagar hope about her son’s future. They bear the news to Sarah that at long last she will become a mother. They call out to Abraham not to sacrifice his son Isaac. They are little more than Divine automatic reply messages.
But gradually, God allows them, or perhaps trains them, to take on new tasks. They wrestle with Jacob and enable him to become worthy of the name Israel. They withhold information to protect those in their care. They start to approximate moral agents.
By the time of the rabbinic period, our tradition speaks of four archangels who act as autonomous agents of God, surrounding the metaphorical Divine throne. Each has a name of their own: Michael, Gabriel, Uriel, and Raphael. Each has unique physical, emotional, and spiritual attributes.
These archangels adapt quickly, almost automatically, to changing circumstances and supplement human knowledge with their own.
These four archangels are programmed with limited choice and therefore cannot err. They can foretell the future because of their unique access to God. They can speak. They can engage and control their own physical forms. They do not replace human life but do challenge human beings to attain greater ethical and spiritual heights.
Most of the time, these angels exceed us. They are more precise, more aware, and holier in their efforts. They are programmed to do the right thing. But according to rabbinic literature (see, for example, BT Chagigah 16a, via Sefaria.org), on Yom Kippur, we can rise to the level of angels – and exceed them in one respect. It is in this aspect that we can likewise remain ahead of machine learning and artificial intelligence.
Today, on the holiest day of the year, we may forego food and drink. We may abstain from sexual intercourse. We may dress in white. We may even look something like flying creatures when we sway and bow during the Amidah and other sacred prayers. We, like the angels, recite the second verse of the Sh’ma at full volume this afternoon: Baruch Shem Kavod Malchuto L’olam Va’ed.
But we are not constrained to the same extent by our underlying code, our divine mandate, or even our DNA. Unlike the angels, who are programmed to be unerring, we can avail ourselves of the positive opportunity for change and growth. More than any other creature on earth right now, we can reprogram ourselves. Choice is the essence of our humanity and abounds with Divine potential.
We should approach machine learning and artificial intelligence this year from this position of confidence. These new beings are unlikely to replace us for the foreseeable future, even if they transform the world in which we live.
So what, then, is at the heart of our angst about artificial intelligence?
Those of us who use social media or stream television rely upon artificial intelligence to give us meaningful content. Computers regulate many of our thermostats at home. Our e-mails come with suggested words and phrases before we even type them. Computers automatically check for interactions between our prescribed medications and work hand in hand with surgeons during complicated medical procedures. Regulators pursue insider trading based on computer models of likely infractions. Yet we do not seem much bothered by these uses of artificial intelligence.
By contrast, the unveiling of ChatGPT this past year raised to new levels our worries about how advances in computer technology are coming together in transformational ways. Suddenly, technology could write as well as many of us, create new combinations of ideas, synthesize vast swaths of information, solve complicated mathematical problems, and even conduct original research. Could machines in time gain human levels of consciousness and become fully living beings of their own?
Perhaps, but I do not believe that we fear a new form of life nearly as much as we fear the possibility of artificially intelligent creatures making moral decisions in our stead.
Self-driving cars could become the norm and greatly reduce the number of traffic fatalities – but we do not have faith that they will choose to protect their passengers, even if it means taking another life.
We do not trust computers to filter out hate speech from social media platforms without creating unreasonable curbs on free expression by imperfect humans.
We do not want a robot performing surgery, even if it is more precise than its human counterpart, because we want the surgeon to act with a sense of kinship to the person on the operating table.
We value human imperfection precisely because it can at times exceed the angelic precision of artificial intelligence. We are not fighting against the capacity of increasingly sentient machines, so much as for our continued agency as moral beings.
While there is much to fear and much to hope for when it comes to artificial intelligence, we need not fear losing the essence of our humanity, which resides in our ability for self-improvement.
Instead, we should create principles of governance to maximize the human potential to create angelic beings – while limiting the risks that these new beings could be programmed with our basest tendencies. Insofar as humanity is defined by the ability to make moral choices, these will be amplified, not supplanted by machine learning.
What changed in machine learning this year was not a specific leap in artificial intelligence, but in our awareness of its tremendous potency. As we continue to bring a new life form into being, may we use it to magnify the goodness of our own humanity, and work ever more to make that goodness greater still.
Ken y’hi ratzon.