Can AI Be a Link in a Chain of Tradition?
Artificial intelligence offers opportunities to solve previously intractable problems in many areas by allowing computers to bring together insights from widely divergent perspectives and find connections that were invisible to humans. At the same time, AI introduces numerous risks to society, presenting a range of issues when it is used to identify and evaluate people. For example, facial recognition systems frequently work substantially better for men than for women, and better for light-skinned than for dark-skinned people, a crucial concern when such systems are used for law enforcement and a problem even in innocuous applications. Automated systems that determine who should be released on bail or parole can eliminate biases introduced by a judge’s discretion, but they also bring in other, invisible biases based on past patterns. Such systems aren’t biased because their creators deliberately set out to make them that way; they’re biased because of limitations in technology and human awareness that aren’t adequately accounted for.

Here, Jeremy Epstein and Dr. Rebecca Epstein-Levy explore these ethical problems and potential solutions, and consider how Jewish thought – in particular, the powerful emphasis Jewish texts place on exegesis, accuracy and proper attribution – can aid in evaluating the codes of ethics and bills of rights that have been proposed for AI systems.

(This post is part of Sinai and Synapses’ project Scientists in Synagogues, a grass-roots program to offer Jews opportunities to explore the most interesting and pressing questions surrounding Judaism and science. “Artificial Intelligence and Ethical Biases” was a panel held on March 17, 2024 as part of Kol Ami – The Northern Virginia Reconstructionist Community’s “Science Meets Judaism” series of events.)
