
Interview with Advisor Rabbi Daniel Weiner

In his sermon on Rosh Hashana, the Jewish new year, Rabbi Daniel Weiner chose to speak to his congregation at Temple De Hirsch Sinai about the Jewish perspective on Artificial Intelligence (listen here). He says that Judaism and other faiths do not provide an answer to the many philosophical and moral questions posed by the development and proliferation of AI, but rather they give us insight as to how to live and thrive with AI already out in the world. Rabbi Weiner was kind enough to delve deeper into this discussion and talk about some of these insights. This interview has been edited for brevity and clarity.

Interview

Why did you choose to talk about AI and machines on Rosh Hashana?

It is the most compelling issue of our time, one that will have a profound impact on our culture, and it's my role as a rabbi in a liberal tradition to share the ways this timeless wisdom can speak to it. Rosh Hashana brings a wide audience; everyone is there, and AI is on everyone's minds at the moment.

As someone who has never heard your sermons before, I was fooled by your introduction written by ChatGPT. Do you think you will ever be out of a job? Can you concretely state what ChatGPT was missing when it wrote your introduction?

Technology will always get better and better. Even with the depth and breadth of algorithmic learning and unimaginable amounts of data, I think there will always be something that is a tell for the more astute audience, but maybe that is naive. I could probably tell the difference between a good sermon and ChatGPT. Something is different in the oratory and creativity; there is a soulfulness there that is still not reproducible. Rabbis won't be out of jobs, but there will be some lazy ones. The question is: will people want that, and who will notice?

Does the language of AI matter? For example, terms like neural networks, magic, black box, soul-like, etc.

As AI develops, we will need to develop a new vocabulary to go along with it. This will call into question our definitions of sentience, consciousness, and life itself. We tend to bifurcate science and faith, but this will force us to find language that spans both realms and becomes a synthesis that is more than the sum of its parts. We need a whole new language and philosophy of humanity. Science fiction has been "prophesying" this for a while, and AI and faith will help both conceptually and in application.

Why is uncertainty about AI frightening when we also have a lot of uncertainty about God?

There is a key difference: fallible humans are producing AI. We humans don't have a great track record of restraining ourselves and thinking before doing. Look at atomic bombs as an example. Although there can be uncertainty about God, God is good and wants growth, life, and spiritual progress. Human intentions are not always good and don't always match God's intentions.

AI is the same script as the pursuit of the atomic bomb or the space race, yet it is different because it pushes us to ask questions about ourselves and the essence of what it is to be human and have a soul. Why is this bad? Couldn’t this be a new avenue for people to find religion?

It’s good to be asked those questions. The progress of AI is happening to us; there is no choice. Sometimes things happen for the wrong reasons. There is something important about being pushed to reflect on basic humanity, but it’s uncomfortable to do so under the gun. It’s kind of an adversarial push, and AI, deepfakes, and the tenuousness of truth and fact amplify that exponentially.

Judaism’s first teaching is “the need to put matters of the mind and spirit over our obsession with the material world” (sermon). How does this advice differ in meaning for those pursuing the development of AI and those using it?

I don't know if I know. Those producing it see themselves as pioneers and explorers of intelligence and reality. They are creating the next generation of breakthroughs. Users of AI have a much more consumerist mindset. They are not reflecting on it in the same way. This is similar to what we have seen with social media: everyone used it without forethought, leading to higher levels of depression, anxiety, and loneliness. With hindsight, we can see the damage more clearly, and now we are backpedaling. Judaism rejects the material world as the end of human progress and a form of idolatry.

What are the boundaries of our pursuit of knowledge?

Knowledge is value neutral. Judaism says that knowledge should produce good in the individual and bring good into the world. This is the paradox of the Garden of Eden: God didn't want human beings never to have knowledge, just not to have it immediately. Knowledge needs to be experienced, absorbed, digested, and used through the lens of ethics. This is also shown in the revelation of the Torah at Mount Sinai; knowledge and wisdom went through human agency and Rabbinic digestion. Revelation is ongoing, unfolding, incremental, and generational.

If AI is pushing us towards questioning ourselves, isn’t that similar to the goal of going into the wilderness, “away from impersonal cities and arrogant rulers obsessed with the technology of the day” (sermon)?

How do we access authentic reality? This does not mean rejecting the material world, but recognizing that the natural world is different from it. We are afraid that AI will become so good or real that we won’t be able to discern the difference. What will be lost by this? Will there be consequences from not having these distinctions? Will we lose an authentic part of ourselves when our human experience is mediated both biologically and technologically? We need to draw a line where technology goes beyond functionality, as we have with technology like cochlear implants.

AI pushes us to question the “why” and not just the “what”. We are pursuing the “why it works” everyday as researchers. What are your thoughts on this?

The people who are creating these systems have a responsibility to limit the abuses and educate the public. They have a deep ethical responsibility to humanity to think about its impact. Consumers are going to just use it and not think as much. This is a situation in which Judaism gives us insight into how to live in a world with AI. Judaism encourages us to discuss these things and take the Sabbath as a time to be contemplative, away from distractions.

How do we distinguish between the real and the fake, such as with deepfake images?

Right now, the only constraints are voluntary ones. We need self-regulation, which relies on ethical behavior. We could have watermarks for AI content to let you know that it's AI generated, for example. However, regulations are easy to abuse, and those who abuse them need to be punished and marginalized. We have already enacted similar systems for regulating cloning and nuclear weapons. Without self-regulation we are lost, and the time to do this is now. We need an international ethics group to continue these discussions and enforce regulations.

Acknowledgements

A big thanks to Rabbi Daniel A. Weiner for taking time for this interview. Thanks to Dr. Mayla R. Boguslav for hosting the interview and writing the questions, and thanks to Joshua Mendel for editing and proofreading this article.

Dr. Mayla R. Boguslav

Dr. Boguslav is a dedicated postdoctoral mathematician who specializes in biomedical research at Colorado State University in Fort Collins. Currently a postdoctoral fellow in Dr. Michael Kirby's lab, she is keenly interested in both collaborating with and studying DSRI (Data Science Research Initiative). She is also learning about and working with veterinary health records.

A notable facet of Mayla’s background is her deep understanding of Jewish theology and traditions, rooted in her undergraduate degree from the Jewish Theological Seminary in New York. This, combined with her hard science degree from Columbia, equips her with a well-rounded approach to scientific endeavors, especially when deploying advanced AI tools in research.

Her graduate advisor was Larry Hunter, our esteemed advisor, at the University of Colorado Anschutz Medical Campus in Aurora.
