
Introducing MidrashBot – an Experimental Faith Bot from AI&F’s Generative AI Project

AI and Faith’s new Generative AI Project is exploring various ways that generative AI will affect faith and discipleship practices and create new forms of engagement with faith texts.

As part of this Project, AI and Faith Research Fellow Shanen Boettcher and Jeremy Kirshbaum, co-founder of the Handshake innovation consultancy, recently created an experimental faith chatbot with a knowledge base rooted in the Babylonian Talmud. MidrashBot is powered by large language models from OpenAI and AI21. All of the code has been open-sourced, along with the dataset, here. The edition of the Babylonian Talmud used is the English translation by Michael L. Rodkinson, published by The Talmud Society in Boston in 1918.

You can try out a public version of the chatbot yourself by asking it questions here.

We asked Shanen and Jeremy some questions about the why/how/what of their experiment with MidrashBot and what they hope to learn.

Q: Jeremy, what is MidrashBot?

JK: MidrashBot is an experiment to explore the meaning of “truth” and “bias” in AI chatbots, using the Babylonian Talmud as an example. The Talmud is considered by some to be the canonical interpretation of the Torah, which is in turn considered to be the source of absolute truth.

The Talmud provides us with several useful characteristics to explore questions of bias and truth in AI chatbots:

  • Who are appropriate validators of truth (e.g., religious officials, the “crowd”, ourselves)?
  • Do we locate bias in a generative model in how faithfully it describes its underlying training distribution, or in whether its outputs accord with a desired moral effect?
  • The Talmud is an explicit “source of truth” in that its underlying values are directly declared. How does this help us examine the same questions for systems in which the underlying value systems are implicit, undeclared, or deliberately obscured?

Q: Shanen, how might MidrashBot advance religious knowledge or engagement with Jewish faith?

SB: MidrashBot contributes to people’s religious knowledge by engaging them on questions about Judaism. It could be considered a form of inquiry and learning as well as a form of ritual or prayer. While the Babylonian Talmud is not the revealed word of G-d, it is considered influential and authoritative by many Jewish communities. We expect that interactions with MidrashBot are likely to contribute to deeper understanding and a stronger connection with individual spirituality.

MidrashBot explores questions about the process and experience of computationally generated religious knowledge including:

  • Can a generative model contribute to religious knowledge?
  • Can querying a generative model be considered a valid form of religious learning?
  • Can generative models be a part of prayer or ritual?
  • How do generative models connect to spiritual experiences, or connections with individual spirituality?

Q: Jeremy, how does MidrashBot work?

JK: MidrashBot uses a fairly standard retrieval-enhanced generation architecture to produce its answers. An edition of the Babylonian Talmud is programmatically divided into chunks, which are converted into searchable “embeddings.” When a question is posed to the system, it appends to the prompt those chunks whose embeddings fall within a maximum vector distance of the question’s embedding.
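As a rough illustration (not the project’s actual code), the retrieval step might look like the sketch below. Here embed_text is a toy stand-in for a real embedding model, and chunk_text, retrieve, and the 0.6 distance threshold are hypothetical names and values chosen for the example.

```python
# Minimal sketch of the retrieval step described above (illustrative only).
# embed_text() is a toy bag-of-words stand-in for a real embedding model
# (e.g., an OpenAI or AI21 embedding endpoint) so the sketch runs on its own.
import math
from collections import Counter

def embed_text(text: str) -> Counter:
    """Toy embedding: a bag-of-words vector (placeholder for a real model)."""
    return Counter(text.lower().split())

def cosine_distance(a: Counter, b: Counter) -> float:
    """1 minus the cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 1.0
    return 1.0 - dot / (norm_a * norm_b)

def chunk_text(text: str, chunk_size: int = 50) -> list[str]:
    """Split the source text into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def retrieve(question: str, chunks: list[str], max_distance: float = 0.6) -> list[str]:
    """Return the chunks whose embedding falls within max_distance of the question."""
    q_vec = embed_text(question)
    scored = [(cosine_distance(q_vec, embed_text(c)), c) for c in chunks]
    return [c for d, c in sorted(scored) if d <= max_distance]

# The retrieved chunks are then appended to the prompt sent to the language model.
```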

Depending on how MidrashBot is configured, it can “speculate” to varying degrees beyond the original Talmud. In one construction it acts as a limited system that answers only questions already addressed in the text; in another, it will “speculate” on questions outside the text. For instance, in response to the question “can I ride in an autonomous vehicle on the Sabbath?”, it will simply reply “I don’t know” in the first construction, while in the second it will attempt an answer.

Because the generative models underlying MidrashBot and other modern chatbots are probabilistic at their core, the output text is never fully predictable or guaranteed. Several decisions about system architecture and configuration can affect bias in the answers. For example, the size of the “chunks” into which the document is divided affects how well they match a specific question. Allowing the bot to speculate lets it answer any question asked, regardless of how grounded the answer is in the original text; the alternative is to have the bot answer only questions for which there is very high confidence that the answer is directly in the text. And because the Talmud is largely ancient text, MidrashBot must bridge that ancient language to modern-day usage stochastically.
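As a rough sketch of how these configuration choices interact, the hypothetical BotConfig and build_prompt below gate the answer on whether any retrieved chunk cleared the distance threshold and on a speculation flag. The names, defaults, and prompt wording are assumptions for illustration, not MidrashBot’s actual implementation.

```python
# Sketch of the configuration knobs discussed above: chunk size, a distance
# threshold acting as a confidence gate, and a speculation toggle.
# All names, defaults, and prompt wording here are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BotConfig:
    chunk_size: int = 50             # words per chunk; affects how well chunks "match" a question
    max_distance: float = 0.6        # only chunks at or below this distance count as grounded
    allow_speculation: bool = False  # if False, refuse questions the text does not cover

def build_prompt(question: str, retrieved_chunks: list[str], config: BotConfig) -> Optional[str]:
    """Return a prompt for the language model, or None to reply "I don't know"."""
    if not retrieved_chunks and not config.allow_speculation:
        # Strict construction: nothing in the Talmud matched closely enough.
        return None
    context = "\n\n".join(retrieved_chunks)
    instruction = (
        "Answer only from the passages below."
        if not config.allow_speculation
        else "Answer from the passages below, speculating beyond them where needed."
    )
    return f"{instruction}\n\nPassages:\n{context}\n\nQuestion: {question}\nAnswer:"
```

In the strict configuration, returning None corresponds to the “I don’t know” reply described above; loosening the distance threshold or enabling speculation trades groundedness for coverage.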

Q: Jeremy, what are some uses for MidrashBot?

JK: MidrashBot is an experiment to provoke questions around the programmatic interpretation or extension of religious knowledge. It is an active artifact whose existence provides a concrete basis for questions and discussion about who arbitrates truth and validates knowledge in a religious setting, and about where previous principles for doing so continue to hold or break down.

In addition, MidrashBot may be used as a resource by a synagogue for its members. While it is currently focused on the Babylonian Talmud, it could be customized to include other religious texts and/or Q&A specific to an individual community or religious leader.

Q: Shanen, on what basis might you judge if MidrashBot is a “successful” or “ethical” generative AI tool for advancing Jewish faith engagement?

SB: We believe that a combination of deontological, teleological and virtue models of ethics is required for generative AI bots.

  • Deontological – a top-down list of principles (often from an organization)
  • Teleological – based on consequences (was the end result good or bad?)
  • Virtue – based in the feeling and thinking of a person (often an individual)

The deontological set of overarching principles provides a backdrop or context for the technology. In the case of MidrashBot, this comes from the grounding in the Talmud, which is itself a set of rules and guidelines from religious officials. The deontological frame is also informed by religious officials such as rabbis and religious scholars, who can guide important design decisions such as how strictly the system is configured to map to the Talmud and whether it may speculate. These officials can also provide feedback about the interaction and output of MidrashBot relative to their expert opinions.

The teleological or consequence-based approach can be used to measure MidrashBot by understanding how people feel after interactions. For example, do the interactions produce benefits and/or suffering, based on the feedback of people engaging with the bot? With respect to truth, we plan to ask people whether their interactions with MidrashBot have improved their understanding of the Talmud or confused them, and what (if any) impact this has had on their own “personal truth” in terms of making decisions and living their lives.

The virtue approach investigates the values and beliefs of the people developing the technology and how they appear to users of the technology. While we want to err on the side of involving a group in reviewing these values and beliefs and setting the top-down guidance, we realize that individual developers make innumerable small decisions as they design and code MidrashBot. Our core principle here is to document the goals of the system and the larger technical decisions that impact the experience, and to seek review from both experts and users of MidrashBot.

We see these three ethical frames being used in concert, in a continuous loop of iteration: review, testing, and updating (of code and documentation) to hold ourselves accountable for understanding the ethical position of MidrashBot.

Q: Jeremy, should people querying MidrashBot be concerned about their privacy?

JK: MidrashBot utilizes generative models. For the purpose of understanding people’s interactions, the creators of MidrashBot collect the inputs and outputs of the interactions, without any personally identifiable information. Depending on the generative models used, however, the API providers for those models may also store some information. It’s always wise, when interacting with generative AI, never to disclose highly personal information or facts that you would not want to be generally known.

Q: Shanen, how should I query MidrashBot?

SB: A central goal of this project is to learn about how interactions with AI entities make us feel and the potential for them to affect our spirituality and relationship with truth. The following are categories and examples of the types of questions that you might ask MidrashBot, but please do not feel constrained by these. Feel free to explore everything from the existential (What is the meaning of life?) to the mundane (Can I take an Uber on the Sabbath?).

Questions about Individual Personal Meaning and Happiness

  • What is valuable and how am I of value?
  • How can I achieve happiness?
  • What are my personal, life-orienting commitments?

Broad Questions about Society and the World We Live In

  • Why is there evil and suffering in the world?
  • What can I do to help future generations?
  • Are we humans special among other living things?

Questions about Interpersonal Relationships

  • How do I know what is right and wrong?
  • How should I treat others?
  • Are there special/chosen groups of humans?

Questions about the Transcendental and Otherworldly

  • How did all of this come about and what should I think of it?
  • Is there a “god” or something bigger than all of us?
  • What happens to a person at death?

Thanks, Shanen and Jeremy, for launching this experiment! We’ll look forward to reporting results down the line!


Shanen Boettcher

Shanen Boettcher has completed his PhD dissertation at the University of St. Andrews in Scotland on the role that AI technology plays in the relationship between spiritual/religious information and knowledge among people living in the Pacific Northwest. Previously, Shanen worked for 25 years in technology, primarily as a General Manager of Product Management at Microsoft.  Shanen brings an interfaith perspective to AI&F. He holds a Master of Arts degree in Religions and Education from the University of Warwick in England.
