Overview
Today we feature an interview with Ayatollah Seyyed Mohammad Ali Ayazi. Ayazi is a prominent scholar in Qom, the religious clerical capital of Iran. As an Ayatollah, he is granted the right to interpret the Qur’an independently and issue legal opinions (or fatwas) on various issues. He is an important advocate for human rights in Iran and favors progressive reform while opposing theocratic dictatorships.
AI can make robust predictions of human behavior. Is it beneficial to allow AI to guide and influence our behavior?
Predictive algorithms can be fruitful for human guidance, but we must keep several potential challenges in mind. First, AI applied in our society will be a turning point in the history of human intellectual life. The question is whether the ability of an AI to make decisions is compatible with human free will. AI should not deprive mankind of its authority. The concern is that AI will put individuals in a passive decision-making mode that deprives them of determination and free will. Instead, AI forecasting arranges a person’s life without him realizing it, like a train on a track that cannot change its course. He is nudged to choose a predetermined option that is based on biased information. As a result, man becomes a machine among other machines that make decisions for him. This places him in a multifaceted and bewildering complexity in which his will and conscious choice become meaningless.
About 50 years ago, Dr. Ali Shariati began a conversation in his book Man and Islam known as the “Four Prisons” debate. Shariati argues that man has four predestinations, or “prisons”. The first is the prison of nature; that is, the genetic lottery. The second is the prison of historicism; that is, the influence of history, geography, and cultural assumptions. The third is the prison of sociologism; that is, the social status and cultural mores he inherits. Finally, the worst prison is the one a man creates for himself. The question remains whether AI will lock mankind within a fifth prison.
Algorithms depend on past performance to predict future behavior. If instead predictive algorithms had limited exposure to past performance and focused only on future rewards, prediction would become increasingly difficult. We do not know whether AI innovations will be compatible with the structure of human existence, personality, and characteristics. Perhaps, on the contrary, they will create something new that we cannot understand or reason about. Furthermore, training predictive algorithms creates an invisible form of governance, and we should be aware that this control rests with companies like Google, Microsoft, and Twitter. Information giants can guide and determine governmental changes invisibly, as in the events surrounding the 2016 US Presidential election. If extensive oversight arises from democratic governments instead, what role should the news media play?
When the industrial revolution took place, it drastically shaped society and culture. Studying our own history, we can analyze these social and behavioral changes across generations. When we reached the technological revolution, it once again brought about dramatic societal change. Today, there is a stark demarcation between generations who use mobile phones and social networks and previous generations who do not. The knowledge, information, language, and behavior of a generation that is dependent on this technology are different from those of previous generations. The religious and moral implications also require careful analysis.
As AI becomes more widespread, we will witness the changes it will cause in society. It is unclear whether the ever-growing amount of data will in fact allow AI to continue making effective predictions. I am optimistic that such an AI can be created and used for good, despite humanity’s history of innovations that have harmed it, such as the atomic bomb. Even with the atomic bomb, we witnessed its destructive capacity and found a way to control it. We have mitigated many dangers associated with nuclear weapons, although there remains the concern that a bad actor will take advantage of them. I am optimistic that even if AI creates a threat for some people, humanity will eventually control it. Even so, the threat of AI misuse will continue, and its predictive analysis of human beings could be dangerous.
Consider the example of football. Bookies familiar with the teams run calculations and determine bets, but their predictions are not always correct. Game-changing upsets are always possible, yet they still surprise everyone. Compared to football, AI makes predictions about situations that are far more difficult for humans to consider. It is concerning that an AI may assign a low probability to a certain outcome, which is therefore overlooked, even though that outcome may be devastating in the real world. Proponents of AI rely more on its potential to empower people. While AI can empower people, it is unclear how these algorithms can be made compatible with ethics, spirituality, and social justice.
Smart robots can help humans make better decisions by monitoring and guiding them. Do you consider this kind of robot-human interaction useful (or even allowed)? Under what circumstances?
We must first specify the types of interactions between robots and humans, and how they fit within a framework of helping people. How does AI intersect with human decision making? What is the goal of the robot when helping the human? When large companies with distinct political and economic interests build AI, are these systems truly helping humans to make better decisions?
Considering this issue from our perspective, we see a capitalist system looking after profit and power. If there is no profit, companies will not operate. For example, Elon Musk’s companies have recently become an economic and political soft power. Looking towards AI, is it possible to instead build a model that provides production improvements while simultaneously supporting human flourishing?
Is it possible that companies may lead us to make poor decisions instead of good and useful ones? Large companies aside, it is possible for people to abuse these algorithms. Those lacking an ethical framework may use AI to derive predictive information with malevolent intent. These abuses have occurred in the past with other technologies as well. When the Internet became popular, some groups formed to sell weapons and drugs. Others abused the Internet for other immoral purposes. Today, the world of sex has become a means to disfigure women’s dignity, and a woman’s body has become an advertising tool within the capitalist system.
To what extent should AI algorithms be able to access private information? What human rights issues should be considered with AI in this regard?
The protection of private information is the right of every human being. Principles of human dignity and social justice require that private information be kept private, even during the implementation of social policies. The principles of dignity and justice should be the basis of scientific and economic activities.
Regarding privacy, it is not clear whether it is possible to plan and predict using such large quantities of information. Hacking can be used to access personal information. Protecting private information is, again, the right of every human being. When organizations and companies determine the desires of individuals in order to drive monetary or social interests, this is a source of moral concern. We must also distinguish between a person’s public and private information. I am concerned about personal information being misused, especially since AI may abuse that private information. AI should be constructed to reduce the chance of abuse due to data misuse.
How can intelligent robots be used to understand the behavior of individuals and personalize jurisprudence rulings for them? What should AI know about each person to personalize a jurisprudential decision?
Using intelligent robots for ijtihad and the inferential work of determining one’s duty is an important potential application of AI in a religious environment, but it would require a comprehensive evaluation of AI models. Consider a robot that can provide appropriate answers to all questions in the field of religious practices. The robot acts similarly to a personal doctor. When presented with a problem, the intelligent robot directs a treatment process and guides the individual toward recovery. In the context of jurisprudence, such a robot should be easily accessible and capable of providing comprehensive responses in a way that preserves the privacy of the individual.
Intelligent robots that understand human behavior and personalize jurisprudence rulings can be made practical if they cover all aspects of legal rulings in the context of religion, as guided by votes and opinions within those religious communities. Jurisprudence rules should be formed on a moral and legal basis, especially rules based on individual and social ethics. All religions aim to strengthen specific spiritual foundations while helping others. Rules can relate to a person’s private behavior as well as his social behavior. Jurisprudence rules pertain directly to moral principles and seek to uphold these principles.
If an intelligent system is created that can offer advice to individuals, it could be useful in situations of mental crisis.
When considering the objectives of such AI models, a practical example is AI systems or robots for medical applications. It is not the case that everyone with a headache is given the same prescription, nor can everyone with a certain malady be treated with the same medication. Medical diagnosis is also an important function for AI in medicine, and such models incorporate personalized treatment options for individuals. The same logic can be applied to religious rulings.
A lack of personalized rulings issued by jurists has been an issue for a long time. For example, suppose someone became a Muslim but was not prepared to accept his religious duties. When jurists make a ruling, they are to take his condition and readiness into account. Regarding the expectation that he give financial aid to the poor according to his financial situation, jurists are to give more tailored instructions. If an AI could make assessments according to a person’s status and issue specific instructions to him, it would affect the development of religious practices in the private sphere. Such an AI would assess a person’s status and issue individually tailored rulings based on the principle of expediency. Personalized treatment is fundamental in private jurisprudence, just like personalized medical care.
An AI should also be capable of suggesting specific treatments for moral crises, mental problems, and personal deficiencies. Regarding the study of religious literature, these tasks fall to a mentor. In Sufism, the Qutb will give instructions appropriate for various stages of spiritual growth. However, these decisions of jurisprudence must be based in moral principles. Akbar’s jurisprudence is the basis of rulings in Islam, and if a directive does not have moral implications, it is meaningless. Therefore, an AI must provide a purposeful, self-aware, ethical framework to support every person. If this is the case, it will enhance our understanding in the field of jurisprudence.
On the other hand, AI need not be limited to the application of jurisprudence. AI should be capable of helping people and providing guidance beyond practical moral decision-making. Outward actions of the body can be involved in decision-making, but when it comes to the mind and conscience, AI must be able to recognize the diversity of thought and belief. These AI systems should be capable of better understanding the conditions, lifestyles, situations, history, geography, and culture of nations so that interaction between the AI and the human can feel natural.
In the Qur’an, we come to understand that people are not of the same degree and capacity of faith. Everyone has different levels and layers of belief. It has been said that God asks each person to do his duty according to his ability. Therefore, an AI can obtain information from a person and provide him with personalized religious content. A faith-based AI should be able to help a specific person of faith and guide him in the practice of religious orders and moral excellence.
Acknowledgments
A big thanks to Ayatollah Seyyed Mohammad Ali Ayazi for taking the time to carry out this interview. Thanks to Sadegh Aalizadeh for hosting the interview. Thanks to Marcus Schwarting and Emily Wenger for proofreading, editing, and publishing this work.
References
Weniger, Sven, and Michael Marek. “Ayatollah Ayazi aus dem Iran: Gegen die theokratische Diktatur”. Deutschlandfunk, March 25, 2018. https://www.deutschlandfunk.de/ayatollah-ayazi-aus-dem-iran-gegen-die-theokratische-100.html.
Abrahamian, Ervand. “Ali Shariati: Ideologue of the Iranian Revolution”. In Edmund Burke and Ira Lapidus (eds.), Islam, Politics, and Social Movements. Los Angeles: University of California Press, 1993. First published in MERIP Reports (January 1982): 25–28.
Shariati, Ali, and Fatollah Marjani. Man and Islam. North Haledon: Islamic Publications International, 1981.
Footnotes
Dr. Ali Shariati Mazinani was an Iranian sociologist specializing in the sociology of religion. He has been called the “ideologue of the Iranian Revolution”.
For North American readers, football refers to soccer in this context.
Ijtihad is a legal term for an expert in Islamic law independently determining the solution to a legal question.
Literally translated from Arabic, “qutb” means “axis”. In Sufism, a Qutb is a spiritual leader with a divine connection to Allah.