For today’s #WhoWeAreWednesday we feature an interview with Dr. William Barylo. William is a Research Fellow in Sociology at the University of Warwick working on Muslim lived experiences in Europe and North America, author of Young Muslim Change-Makers, director of the documentary Polish Muslims and currently working on the action-research project The Diaspora Strikes Back which has been awarded a British Academy Postdoctoral Fellowship. More on: williambarylo.com.
How would you describe your experience with AI?
As a researcher in sociology, my job is to be an observer and an analyst. Through my research I’ve been analysing people’s interactions with earlier forms of AI currently in place – the machine learning algorithms we find on social media. Like many of my respondents, my personal experience as an end user with these has been disturbing, throwing me back to the works of fiction I used to read as a kid.
Algorithms are supposed to mirror either our personal preferences or the society we live in at large. The problem with any reflection of the world we live in is that our world is far from perfect – and a few digital strategies can sometimes amplify a flow of harmful messages coming, in fact, from a minority.
Anyone observing the functioning of these lines of code might have been surprised by how they feed us a specific yet homogeneous vision of society revolving around material aspirations and polished aesthetics – with the consequence of validating us, or not, depending on the way we look, speak, behave, earn, and what we aspire to. This is understandable when we realise that the main companies developing AI are in either the security or advertising businesses.
I therefore wonder if humanity can design a not-for-profit or not-for-security AI which does not aim to influence our choices or gatekeep basic human rights depending on our profile.
The very nature of human existence is found in its nuances. We are creatures that do not work in binaries but rather know how to assess circumstances and potentially make exceptions. Can an AI understand nuances, mercy, or compassion? If someone has to break the law in a matter of life or death, could a machine replace a whole human-led court process and deliver reasonable outcomes? If someone needs healthcare, can a machine replace a doctor who knows that an effective treatment for one individual would not necessarily work for somebody else?
How would you describe your faith background?
I describe myself as an Abrahamic monotheist following the message of Abraham, Moses, Jesus, and Muhammad without a particular affiliation to a church, school of thought, or sect. I would affiliate myself with people of shared values and ethics rather than based on metaphysical beliefs, rituals, or cultural background. I like to say that my church and my community are made of those people at the service of justice and harmony.
I was raised in a traditional Polish Catholic family but only reconnected with faith much later in life. My personal and professional journeys led me to meet and connect with more people of Muslim background in Europe, Asia, and North America, which led me to appreciate Islam, but also the fact that within every system of beliefs, people who understand their faith as an ethical framework (and not just rituals) are rare and precious. I grew up in France in a Polish-speaking family, took German at school, studied English during my university years, and eventually learned Hindi and Urdu for my research travels. Through this, I developed a deeper understanding of how culture and context shape beliefs and society. Thanks to this atypical journey, I feel at home around Christians as well as Muslims around the globe, and even more so with people who share the same values.
What led to your interest in the intersection of AI and faith?
Faith, the way I conceive it, comes with a strong moral framework and ethical compass for all aspects of life. Beyond belief, the way I understand faith is through the concept of stewardship (being a shepherd in Christianity, being a vice-gerent in Islam, etc.). Stewardship comes down to two questions. First, how are we using what has been given to us (in terms of resources, abilities, privileges, etc.) to serve others and everything around us? Second, are our words and actions provoking chaos or restoring harmony?
AI raises the question of power: who is AI serving? Is AI serving humanity, or will we end up serving AI in a dystopian fashion? According to various research works, and as observed by various groups and thinkers, the way AI is currently developed contributes less to improving justice and harmony than to increasing the financial profits of a minority and the social control strategies of some governments. I understand that money and power are perhaps the main religions for many in our modern era. However, this is precisely why I believe that those holding different ethical frameworks must have a seat at the table and a voice in the conversation.
Why are you involved with AI&F?
Having worked on some videos for AI and Faith, it is always a pleasure to give a platform to the wonderful profiles in the team and a visual presence to the initiative. AI and Faith serves the purpose of bringing a different ethical perspective to the conversations around AI with the help of engineers, managers, academics, and community organisers of all faiths and none. While a lot of the conversations revolve around legal, philosophical, and technical issues, when we are talking about designing tools for society there is a need for a social and anthropological dimension. This is why, as a sociologist, I wanted to contribute. How people interact with AI, and what observable impacts AI has on our society, are two fundamental questions that can help steer the development of AI in the direction of human flourishing. Aside from that, it always helps to meet knowledgeable people from various cultural and professional backgrounds who share the same purpose.
How does AI&F affect your work outside the organization?
When I spend time with people at AI and Faith, I feel like I have visited family, and I come away with the fantastic feeling of returning home with more ideas than when I left. The community I find in AI and Faith helps me bounce ideas off people from various professional backgrounds and walks of life. It helps me build bridges between disciplines, networks, and social realities. That means I can include more varied arguments, deeper questions, and richer observations in my research.
What open problems in AI are you most interested in?
The most pressing current matters in AI are questions around algorithmic bias: who do algorithms include and who do they exclude? What do they feed on and what do they feed us in turn? There are lots of debates around where in the development chain these problems occur: is it at the data gathering stage, the coding stage, or the user interface stage? I believe that the roots of the problems and controversies we are currently observing are elsewhere. Just as friends influence each other or parents influence their children, our social, economic, cultural, and professional backgrounds will inevitably induce some form of bias. The question is not how to avoid it, but rather how to manage it and keep it in check.
I believe that more work is required at the intentionality stage: what are our intentions (as employed individuals and as corporate entities or research centres) when developing an AI? Are we uncritically following a chain of command for the sake of paying our bills? Or are we thoroughly thinking it through? Can we even afford the time to do so? Are the right structures and safeguards in place? If not, are we preparing ourselves to cope with the limited systems we produce while not becoming overly dependent? Again, are we going along with a chaotic flow or are we standing firm as stewards in the development of this new advancement?
A big thanks to Dr. William Barylo for taking the time to carry out this interview. Thanks to Emily Wenger for proofreading, editing, and publishing this work.