Interview

The Muslim tech professionals putting Faith at the forefront

Introduction

May 2023 saw the launch of a first-of-its-kind festival for Muslims in the tech sector: the Muslim Tech Fest in Mayfair, London, UK. The festival is the brainchild of Muslamic Makers, a community that has been helping Muslims break barriers and find role models across the tech industry since 2016.

With a network of over 2,000 professionals and chapters in Manchester (UK), Toronto (Canada), and Boston (USA), Muslamic Makers offers free events, workshops, scholarships, and career support, and helps founders connect with investors.

I came across Muslamic Makers as part of my research at the University of Warwick on grassroots faith-led organisations in Europe and North America. While many in the tech sector may consider faith to be nothing more than cultural heritage, the Tech Fest “AI Ethics Panel” demonstrates that, for some, faith can be a framework guiding the development of AI.

The panel, moderated by Junaid Butt (Research Software Engineer at IBM Research), included:

  • Yusra Ibrahim, Software Engineer at Google
  • Nayur Khan, Partner at QuantumBlack (AI by McKinsey)
  • Aleena Baig, Senior Product Engineer at Suvera, undertaking a Master’s in AI Ethics & Society at Cambridge
  • Ahmed Wobi, Co-founder of Tonus Tech, a start-up augmenting mobility through machine learning

Community Involvement in AI

Junaid Butt opened the discussion with the question: Why is it important for everyone to get involved in AI? What ethical considerations matter when it comes to AI, and which matter specifically for minorities?

Aleena Baig pointed out that the public is generally unaware of the dangers of AI. A large portion of media produced about Muslims has been quite negative since 9/11, and it is concerning that ML systems will be trained on this data. This could have negative consequences on people’s lives, such as who can access healthcare and education, or who gets longer jail sentences. Baig argued that it is everyone’s responsibility to identify these problems and address them.

Ahmed Wobi described how, in his healthcare start-up Tonus Tech, it is important for the sampling to be as broad as possible. This matters especially because human bodies are naturally very diverse, and any bias could negatively affect treatment outcomes.

Nayur Khan argued that machines are not biased; humans are. The human factor lies in who captures the data, what data is collected, who builds the algorithms, and who tests and validates them. As a consequence, it is important to get as much representation as possible across the whole process. Khan argued that the Qur’an and other texts provide an ethical framework on many AI-relevant issues, such as how to conduct business and treat people fairly, and emphasised that this distinctive ethical framework of Islam needs to be considered in wider ethical discussions.

Yusra Ibrahim concluded by stating that people need not fully trust the outputs of ML agents: such outputs are, first and foremost, human outputs, coloured by whatever the system was born into. She stressed that it is important for people to understand the technology and how it can be trusted.

The Future of AI Systems

Junaid Butt then considered what an AI critic might ask: What is the difference between AI produced by Muslims and AI not produced by Muslims? What functional similarities and differences would be exhibited?

Yusra Ibrahim pointed out that any robust AI system should not be designed exclusively by Muslims, as it would show bias; any AI system needs to be designed by a whole community of different opinions and cultures. Nayur Khan gave a contrasting opinion, considering the implications of a group of non-Muslims creating a Muslim-specific app such as a “Fatwa GPT”. Such a system would need Muslim scholars, jurists, and theologians for testing and validation. Khan argued that the problem of fairness and representation goes beyond religion, citing the algorithmically-generated student grades in the UK following the COVID-19 pandemic. These systems tended to raise the grades of private-school students and lower those of state-school students, primarily due to a lack of diversity in the training, testing, and validation process [1].

Focusing on the human aspect of ML development, Junaid Butt asked: If the problem is not computational, but rather societal, do we need to fix society first before we can attempt to fix the algorithms?

Aleena Baig replied that it is important for the public to decide where, how, and when AI enters our lives, since its adoption is not inevitable and people are able to choose. Nayur Khan recalled how search engines have continued to evolve, pointing out that even a few years ago a Google image query for “CEO” would return pictures of white males; this has since changed because people spoke out and advocated for themselves and others. Ahmed Wobi referred to a recent arXiv publication [2], pointing out that researchers are now able to identify and quantify the risks of LLMs. At the same time, addressing fairness is a difficult challenge, especially as the AI landscape continues to evolve rapidly. Yusra Ibrahim stressed that it is important to represent different opinions, since homogeneous thoughts and opinions are neither fair nor representative.

In terms of practical solutions, Aleena Baig argued for greater research into “explainable AI”. Large language models (LLMs) like OpenAI’s ChatGPT return “things shaped like answers” that might or might not be correct, so it is important to understand what informs AI decisions. Ahmed Wobi added that it is also important to ask whose responsibility it is to raise awareness of the risks associated with AI: should this responsibility lie with governments or corporations? Yusra Ibrahim replied that corporations should be aware of the limitations of their software and provide end-users with a disclaimer. According to Nayur Khan, even experts in the tech sector should get involved by utilising open source code as an alternative to LLMs run by large corporations. This carries the potential to democratise tech within smaller teams and reduce reliance on systems designed by tech giants.

Junaid Butt then asked how “foundation models” [3] differ from previous models, and how they should be defined.

Aleena Baig replied that although LLMs are not especially new, the novelty comes from increased access to large computational power. Nayur Khan added that foundation models differ from previous architectures mainly in scale, and that models like ChatGPT represent only an incremental step; they have, however, been pitched differently to the public. Khan mentioned that while previous models were more task-specific, foundation models are more general and need more fine-tuning. Yusra Ibrahim added that newer models learn from queries they have never seen before; however, it is important to keep track of these developments so we understand the process.
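To give a concrete sense of what “fine-tuning” a foundation model means in practice, here is a minimal sketch in Python, assuming the open-source Hugging Face transformers and datasets libraries; the model name (distilbert-base-uncased) and dataset (imdb) are illustrative choices, not systems discussed on the panel.

    # A minimal sketch of fine-tuning a pre-trained ("foundation") model for one
    # narrow task, assuming the Hugging Face transformers and datasets libraries.
    from datasets import load_dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    # Start from a general-purpose pre-trained model: the "foundation".
    model_name = "distilbert-base-uncased"  # illustrative choice
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

    # Adapt it to a specific task: here, sentiment classification on a public dataset.
    dataset = load_dataset("imdb")
    tokenized = dataset.map(
        lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
        batched=True,
    )

    # A short training run on a small sample is enough to specialise the model.
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=1),
        train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    )
    trainer.train()

The sketch illustrates the point made in note [3] below: the expensive pre-training has already been done, and the general-purpose “foundation” is only lightly adapted to the task at hand.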

The panel concluded with various ethical concerns and questions, starting with whether AI systems will replace software engineers. Aleena Baig replied that AI is unlikely to take the jobs of software engineers, to which Nayur Khan responded that AI will nonetheless make many tasks easier. Khan further raised concerns such as how our data will be used to train these systems. Yusra Ibrahim added that some key questions remain about whether the technology will be limited to a few major players, and if so, what kind of ideas they will reinforce or distribute through the information they retain.

References

  1. Adam, Karla. “The U.K. used an algorithm to estimate exam results. The calculations favored elites.” The Washington Post (2020).

  2. Kanti Karmaker Santu, Shubhra, and Dongji Feng. “TELeR: A General Taxonomy of LLM Prompts for Benchmarking Complex Tasks.” arXiv e-prints (2023): arXiv-2305.

  3. The term “foundation models” has been ascribed to recent LLMs which can be readily fine-tuned for specific tasks: the pre-trained baseline model is a “foundation” for further training. However, fine-tuning on pre-trained neural network architectures has been a part of deep learning for almost as long as the architectures themselves.

William Barylo

is a sociologist, photographer, filmmaker, and postdoctoral researcher in Sociology at the University of Warwick, UK, where he studies how young Muslims in Europe and North America navigate race, class, and gender barriers from a decolonial and restorative perspective. William also produces films on social issues to disseminate knowledge to wider audiences, including documentary and fiction films that equip students and community organizers with practical tools for navigating the world and help businesses nurture better work cultures. He holds a PhD from the School for Advanced Studies in the Social Sciences (EHESS) in Paris, France.
