Hi Muhammad. Please tell us a bit about your background and what has motivated your interest in AI?
I have had a keen interest in foundational questions of life and the cosmos since childhood: Why are we here? What is the nature of life? How is intelligent thought even possible? The possibility of creating a machine that can think like humans was thus part of my fascination with these foundational questions.
I grew up in Pakistan, and it was in the sixth grade that I started reading about Artificial Intelligence and the philosophical debates in Cognitive Science. This was in the mid-1990s, and I had just been introduced to the internet, which proved to be a valuable resource. Another crucial influence was that my father ran a book company that imported technical books from the West; this gave me access to many books while growing up, which cemented my interest.
The penchant for AI has stayed with me ever since. I did my PhD in Computer Science at the University of Minnesota, where my thesis was on using machine learning to model human behavior in massive online games. Afterwards, I worked in applied AI and machine learning across many industries — until I started working in healthcare a few years ago, where I finally found my calling in AI. My current research is thus informed not just by the foundational questions described above but also by how AI can be used to reduce suffering and improve people's lives, hence my interest in healthcare AI.
And what brought you to the Seattle area?
I moved to Seattle in late 2015. I was living in Maryland at that time and I had multiple job offers, including the best one from Groupon with two options – either move to Seattle or to the Bay Area. Seattle was an obvious choice for me, given that the Bay Area is too expensive, and the Pacific Northwest is known for its unrivaled natural beauty.
In addition to your work in computer science at the University of Washington, you’re the Principal Research Data Scientist at KenSci. Tell us about KenSci and your work there.
KenSci is a healthcare AI company based in Seattle and a spinout from the University of Washington Tacoma. The company was founded by Samir Manjure, a longtime Microsoft executive, and Professor Ankur Teredesai, the founding director of the Center for Data Science at UW Tacoma, who saw the need for such an enterprise in healthcare.
KenSci is focused on addressing the twin challenges of improving healthcare outcomes while simultaneously lowering costs. The AI platform developed by KenSci analyzes billions of patient records — electronic health records, hospital charts, billing statements, and more — and finds patterns of interest or makes predictions. These predictions and patterns are then used to extract actionable insights, e.g., to identify patients with diabetes and recommend actions that can potentially reduce their risk, flag patients with high mortality risk, or reduce the risk of hospital readmission.
I think there is something noble about what KenSci is doing. I was working at Groupon on interesting problems like price optimization and an NLP system for review moderation, but I felt that something was amiss. It was a quote from Jeff Hammerbacher, a Facebook engineer, that inspired me to go work for KenSci: "The best minds of my generation are thinking about how to make people click ads." Working at KenSci gave me a greater sense of purpose, and I consider myself privileged to work with really wonderful people there, who embody what it means to be dedicated to improving the human condition and saving lives.
The other motivation was witnessing the last days and death of my father. If AI could have been used to gain better insights into his condition, then perhaps my family would have been better prepared.
My work at KenSci is focused on using AI and machine learning to create healthcare solutions characterized by Responsible AI. This corresponds to AI models and algorithms that are unbiased and fair (e.g., they do not discriminate against an ethnic or racial group); explainable (they are not black boxes, and the models themselves tell us why they are making their predictions); reliable (good enough to be deployed for their intended use cases); and robust (resilient in the face of missing data, with graceful degradation).
You’ve been working for some time on a rather unusual project: an AI of your father with which your young daughters can interact. What prompted this project?
The idea first materialized in my mind when my brother called to inform me that the doctor had just told him our father had only a few days left to live. At that moment I realized that if I had kids one day, they would never get a chance to interact directly with my father or really get to know what a wonderful person he was. This was also based on my own childhood experiences, as I have only vague memories of my grandparents; all four of them passed away before I was five years old. The thought that came to my mind was that even though my father may not be able to interact with my children, I might be able to use contemporary technology to create a system that acts like my father in certain contexts. Although this would never be a substitute for having a grandfather, it might give them a better idea of what kind of person he was. I did not have kids when my father passed away, but now that my older daughter is four, I am seeing this new reality coming to fruition.
How far along are you? What can the AI of your father do at the moment?
I have created a text-based conversational AI system that can, on certain topics, have lively conversations like my father. It can be quite convincing at times. However, there are certain topics on which the AI does not perform as well, and that is when it becomes clear that one is interacting with a simulation and not a real person.
That said, I am currently working on integrating voice with the system, i.e., adding the ability for the AI to talk in the voice of my father. The main challenge has been capturing intonation: the same words uttered by a person may mean different things if the intonation is different. Much of the context in human conversation is embedded in intonation, which is non-trivial to capture at times.
Tell us about your long-term hopes for the AI of your father. What will it be capable of, and what do you hope it does for your daughters?
My hope is that the AI will interact with my daughters over the course of many years and allow them to form positive ideas of, and have positive experiences with, their grandfather. I do not envision this as a finished product but rather something that may evolve over time, given that my hope is that my daughters will also change for the better over time. I envision that the AI will not only be capable of answering questions about my father but may occasionally help my daughters navigate their lives.
There are of course limitations to such an AI, about which I hope to educate my daughters over time. They may miss out on some things, like playing with their grandfather, but they may have something unprecedented. The idea of artificial agents that can mimic humans is difficult for small children to grasp, but it is something that can gradually be socialized. After that it may be uncharted territory.
You’ve said “this project is fraught with ethical and moral dilemmas — to which I do not have all the answers.” What do you believe are the most important of these dilemmas?
It is difficult to choose a single most important dilemma, but if I had to pick one, it would be how the introduction of these technologies on a mass scale is going to fundamentally transform society. Imagine a world thirty years from now where almost everyone has an AI version of a deceased relative. As such artificial entities multiply over time and become increasingly embodied, people may start spending more time with AI while neglecting the living. Add to this mix the ability to edit an AI's personality, and one has the perfect recipe for societal alienation: since the AI is by its nature code that can be manipulated, one could edit the personality of a deceased person's AI, removing the unpleasant parts to create a "perfect" spouse, parent, uncle, nephew, etc.
For me, these scenarios call into question the original premise of having simulations of the deceased. Do we really want to mass-produce such AI and create a society where the (simulated AI of the) dead outnumber the living? We have to weigh this against the ways such technologies can help in the bereavement process. That may be a tough act to balance, but it is a moral dilemma that we as a society may have to address.
Finally, I know the AI project regarding your father has caused others to ask you about the “Be Right Back” episode of Black Mirror. In what ways is your effort similar to, and/or dissimilar from, what we see in that episode?
The Black Mirror episode came out in early 2013, and my father passed away later that year. I was actually not aware of the episode when I first thought about creating such an AI system. When I later found out about it, I was surprised by the many similarities between the episode and the real-world project I was working on. My effort and the narrative of the Black Mirror episode both have their genesis in personal loss and finding a way to cope with it.
I think the primary difference is that the main character, Martha, initially looks to the AI system as a replacement for her boyfriend, whereas for me this is a way of remembrance — an interactive memento of sorts. Also, in my case I am not the primary audience for the system; it is my daughters who are the main interlocutors for the AI. The idea is not to give them their grandfather but rather a glimpse of who he was.
In watching that episode, it seemed there were important differences for Martha in interacting with the embodied vs. disembodied AI of her late husband, Ash. How do you see that issue? If it were possible, would you want your daughters to be able to interact with an embodied version of your father’s AI?
There are orders-of-magnitude differences between text-based, voice-based, and embodied interaction. I think this continuum reflects the progression of memory-keeping technologies that have been available to the populace at large, especially over the last 170 years.
Until about the late 19th century, most people could not see what their parents or grandparents had looked like when they were young. The invention of photography changed that. Similarly, the proliferation of camcorders in the 1990s allowed the public to save and retrieve memories of their loved ones in the form of videos, and the proliferation of smartphones in the last decade has been a game changer.
In my mind, the use of interactive AI systems like the one I have created is a continuation of such technologies rather than a break from the past. Embodiment of such systems does appear to be the logical next step in this evolution of memory storage and retrieval.
I have wrestled with the idea of having my daughters interact with an embodied version of my father's AI. Although that technology is still a few years away, interacting with an embodied version of the AI might create a false impression of interacting with the real person. This may be especially true for young children, who are more impressionable. That said, if they were much older, I might change my mind. Humanity has to be vigilant in dealing with technology so that we do not become permanently enchanted by the artifacts of our own creation.