A new book by one of Stephen Hawking’s primary collaborators, Leonard Mlodinow, starkly raises the question posed in the title. In his book published this year, Emotional: How Feelings Shape Our Thinking (Pantheon), Mlodinow provides a well-written survey of research on human emotions and how they function in human thought. Along the way he addresses an issue people of faith may be especially interested in: the (in)adequacy of AI to take into account the range of emotions that truly make human thinking possible. The book has value for all of us interested in the human condition, even for those of us who cannot agree with his analysis of its implications for computer technology. Let me take you first on a tour of his vision of the emotional world, and then we will get to why I believe it matters for AI and Faith.
Mlodinow wants to undercut myths about human emotion. His basic thesis is that neuroscience has shown us that human information processing cannot be divorced from emotion (p.9). Although there are instances in which emotions are counterproductive for human flourishing, and the mind must repress or redirect them, in general emotions contribute to our thought processes, helping us make the most of our mental resources and aiding our flexibility (pp.200-201). It seems that emotions are not rooted just in the older, more primitive, reptilian parts of the brain. Because the brain’s layers overlap anatomically, the generation of emotions does not appear to be localized in particular areas (pp.18-19).
Research indicates that emotions are connected with the body’s core affect, a mental state that provides information about the body’s general sense of wellbeing, based on data about body systems, external events, and thoughts about the state of the world. They are correlated with activity in the orbitofrontal cortex and amygdala of the brain. The core affect tells the body it is doing well or sounds an alarm, and emotions emerge when we become conscious of these dynamics. As part of humans’ efforts to overcome entropy (the tendency of things to grow more disordered over time until their demise), the core affect wards off threats of disorder to our cells (pp.44-45).
Having established the safeguarding effects of emotion, Mlodinow proceeds to analyze specifically how these dynamics function. He contends that emotions are reactions to our circumstances. They guide our thinking, and even though they dissipate, they contribute to an individualized emotional profile that makes each of us react to life in different ways (pp.158-159). As such, emotions play an essential role in determining our choices, advancing our urge to act (pp.147-148). Beyond contributing to human flexibility and to longevity by warding off entropy, Mlodinow asserts that social emotions are the basis of our morality (pp.79-80).
Emotions also seem related to pleasurable brain chemicals like dopamine which, as I have suggested in a previous article (“Do Digital Tools Threaten Learning, Spirituality, and Well-Being?”, AI and Faith Newsletter), present a real challenge to developing an AI we can truly trust. It appears that dopamine in particular can also stimulate desire, and in that sense energizes our pursuit of things (pp.122-129). I will return to this point in closing, where I consider what Mlodinow’s analysis entails for AI.
For all his praise of emotions and their contributions to human reason and the quality of life, Mlodinow is not naïve. He makes clear that although emotions have aided Homo sapiens in evolutionary processes, some of the most ancient emotions may no longer be appropriate in modern culture. Emotions and actions appropriate to fending off a predator are not very helpful when driving a car or dealing with a difficult boss (pp.38-39). Thus, it becomes important to manage our emotions (p.187).
Emotions that do not result in happiness should also be managed, though Mlodinow wisely and accurately notes that we all have different happiness set points (p.178). He observes that emotions spread from person to person (pp.184-185). His advice, accordingly, is to do what happier people do: spend more time with family and friends, express gratitude, and engage regularly in acts of kindness (p.178). This fits the neurobiological finding that love is good for brain chemistry, as the brain becomes saturated with the good-feeling chemical dopamine (p.180). Mlodinow contends the mind can overcome emotion, practicing a kind of Stoicism.
In harmony with his claim that emotions are the basis of morality, Mlodinow notes research indicating that religious awe, the feeling of being in the presence of something greater than yourself, motivates a broadening of focus from narrow self-interest to that of a larger group (p.88). That dopamine is secreted in the brain during spiritual activities fits the previously noted observation that dopamine facilitates our desire to act.
With this background in the creation, behavior, and benefits of emotion, let’s consider what Mlodinow’s analysis entails for the development of computer technology and AI. He notes that just as emotions can spread from person to person, so they can spread through the Internet; it seems obvious that Internet content can change human emotions (p.186). Mlodinow views computers as “apathetic.” He contends that even the most sophisticated computers can react to a myriad of stimuli, but they cannot initiate independent thought and action, because they are limited by their programming. Bereft of feelings, computers are currently unable to assess novel situations and decide what to do (pp.147-148), and so fall well short of thinking like human beings. Has Mlodinow identified here what may be lacking in AI computers programmed to think like humans?
Some experts on “affective computing” might contend otherwise. The pioneering work of experts like Rosalind Picard of MIT has pushed technology in the direction of recognizing, understanding, and even expressing emotions. Stephen Kleber contended in the Harvard Business Review in 2018 that some machines now have the ability to interpret human emotions (even better than many humans can) and that others can mimic or even replace human-to-human interactions. But Mlodinow doubts that current affective computing will really allow a computer to think in such a way that its judgment is guided by emotion, with the attendant flexibility to adjust.
For people interested in introducing moral judgment to computer analysis, including concepts of benevolence and compassion, the stakes are clear. AI that lacks emotions such as love and benevolence, and that makes no allowance for the monoamines and emotions associated with spirituality and transcendence, will fall far short of the capacities humans ordinarily factor into their decisions every day. Of course, to many advocates of algorithmic decision-making, eliminating such emotional factors is the whole point. But as long as computers depend on programmers to input such values, and programmers are either unable or unwilling to include them, it seems to follow that computer decisions will perpetuate secularism and the status quo. Incorporating these values, either directly through programming or indirectly through the development of sophisticated affective computing, will be an important step toward computers that truly think as we do, with the kinds of emotional benefits that Mlodinow describes.