“Choice” and AI

A number of recent books and articles have highlighted to me how important the concept of “choice” is in our consideration of the opportunities and risks posed by AI-powered applications and technologies.

For starters, there is the foundational question of whether we even make actual choices. The deeply materialist, evolutionary-driven model that Yuval Harari serves up in 21 Lessons for the 21st Century (2018) says, basically, no. Free will, and the whole idea of choice, is a fiction. Instead, we are a mass of biochemically and genetically determined hot buttons that are pushed throughout our living days by the circumstances and environmental forces we encounter. As Harari posits in his lesson on Liberty (at 47 – no irony apparently intended), “scientific insights into the way our brains and bodies work suggest that our feelings are not some uniquely human spiritual quality, and they do not reflect any kind of free will. Rather, feelings are biochemical mechanisms that all mammals and birds use in order to quickly calculate probabilities of survival and reproduction.”

To me this “proves too much” as we used to say in the courtroom.  Even if it could somehow be proven true, what’s the point of seriously embracing this model?  It’s like those thought experiments about living in the Matrix.  It makes for entertaining science fiction and television, but it’s nothing you can actually live and flourish with.  If for no other reason than the preservation of sanity, I “choose” to reject this idea that I have no free will.  If I’m wrong, I won’t suffer for it!

The Elephant in the Brain: Hidden Motives in Everyday Life (2018) by software engineer Kevin Simler and behavioral economist Robin Hanson similarly serves up a strong dose of evolutionary biology to make the case that everything we do, whether seemingly altruistic or overtly self-serving, is for the purpose of gaining improved social status and the enhanced opportunities for successful mating and dominance that come with it. In fact, they argue, we are best at this life game when we pretend or fool ourselves that our motives are otherwise. Examining everything from politics to art to religion, Simler and Hanson seek to show that our choices are far more about looking good than being good. MIT researcher and writer Andrew McAfee rightly blurbs that this is the “most uncomfortable self-help book you will ever read.”

Simler and Hanson do not contend that we can’t make selfless choices, but they have no particular suggestions for how we do so – beyond taking into consideration the elephant of our hidden, self-serving motives. They do advise from personal experience that discussions of their work are likely to kill the mood of a dinner party conversation – so best not to dwell on it in polite society!

While reading The Elephant in the Brain, I couldn’t help but think how its assessment of human nature fundamentally aligns with the principles of Calvinist theology. Five premises, known by the acronym “TULIP,” make up the basic theological principles of John Calvin, the 16th-century father of Protestant Reformed denominations like Presbyterianism. The T is for “Total Depravity.” Needless to say, not many sermons are preached on this doctrine in the modern church marketing era, especially among country club Presbyterians. Indeed, you could say it is the “elephant” in Presbyterian churches as much as in broader society. But here in contemporary behavioral economics and neuroscience theory, we find it rearing its trunk and trumpeting afresh! Hurrah, because we have abundant evidence across the Internet every day that depravity is indeed a substantial part of the human condition. Fortunately, though, our theology does not end there but continues into a message of grace and the power to change.

But evolving themes in the conversations around appropriate AI suggest that we are not limited to a future as choiceless and selfish automatons. Two rapidly developing strains of conversation carry the same message: we end users of technology can, and must, exercise more choice in our uptake of AI-empowered technology.

One strain concerns the idea that we will be called to make “ethical settings” when we acquire new technology.

AI Ethics version 1.0, as reflected in early-stage anthologies like Robot Ethics (2012), seemed to revolve incessantly around the “trolley problem.” This is the easy-to-conceive question of whom a driverless car should be programmed to hit in an unavoidable collision: the mother pushing the baby pram, the old lady just starting to cross the street, or the tree by the road that would cause the car to miss the pedestrians but risk killing the car’s occupant? It’s called the trolley problem because the classic thought experiment imagined a runaway trolley that could be diverted onto a different track at the last moment by throwing a switch. (This archaism and longevity resemble Asimov’s Three Laws of Robotics, which date to a 1942 short story.)

The early underlying assumption for the unavoidable car crash problem was that the car manufacturer and its software engineers would program the answer to this dilemma, or the learning from which the car itself would choose, with more or less transparency. But more recently, the discussion has advanced to a consideration of shared responsibility, including the possibility that, just as drivers can now choose “eco/normal/performance” drive-mode settings, occupants of future autonomous cars may be able to make “who dies” ethical settings. See, e.g., Loh and Loh, “Autonomy and Responsibility in Hybrid Systems,” in Robot Ethics 2.0 (2017), and Patrick Lin, “Here’s a Terrible Idea: Robot Cars with Adjustable Ethics Settings,” Wired, August 18, 2014, http://www.wired.com/2014/08/heres-a-terrible-idea-robot-cars-with-adjustable-ethics-settings/. Arguably, it would be in the manufacturer’s legal interest to push responsibility for choosing among a range of ethical preferences and performance tradeoffs onto the owner or human occupant of the car as much as possible.
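To make the idea concrete, here is a purely hypothetical sketch, in Python, of what an owner-selectable ethics setting might look like if reduced to code. The setting names, risk weights, and the Maneuver structure are invented for illustration only; nothing here reflects how any actual manufacturer plans collision responses.

```python
# Hypothetical sketch: an owner-selectable "ethics setting," analogous to
# today's eco/normal/performance drive modes. All names and weights invented.
from dataclasses import dataclass
from enum import Enum


class EthicsSetting(Enum):
    PROTECT_OCCUPANTS = "protect_occupants"      # weight occupant safety most heavily
    MINIMIZE_TOTAL_HARM = "minimize_total_harm"  # weight all lives equally
    PROTECT_PEDESTRIANS = "protect_pedestrians"  # accept more occupant risk


@dataclass
class Maneuver:
    description: str
    occupant_risk: float    # estimated probability of serious harm to occupants
    pedestrian_risk: float  # estimated probability of serious harm to pedestrians


def choose_maneuver(options: list[Maneuver], setting: EthicsSetting) -> Maneuver:
    """Pick the maneuver with the lowest weighted risk under the owner's setting."""
    weights = {
        EthicsSetting.PROTECT_OCCUPANTS: (0.8, 0.2),
        EthicsSetting.MINIMIZE_TOTAL_HARM: (0.5, 0.5),
        EthicsSetting.PROTECT_PEDESTRIANS: (0.2, 0.8),
    }
    w_occ, w_ped = weights[setting]
    return min(options, key=lambda m: w_occ * m.occupant_risk + w_ped * m.pedestrian_risk)
```

Even this toy version makes the liability point plain: whoever selects the weights owns the outcome.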

The point is that as AI-powered technologies become real-world problems and not just thought experiments, we can expect this push-pull of choice and accompanying responsibility to become a veritable game of tag, with choice becoming more a liability than a privilege.

As to new “choices” and social media, the “attention resistance” movement celebrated in Cal Newport’s Digital Minimalism: Choosing a Focused Life in a Noisy World (2019) offers a potentially restorative set of practices for pushing back against Big Tech’s capture of all of our unspoken-for moments. Newport, a computer science professor at Georgetown and author of the very popular 2016 book Deep Work, is becoming the careful spokesperson for thoughtful engagement with media. He acknowledges and cites freely the pioneering work of Tristan Harris and others in this area, but has his own longstanding series of books on the practice of “focus.” “Digital minimalism” is his coined phrase for a philosophy of life that acts as “a human bulwark against the foreign artificiality of electronic communication, a way to take advantage of the wonders that these innovations do in fact provide . . . without allowing their mysterious nature to subvert our human urge to build a meaningful and satisfying life.” (252)

For at least three reasons, Newport is more than just a Marie Kondo for digital practices. First, he makes no bones about the fact that consumers of social media are in a David and Goliath battle with corporate behemoths whose entire business models and near-trillion-dollar capitalizations rest on defeating conscious limiting choices, i.e., “Your Time = Their Money.” Second, he convincingly demonstrates how surrender to a life of constant “likes” leads to low-grade anxiety and unhappiness. Finally, he grounds one’s choices about social media engagement in life values and a clear-eyed accounting of the opportunity cost of an exclusive diet of digital “fast food.” He advocates a 30-day cleanse, followed by extremely careful choices about which aspects of social media to reengage, together with specific and practical ways to set boundaries on future use.

All of these books and articles demonstrate that we already have real choices to make: about how we perceive ourselves as fully endowed humans, about the values underlying our current engagement with AI-powered, attention-grabbing applications, and about the life-and-death choices we may be required to make in the ordinary adoption of coming technology. The need to make such choices makes systems of values that have been considered and developed for thousands of years more relevant than ever.
