Our ability to self-consciously “think about our thinking” is one of our species’ foundational distinctives. Social psychology research over the past 40 years has up-ended our understanding of the way we think. It turns out we are much more likely to employ mental shortcuts than emulate the cold logic of Sherlock Holmes.
Understanding better how we humans think is important as we develop artificially intelligent “thinking machines” for at least three reasons. Two have to do with the quality of thinking we are seeking to create. First, our use of mental shortcuts has a suppleness and flexibility that cold logic alone cannot emulate. We’d like to pass that along. But second, and on the downside, our shortcuts rely on many biases and assumptions that result in choices and actions that in turn produce the data on which machine learning goes to school. If we want machines to think better than we do, we need to account for such biases in the data our own thinking produces.
The third reason, and central to this article, is about how we manage our new thinking machines. I suggest that we will better manage AI-driven technologies for human benefit and not destruction by thinking well across time and space, that is, by using wisdom derived from experience over time and grounded in the reality of cultural differences across geography (space).
Let’s first consider what this kind of thinking is, and then test this proposition against the role of technology in the three great disruptive forces roiling our present moment – COVID-19, racial injustice, and the erosion of truth in politics.
Thinking, Fast and Slow
Understanding how we think has been a long-term project for our species, traceable in a three-millennium-long line from classical philosophers like Plato and Aristotle, through Enlightenment philosophers like Descartes, Kant, and Hume, to the current work of social psychologists like Daniel Kahneman, Jonathan Haidt, and Dan Ariely. The last group is especially concerned with whether we think intuitively, rationally, or through some combination of the two.
In his perennial best seller Thinking, Fast and Slow, Kahneman posits two systems of thinking that inform our judgments and decisions: System 1 is fast, intuitive, and emotional; System 2 is slower, more deliberative, and logical. Each approach has its merits in differing circumstances. Kahneman’s work has been popularized by other best-selling writers (e.g., Michael Lewis in Moneyball and The Undoing Project and Malcolm Gladwell in Blink) and applied in specific disciplines like economics through the pioneering Freakonomics books and podcast, as well as Dan Ariely’s Predictably Irrational. All of these authors make the case that we can’t follow our parents’ urging to “make good choices” without understanding the shortcuts (“heuristics”) we use in our intuitive thinking and adjusting for the biases inherent in them.
Jonathan Haidt extends this work to the question of the origins and methodology for making good moral judgments. He argues that everyone’s moral judgment is based primarily on intuition rather than conscious reasoning. But recognizing that not all cultures think alike, Haidt posits a theory of five innate moral foundations to explain moral differences across cultures.
A great synthesis of Kahneman’s and Haidt’s work by a person of faith is Baylor University humanities professor Alan Jacobs’ How to Think: A Survival Guide for a World at Odds. Jacobs challenges us to set aside the lazy habits spawned by our age of emotionally intense social media and return to the lost art of actually thinking. Finding truth and fault in both Kahneman’s and Haidt’s research and theories, Jacobs offers some basic rules for thinking well, drawn from essayists and philosophers, in ways that will allow us to better live together. My lasting takeaway is his point that there is no such thing as “thinking for ourselves” – everything we think about is influenced in some fashion by community and the forces around us.
All of this is essential reading for AI and Faith, as we seek to mitigate risk and enhance benefits from AI-powered technologies using values based on religious belief.
Matching Fast and Slow Thinking to the Time/Space Continuum
The fast/slow dichotomy is based on speed, and hence time. Current technology thinking considers this speed in units of minutes, hours or days, even for slow thinking. For the technology world, thinking slow seems to be allowed for only as long as it takes to collect and analyze data through AI-powered tools like machine learning.
But bring the humanities into consideration and time broadens out to as long as the importance of the decision and other constraints will justify. This slower time allows the introduction of wisdom, definable as adding experience to knowledge for the purpose of making good decisions and judgments. Experience distilled into wisdom in this way is what we say has “stood the test of time.”
Then bring in the major religions adhered to by our AI and Faith Founding Members (Judaism, Christianity, Islam, Buddhism, Hinduism) alongside the other humanities disciplines, and the process now includes an element of received wisdom: both foundational writings like the Bible and the Koran that are believed to have a divine origin, and thousands of years of applying those writings to the challenges of living.
Faith traditions’ answers about human distinctiveness and the duties we have to each other have been applied across time and space to manage risks and rewards from a wide array of earlier disruptive forces – natural forces like disease, drought, and cataclysmic geology; and man-made disruptions like weaponry (e.g., crossbows, cannons, machine guns, missiles); means of communication (language, writing, printing, telegraph, telephone); political/economic/social forces (feudalism, rule of law, slavery, capitalism, socialism); transportation (horses, railroads, airplanes, rockets) and sources of energy (water, steam, gas combustion, nuclear). The application to these disruptions of received wisdom and values and ethics derived from it has produced an enormous trove of experience that can be equally applied to today’s rapidly emerging technology innovations.
The technology world for the most part seems to view slower, wisdom-based thinking as undesirable friction in its efforts to be the first disruptor, most especially in the past decade of “moving fast and breaking things.” And yet a conversation I had just last week with Cathie Wood, the CEO of ARK Invest in New York, illustrates the value of such alternative thinking.
Wood built ARK from $0 in 2014 to $19 billion in assets under management (AUM) today, using analysts trained outside the usual financial community mold. ARK’s investment strategies, per its home page, “focus on public companies that we expect to be leaders, enablers, and beneficiaries of disruptive innovation” and aim for “long-term growth with low correlation to traditional investment strategies.”
As Wood sees it, the vast majority of investment trading has fallen fatally under the spell of blindingly fast, narrow-band quantitative thinking. Such “quants” have wrung every last nanosecond of speed out of data transmission in a competition to be first to execute on data trends, on the basis of indecipherable algorithmic decision making. Meanwhile, index funds have come to dominate retail investment trading based – by design – entirely on past performance. When a “black swan” like a pandemic comes along, the index funds have little option but to ride out the wild market fluctuations. The quantitative algorithms, having no experience with such an event, trade on criteria that fail to take into account unique opportunities that dynamic human analysis, drawing on a broad-spectrum view of history and experience, can better foresee.
Wood cites the example of 2U (TWOU), a successful online learning platform with cash reserves insufficient to satisfy the conditions that the trading algorithms demanded as a filtering criterion for holding the stock when COVID disruption finally caught the market’s attention in March. As a result, TWOU plunged nearly two-thirds in value within a few days. ARK’s human analysts by contrast recognized that social distancing from the virus presented 2U’s online platform with a golden opportunity that would attract partners and financing. ARK then bought TWOU aggressively before the market rebounded in April. Through this kind of analysis, ARK’s AUM increased from $14 billion to $19 billion just in the last three months.
Matching fast and slow thinking to the continuum of space – in the form of particular cultures and places – is equally important for wise management of AI deployment. Much of the challenge that Big Tech’s global ambitions face today is a misfit with culture and legal systems. It turns out EU regulatory authorities have a very different set of assumptions about privacy and trustworthy AI than those of the largely libertarian founders in Silicon Valley. China has another set of values and assumptions altogether, in the other direction. The one-size-fits-all design of the Internet and subsequent social media in the ‘90s and ‘00s has collided with these differing cultural and legal assumptions to create immense challenges for Big Tech today. As with wisdom that has stood the test of time, nuanced thinking about differences in culture across space is not a friction to be avoided, but a necessity to be embraced and thoughtfully addressed.
Bringing Wise Thinking Across Time and Space into Today’s Current Huge Disruptors
Our current moment in time is being convulsed nationally and globally by the surprising resilience of the COVID-19 virus; a (hopefully) historic shift in our national conversation on systemic racial injustice; and a presidential election in which falsehoods are playing an outsized role.
COVID-19 is not only outrunning the US’ fitful response, but it appears to be accelerating fundamentally disruptive change across industries throughout the societal spectrum by almost a decade. The virus is testing our national identity in ways we could not have imagined at the beginning of 2020. It has become apparent that an embrace of extreme libertarian, free-market thinking, limited government, and personal liberty over the past 50 years in a subset of America’s states, together with a geographically fragmented public health system, has produced a terrible posture for fighting COVID. Unlike our politics and public health authorities, COVID has no spatial boundaries. We now know what was needed – what Taiwan and New Zealand had but the US lacked – a national response by people dedicated to doing what the best scientific evidence required for the entire community. Now, as contact tracing can benefit from location-based personal information, we are also ill-equipped to weigh the tradeoffs of economic benefit and a return to work against civil liberties like privacy. The same may be true for balancing AI-powered research and testing technologies in vaccine development against lessons from rushed vaccines in the past.
What have we learned about wise decisions across time and space from the huge conversation on racial injustice arising out of the latest unjust deaths – those this spring of George Floyd, Ahmaud Arbery, Rayshard Brooks, and Breonna Taylor? A strong case can be made that Steve Jobs’ cell phone design decisions in the late 2000s, which placed highly functional video cameras in the pockets of people across America, coupled with the power of social media to instantaneously, cheaply, and widely distribute the resulting images, have finally produced a weight of undeniable evidence that cannot be ignored. Simultaneously, a powerful journalist-led retelling of our nation’s four-century-long history of racial injustice (the New York Times’ 1619 Project) has created a new popular framework for evaluating the true historic meaning of Confederate statues, Dixie flags, and Lost Cause/white supremacist narratives. But, as with a technology-driven response to COVID, a key question remains: can we deepen, widen, and render permanent this beneficial change of heart, while simultaneously setting boundaries on the use of similar technology (ubiquitous cameras, AI-driven facial recognition, and other tools of policing) so that our whole society does not become a variation of China’s surveillance-powered social control system?
Our politics of winning elections at any cost to truth, waged on the same social media platforms that enabled the racial injustice conversation, has placed Big Tech companies in a historically acute quandary. Do they become censors of the public dialogue to preserve their corporate models, and if they do, will it cost them the “neutral bulletin board” status under Section 230 that facilitated their rise in the first place? An assessment is called for across time of the original and continuing utility of Section 230, coupled with an assessment across space, including globally, of Big Tech’s influence on speech today.
In sum, as Alan Jacobs writes in How to Think, reflexive reaction is easy; reflective thinking is hard. But reflective thinking is also essential to fostering the world, in time and space, that we wish to live in.