Kenneth Cukier is the deputy executive editor at The Economist, following two decades at the paper as a foreign correspondent, technology writer, data editor and commentary editor. He is the coauthor, with Viktor Mayer-Schönberger, of the bestselling book “Big Data,” which was translated into over 20 languages, and, with Viktor and Francis de Véricourt, of “Framers,” on AI and mental models. Previously Kenn was the technology editor of the Wall Street Journal Asia in Hong Kong and worked at the International Herald Tribune in Paris. He was a research fellow at Harvard’s Kennedy School of Government and an associate fellow at Oxford’s Saïd Business School. Kenn was a board director of Chatham House from 2016 to 2022 and is a member of the Council on Foreign Relations. He attends the Quaker Meeting House in Richmond, England. Read his interview with AI and Faith’s founding member, David Brenner.
David: Kenn, we’re delighted to have you join AI&F as an Advisor. In your several decades as a technology journalist, and as The Economist’s deputy executive editor, you’ve had an amazing window on how technology has impacted global business, politics and society. What has surprised you most over this period, and do you see this surprise as a net positive, negative or neutral development?
Kenneth: It’s an honor to be a part of AI&F and to contribute to the community: the themes we are considering are among the most important in the world, as you know. What’s surprised me most about technology and society over the past few decades has been the ideological fault lines that exist: those who are naturally pro-tech versus anti-tech (utopians and doomers), without discriminating among subtler questions like “what tech specifically?” and “for what usage?” and “by whom?”. In the age of invention during the 1800s, technology was feted. After the gas attacks of the first world war and the atomic bomb of the second, technology was regarded as destructive. So the lack of discernment about when to welcome tech and when to be cautious is odd. I believe what Reinhold Niebuhr said about religion applies to technology: it makes good people better and bad people worse.
David: In 2013 your book “Big Data,” co-authored with Viktor Mayer-Schönberger, became a bestseller, and your 2021 book with him and Francis de Véricourt, “Framers: Human Advantage in an Age of Technology and Turmoil,” was critically well received. Are there common threads in these books that carry forward to your key interests today?
Kenneth: You’ve put your finger on something powerful: the second book, on mental models, is in effect a sequel to the first, on AI. Although data can transform how we live, work and think, its use doesn’t happen in a vacuum: it relies on a model. In one sense, that model is a statistical approach. But in a deeper sense, it is a mental model: a way of looking at the world. So in one “frame,” a rainforest is worth more when it’s cut for timber than when it’s acting as the lungs of the planet. How we look at the world is essential to applying AI in useful ways. My next book, which I’m writing solo, advances the argument further, looking at spirituality and AI: why we can do things machines cannot because of our capacity for the transcendent, and a wisdom born of what Pseudo-Dionysius the Areopagite called apophatic theology, summed up nicely centuries later in the manuscript entitled “The Cloud of Unknowing”. AI uses information; humans, at our best, find answers in the still, small voice within.
David: I was pleased to read “Framers” because as I grow older I’ve been asking, “What kind of model of the world am I carrying in my head, and how can I continuously improve it?” I’m thrilled your book addresses this. In the opening chapters, the book describes frames as our always-on mental models that “determine how we understand and act in the world”. While they usually operate in the background, your book foregrounds them as tools for better decision-making, e.g., “the right frame applied in the right way opens up a wider range of possibilities, which in turn leads to better choices” (p. 5). Please tell us about the research that supports your work and what is driving it.
Kenneth: Over the past few decades a quiet revolution has taken place in decision science, psychology and cognitive science. The idea of “mental schemas” or “cognitive templates” or “frames” that we carry around with us at all times has gone from being seen as a basic feature of cognition to a muscle or tool that we can be aware of, develop, strengthen and apply in deliberate ways. People in high-stakes decision-making roles, like military planners and investment professionals, are aware of these techniques and apply them to hone their decisions and avoid bad ones. The book aimed to expose this so that “the rest of us” can benefit from the approach. So we looked at the planning behind a famous commando raid that rescued hostages, the “visualization” techniques of high-performance athletes and even the creative constraints that Dr Seuss used to write the classic children’s book “Green Eggs and Ham”.
David: The human advantages over AI you write of in “Framers” are the so-called “three Cs” — understanding causality, generating counterfactuals, and applying constraints. Until I read it, I did not sufficiently appreciate counterfactuals. Thanks for that alone! The book predates the rise of generative AI and the revolution of LLMs, GPTs, and retrieval-augmented generation (RAG), a technique for enhancing the accuracy and reliability of generative AI models with facts fetched from external sources. With these extraordinary developments, do you still believe humans hold an advantage in all three of your Cs?
Kenneth: Absolutely. In fact, those human advantages are clearer than ever. LLMs hallucinate, underscoring the fact that they have no inherent sense of causation, and their counterfactuals have no constraints, just an explosion of unboundedness. Yet LLMs also work fabulously. Now ask yourself: why? When LLMs produce strong answers, it’s because of the training data, which itself embodies people’s mental models, containing causality, counterfactuals and constraints. Human frames are part of what makes AI systems work. In fact, the very process of training an LLM has a second stage called “reinforcement learning from human feedback” (RLHF), in which people score test responses, downgrading bad ones and upvoting good ones, to “fine-tune” the model. That’s the very essence of “constraints”!
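[Editor’s note: for readers curious what that human-feedback stage looks like mechanically, here is a minimal, hypothetical Python sketch. The data, features and function names are invented for illustration; real RLHF trains a neural reward model on annotators’ pairwise preferences and then optimizes the LLM against it with reinforcement learning, but the core idea (preferred responses should score higher than rejected ones) is the same.]

```python
# A toy illustration of the human-feedback stage Kenn describes: annotators
# compare pairs of model responses, a simple Bradley-Terry-style reward model
# is fitted to those preferences, and the fitted model then ranks new candidates.
import math

def features(response: str) -> list[float]:
    """Toy hand-crafted features standing in for a learned representation."""
    words = response.split()
    return [
        len(words) / 50.0,                    # length (normalized)
        float(sum(w.endswith("?") for w in words)),  # hedging/question marks
        float("because" in words),            # offers a reason (crude causality proxy)
    ]

def score(weights: list[float], response: str) -> float:
    """Scalar reward: dot product of weights and features."""
    return sum(w * f for w, f in zip(weights, features(response)))

def train_reward_model(preferences, epochs=200, lr=0.5):
    """Fit weights so preferred responses outscore rejected ones (logistic loss)."""
    weights = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for good, bad in preferences:
            margin = score(weights, good) - score(weights, bad)
            # Gradient scale: near 1 when the model disagrees with the human
            # label, near 0 once it already agrees.
            grad_scale = 1.0 / (1.0 + math.exp(margin))
            for i, (fg, fb) in enumerate(zip(features(good), features(bad))):
                weights[i] += lr * grad_scale * (fg - fb)
    return weights

# Hypothetical annotator judgments: (preferred response, rejected response).
prefs = [
    ("The bridge failed because the load exceeded its design limit.",
     "Bridges sometimes fail? Hard to say?"),
    ("Crops died because the frost came early.",
     "Crops died."),
]

w = train_reward_model(prefs)
candidates = [
    "It broke? Maybe? Who knows?",
    "It broke because the seal degraded.",
]
# The fitted reward model now ranks new responses: the "constraint" Kenn names.
print(max(candidates, key=lambda r: score(w, r)))
```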
David: A chapter toward the end of “Framers” advocates “frame pluralism”: maintaining frames from a wide variety of cultures and perspectives to avoid frame monocultures that stifle innovation, creativity and even freedom. “Frame pluralism’s very goal is for frames to compete, complement, contradict and coexist with one another,” you write (this time four Cs) on p. 186. How can we gain the benefit of such “frame pluralism” societally and within our own mental models while also adhering to empirical verities and faith beliefs, things that we personally hold to be objectively true?
Kenneth: What a question! The first part of the answer is: it’s not easy. Different people will have different frames, some of which we may find repugnant, but we need to respect that they are entitled to represent the world as they judge fit, just as we are. It’s the classic “19th-century liberal” doctrine of intellectual freedom. The second dimension is that all frames, even bad ones, should be tolerated save for one: a frame that denies other frames. That erodes freedom and is unacceptable, as Karl Popper warned in “The Open Society and Its Enemies”. Perhaps the best way to think about it is to invoke the classic phrase from Oliver Cromwell in 1650, in a letter to the General Assembly of the Church of Scotland: “I beseech you, in the bowels of Christ, think it possible you may be mistaken.” The phrase is a pillar of Quakerism: a reminder that no one person has the total answer but that together, with mutual respect and good faith, we might find answers.
David: You joined AI&F initially in our Member category, and then we were pleased to discover that you are in fact very much a technology expert and willing to accept our invitation to be an Advisor. What is it about the work and mission of AI&F that drew your attention and caused you to join us, Kenn?
Kenneth: It was an honor. No other organization is dedicated to bringing together a diverse set of people from different backgrounds to think about what will be one of the most critical subjects of our time. From what it means to lead a good life, to whether machines can be conscious, to whether the ambition to create new forms of intelligence amounts to creating life, and whether endowing it with a simulacrum of rights is idolatrous: these are crucial questions for all of society. There’s a major role for faith groups to have a voice in the public sphere to shape the evolution of the technology. It is clear that AI represents a landmark in human civilization, just as it is clear that the technology will bump up against religion: the timeless experience of people of all places and all times of apprehending the world through a spiritual sense that they are a soul amid a divinity. That interaction between what is literally made by man in our image, AI, and religion that transcends us will be a defining aspect of our era, and AI&F is a vital forum where the conversation is taking place.