Expanding Our “Social Imagination”
Brian Brock is AI and Faith’s Advisor on ethics and disability. As the author of the pathbreaking Christian Ethics in a Technological Age and holder of an endowed chair in Moral and Practical Theology at the University of Aberdeen’s School of Divinity, History, Philosophy, and Art History, Brian has thought a lot about AI’s potential to change the human. He has no objection to it on ethical grounds so long as we focus on the right goals.
That’s the problem.
“One of my deep theological questions about AI is whether the people who are developing it can even conceive of an expansion of our social imagination,” he said in a recent conversation. Rev. Harris Riordan of the Unitarian Universalist Fellowship of Boca Raton led the questioning.
“Without it, I think we’re bound to remake the human in ways that are violent, especially against those who are least able to protect themselves,” he said.
Faith Claims in Fuller View
“These teams were defending this kind of experimentation with a theology I didn’t recognize,” he said. “I realized I was going to have to study more theology to respond intelligently.”
By 2003, he had completed his doctorate in Christian ethics at King’s College London.
“People say AI is coming and we need to get ready. I think that claim is as speculative as the claim that the Kingdom of Heaven is coming, so I try to bring into fuller view the faith claims that are woven together with political or technocratic suggestions.”
Changing the Human Toward What End?
Brian distinguishes between mere metamorphosis, “a changing of form according to the world and the logic as we currently understand it,” and transformation, which creates the potential for a “peaceful reciprocity that is collective in its moral aspiration.”
Growing up on the ship channel in Houston next to the largest refinery in the world, “I knew what it meant to live across the fence from big business,” Brian said. “I think it’s dangerously naive to trust Big Tech, or Big Oil, to give us a healthy relationship to the natural world or the social order. That’s at the root of a lot of questions I raise.”
“The ‘AI-is-coming-no-matter-what’ story seeks to evade the interim period of the rise of this new world,” he said. “If you start in the shadow of the refinery, you can ask some pretty concrete questions about what we’re sacrificing already to get there.”
“Rather than letting our dreams live in an outcome in which we can command our entire material world by thinking it, we should ask practical questions that focus us on the present and the immediate future.”
“The really important question to keep in sight as we talk about remaking the human is: To what end?”
The Specter of Surveillance Capitalism
Brian finds particularly concerning the fact that AI development is being driven largely by surveillance capitalist economics, as described by Shoshana Zuboff in The Age of Surveillance Capitalism.
“If you want to understand what’s happening right now in the Internet-wired world, just think of it as a new oil boom with information as the new oil. Rockefeller and Zuckerberg are parallel phenomena 100 years apart. We have to ask how AI is serving as an avatar in taking political power away from governments and handing it over to business.
“A world in which you have no open political conversation but live in silos of conversation is very unstable and is almost impossible to govern. You can’t move groups of people without covertly manipulating them. A world in which people are moved by nudges and invisible incentives rather than explicit political speech is ripe for all kinds of power abuses.”
Forced Enhancement
Amazon’s punitive approach to managing warehouse workers who fail to meet efficiency standards points toward an on-demand economy that makes the forced deployment of neural implants “almost inevitable,” Brian said.
“If the company says you need to have an implant that scans bar codes through your eyes, workers will have no way to resist it. You’d have to be constantly upgrading as with any technology. That’s not very far away from some of the dystopian movies we see in which the people at the bottom of the social food chain are forced into enhancement.”
“If Exxon could have done it 50 years ago, they would have. What we shouldn’t hope for is that business magnates will suddenly become genuinely interested in improving the lives of their workers.”
‘PR Stunts’ for the Disabled
Brian’s third category of worry is the way Silicon Valley treats disabled people.
“If you look closely, you see they’re helping disabled people only to gain support for controversial technologies like neural implantation. They want the public to get past the sense that it’s sort of a grotesque thing to be drilling holes into people’s heads. They drill into disabled people’s heads because we think, ‘Oh, it’s so terrible to be disabled that they’re not going to object to having their heads drilled.’”
“The cash value of a given disability is directly related to how effectively it serves a PR stunt that shows Silicon Valley’s benevolence in making disabled people’s lives better.”
Newness as Our ‘Moral Norm’
“We can say, ‘I’d rather AI not come because I’m afraid of what comes.’ Or we can say, ‘I want AI to come because I want to live forever.’ These are theological questions.”
“I came to these and other questions in the medical context when I started asking about the ballooning costs of modern health care. We’re all aware that the gross cost of health care is starting to weigh down every developed economy, so I was asking, ‘What’s that about and what do we do about it?’
“We all assume that new technology is better and that we need to have it. The more I looked around, the more I realized that we don’t even have a way to parse newness as anything other than good. It is our moral norm.”
“Where this ultimately takes my theology is this: If we don’t want to commit ourselves to inhumanities, we need to be interrupted by God because the law of the world that we’ve built demands certain kinds of inhumanities.”
The ‘China Threat’
An authoritarian regime like China, investing billions in the development of AI and related technologies, is less likely than a democratic nation to value civil liberties in the deployment of neural implants. That may give it an advantage in developing the so-called “next generation of humans.” Brian acknowledges the argument.
“It’s the same argument that’s used in nuclear armament,” he said. “If China gets a bigger nuclear weapon than ours, are we committed to getting bigger nuclear weapons indefinitely, or is there a way to say, ‘We’re just not going to do that’?
“If you can’t hope in anything other than having the biggest military advantage, then it’s not clear to me why you have faith at all.
“If you worship power, power has to keep building itself up until it just collapses all at once because it won’t fail halfway. We might be seeing this with Putin right now. That’s just the dynamic of putting faith in power. It works for a while, but it doesn’t work indefinitely.”
Inviting People into Hope
To engage faith communities in this conversation, Brian suggests asking, “How can we help people avoid falling into defensive nostalgia?”
“That’s a big problem. Cultures that are under pressure, in which people are feeling threatened, very often seek to escape change because they fear losing what they have.
“It’s a very human thing to think that — when our concepts of the good life, the heroic life, and the virtuous life are threatened — the only way to preserve them is to put up higher barriers against those we take to be threatening that world.
“The theological question is: Why should we hope that something good will come to us through change? And how do we invite people into that hope?”
The Experience of ‘Opting Out’
Brian suggests that faith communities from time to time “step out of conformity with the laws of our time just to see what’s at stake when we conform to them.”
“It’s important to come into some tangible touch with what opting out means,” he said. “So let’s go away together for a week, leaving our smartphones at home. Then let’s reenter our normal lives for a week without our smartphones and see what goes wrong. That’s a diagnostic act.”
By refusing to carry a smartphone at all, Brian continually lives this “diagnostic act,” which he also calls an “experiment.”
“It’s getting very difficult for me to fly,” he said. “It’s possible, but especially with Covid, the system is now so tight you can’t get through Amsterdam or Paris without a smartphone.
“It’s a similar thing living with my eldest son, who’s 18 and has Down syndrome and is autistic. I travel through the world with him and I see what the world is not set up to let him do. That includes typing in passcodes, reciting PINs, and attending school online. Who’s being disenfranchised is a basic question we must continually ask of our technologies.”
On Becoming a Cyborg
Taking at face value Elon Musk’s forecast that we’ll have a “whole-brain interface with AI” by 2045, we ask all experts who join us on AI and the Human: Are you ready to become a cyborg?
Brian’s response surprised us.
“I already am one,” he said. “I take antibiotics. I use spell-check. I’d wear a prosthesis if I needed one. Do I think I need to be enhanced in my emotions or my mind? No, I don’t.
“Transhumanism is the desire to escape death, but certain kinds of inhumanity come with that. Uploading oneself into a computer is a way to resist facing our finitude. When we stop facing our finitude, we fall into unexplored and unexpressed desires to preserve the status quo.
“I would be willing to change my body but in doing so I would hope always to look for what’s traditionally called reconciliation. A world in which we change ourselves to give life to others looks quite different from one in which we change ourselves to escape death or to escape loss.
“I’m ultimately for a world in which we change ourselves not to accelerate or retrench the strong preying on the weak but to build the capacities of others. That’s more than metamorphosis. That’s transformation.”