Interview with Dr. Ted Peters: On AI vs. IA and Theology of the Future

Dr. Ted Peters (Ph.D., University of Chicago) teaches seminars in systematic theology and ethics in the Theology and Ethics Department at the Graduate Theological Union in Berkeley, California, and is a Founding Member of AI and Faith. Along with Robert John Russell, he co-edits the journal Theology and Science at the Center for Theology and the Natural Sciences.

Dr. Peters recently edited a new volume on artificial intelligence, AI and IA: Utopia or Extinction? (ATF 2018). Peters is about to publish a new co-edited book, Astrobiology: Science, Ethics, and Public Policy (Scrivener 2021). Along with two colleagues, Arvin Gouw and Brian Patrick Green, he is now editing a new anthology, Religious Transhumanism and Its Critics (Lexington 2021).

Visit his website: TedsTimelyTake.com.

 

Q: Technology goes hand in hand with change, progress, and newness. How did you, a Christian theologian, become interested in our technological future?

My brother, Rob, wore a Swiss watch with no face on it. He enjoyed watching all the gears go round. That’s the kind of family I grew up in near Detroit. My father was an automotive engineer with twenty-two patents. Our basement was an inventor’s paradise with tools galore. I learned to use a screwdriver but not much more.

On one occasion when we’d replaced the old couch with a new one, my mother patted the cushions and said, “I like new things.” By “new,” she implicitly referred to new things that had never before existed until recently invented.

My son, Paul, seems to have inherited my family’s genes. He’s now an engineer who takes delight in inventing new electronic gadgets that do marvelous things. Just conceiving of the next invention elicits excitement.

Perhaps this ambient family context led me to give special attention to the prophecy in Isaiah 65:17: “For I am about to create new heavens and a new earth; the former things shall not be remembered or come to mind.”

I have become a theologian of the future, so to speak. I love to connect futurum, the evolving and progressing human future, with adventus, God’s surprising future. My comprehensive systematic theology is even called, God—The World’s Future.

 

Q:  You’ve been writing and speaking about eco-ethics for over 40 years and bioethics for 30 years; and more recently you have engaged subjects that cross into data ethics, such as cyber enhancement and transhumanism. How in your mind do these various topics connect?

When I departed graduate school in the 1970s, I attempted to connect what I envisioned as God’s future with secular futurism, sometimes called futurology or ecology. I saw the theological and the secular sectors converging in a very healthy way.

Then called “futurists” and now called “ecologists,” respected secular pioneers were attempting to understand present trends, distinguish alternative possibilities and probabilities, and then decide on the preferred future. I labeled this the “u-d-c” formula: understanding, decision, and control. First applied to technology, the u-d-c formula became the internal structure of futuristic thinking at large.

The scientific community had already begun in the late 1960s to issue prophecies: unless we people of Earth repented of the habits that are destroying the fecundity of Planet Earth and embraced life-giving ethical policies, we Homo sapiens would sever our own roots and die off. Just as God had raised up the prophet Amos from the obscure town of Tekoa, God was raising up scientists—especially computer scientists working with environmental scientists—to shout warnings to the world. I wanted to shout with them. So, I did.

But, alas, no one other than President Jimmy Carter and then Senator Ted Kennedy listened. With the election of President Ronald Reagan in 1980, America became deaf to the prophets. And today our society continues its unbridled orgy of production and pollution.

In 1990 I became Principal Investigator on an NIH grant to study the “Theological and Ethical Implications of the Human Genome Initiative.” I put together a marvelous team of genetic scientists, theologians, and ethicists. As for my own work, I applied the method to genetics that I had previously employed for ecology. This meant that as a theologian I could proffer ethical ideals and middle axioms to the secular public regarding the preferred future that genetic research should take.

I was on the ground floor when the first human embryonic stem cells (hESC) were isolated in 1998. So again, along with my distinguished colleagues in bioethics, I applied the method of analysis, synthesis, and recommendation I had developed previously. In principle, I could apply what I’d learned from the ecology controversy to any progressive bio-technology. This readied me for what was coming next, namely, nano-technology, neuroscience, and Transhumanism.

 

Q: You’ve just written an entertaining and thoughtful short essay about Elon Musk’s proposed Neuralink deep brain implant. How do you weigh the benefits and risks from such a device?

Elon Musk is my kinda guy. His imagination won’t quit. He can imagine alternative futures and then specify which are the possible and which are preferable futures. He may overestimate his ability to control the future, but I credit him for giving it a go.

First, we need to distinguish AI (Artificial Intelligence) from IA (Intelligence Amplification). Musk and his techno-genius friends have issued a prophet-like warning against AI and Transhumanism: do not develop a machine so intelligent that it will take over ruling our world and perhaps render our species, Homo sapiens, extinct.

That’s AI. Second, when it comes to IA, that’s a horse of another color. Musk is attempting to perfect IA through deep brain implants for medical therapy. I think everyone should agree: this is a great idea! I hope he and the medical community succeed. The bioethical principle applicable here is beneficence, namely, if we have an opportunity to enhance human health and wellbeing, then we are morally obligated to pursue it.

The theoretical problem with IA lies elsewhere. It lies in the transhumanist assumption that human intelligence correlates with the sort of data access we see in the computer. The human mind is an information pattern, transhumanists assume. Therefore, on the basis of this misleading assumption, IA consists of increased data access through deep brain implants and such. If we were to provide the brain with internal access to Wikipedia inscribed on that implant, would this make the individual more intelligent? I don’t think so.

Is there any moral problem in providing Wikipedia to the human brain? None that I can see. I simply suggest that such data access would not affect human intelligence, because human intelligence is tied inextricably to human selfhood. No increase in access to information would affect the human self, a point missed by transhumanist theorists.

Finally, I celebrate technological innovation. I want to shout from the rooftops when a breakthrough occurs, especially a medical breakthrough. My only caution is this: technological progress remains futurum. We must avoid overestimating technology. We must avoid expecting too much. No amount of technological progress can improve the human condition in such a way that we reduce sin, evil, suffering, and destruction. Voluntary deafness to the ecological prophets demonstrates this. Only God’s future, adventus, can heal human sin.
