I read Dr. Eric Stoddart’s piece with abundant gratitude for the ways in which he brought a serious theological analysis to the morality of our increasingly surveilled world. I was particularly struck by two of his assertions regarding the way in which religious principles speak to the challenges of AI and its potential threats to human dignity. And though he comes at his analysis solely from a Christian perspective, it encompasses many of the same values found in Judaism and the Abrahamic faiths more broadly. Specifically, Stoddart’s focus on the notions of the common good and on the implications of God’s gaze for determinations of justice provides important insight as we look to align the moral implications of AI with its material evolution.
The Golden Rule, as a pithier reflection of the common good, has its foundations in the Jewish biblical injunction to “…love your neighbor as yourself” (Leviticus 19). In Jewish tradition this is considered a distillation of the entire Torah and its interpretation, reflecting the human capacity for both imitatio dei (imitation of God) and the attainment of the elusive “kedusha,” or holiness. It is a profound affirmation of empathy and compassion that far transcends the mere colloquialism of “walking a mile in another’s shoes.” And though the liberation theology with which Stoddart imbues his observations is specific to Christianity, this kind of Jewish sacred identification with others encompasses every station of human experience; the Torah’s repeated mandate to care for the most vulnerable in society fits well with his larger notion that power through surveillance must be constrained by conscience to protect the powerless.
Stoddart’s assertion that surveillance should be aligned with the notion of God’s gaze brings to mind a rabbinic teaching that confronts another enduring challenge: theodicy, or, more popularly, why bad things happen to good people. The renowned Rabbi Akiva taught a famous paradox: all is foreseen, but free will is given. On one level, this seems to exonerate an omnipotent God from charges of negligent homicide, if not its much later and larger incarnation in genocide, with disturbing implications for God’s passive witness. In essence, for whatever reason (and countless have been proffered), God does not intervene in human choices that often result in hurt and harm, despite omniscience. The oft-cited reason: the blessings and benefits of free will for the human condition far outweigh the risks of its abuse.
But on a deeper level, Akiva’s paradox aligns well with Stoddart’s thoughts on the justice inherent in God’s gaze, and on the evocation of human response and agency as necessary and indispensable extensions of God’s will. We become God’s eyes and ears, hands and heart in projecting God’s vision and conscience upon a broken world, fulfilling the ideal, if not the insistent command, to actively repair that which will bring this vision to fuller realization. God’s foresight and AI are not merely neutral, without value or valence. The impact and consequence of both depend upon how they are used, with what intention, and to what end.
Finally, the great legalist and physician Moses Maimonides speculated in his philosophical treatise, the Guide for the Perplexed, about the essential meaning of anthropomorphic descriptions of God. In essence, they are figurative and allegorical, which may seem obvious to us moderns but was far from a settled axiom throughout Jewish history. God’s “seeing” or “gaze” is not that produced by a sensory organ. Rather, our language lacks the nuance and precision, if not the authentic experience, to describe God’s “visual” apprehension.
And so, God’s gaze as an analogy to the dual consequences of AI and surveillance (i.e., increased security vs. loss of privacy) is grounded in the larger aspirations of faith: power to protect obligates responsibility for equity, justice, and a balance bounded by the need, as Thoreau admonished in Walden, that we not become “the tools of our tools.”