
AI&F Well-Situated to Catalyze in a Deeply Disrupted Time

We are living in an extraordinary time of global disruption: a persistent pandemic, accelerating climate change, a new appreciation of the extent of deeply embedded racial injustice, growing authoritarianism and vocal populism, deep political polarization, and a new China/US cold war. All of these are deeply disquieting and are altering our behaviors, for good or ill. Through these disruptive developments runs a common thread of technology in its various forms, much of it powered by analytic tools employing artificial intelligence.

A good deal of this disruption was on the horizon two and a half years ago when we launched AI&F, but key developments were not. We could not have foreseen:

  • the speed with which COVID-19 has transformed markets and vastly increased the power and utility of online commerce and communications across society – from retail to education to health care to every other form of social engagement, even including faith practice.
  • how (notwithstanding the prescience of Kai-Fu Lee’s AI Superpowers) the confrontation of a recklessly ambitious and defensive Chinese premier with a recklessly provocative and self-interested US president would transform a trade war into a budding geopolitical Cold War.
  • how even as China’s lockdown of disfavored minorities has provided a window on the extraordinary reach of surveillance technology for political and social control, we have failed to perceive the quiet rollout of similar technology across American society – after all, it’s just business.
  • how U.S. politicians’ loving embrace of Silicon Valley campaign contributions has transformed into a love/hate relationship, pitting the deep continuing monetary influence of Big Tech on American financial markets and politics against the very real deterioration of political discourse on social media and related technologies.

One constant holds in the midst of all this change: neither government nor Big Tech carries much public trust – government due to decades of eroding public confidence in its capability and authority, and Big Tech due to the inherent flaws in the basic deal it made with its customers for unfettered use of their data without thoughtful creation of rights-based boundaries. By and large, Big Tech remains in the vaporware stage of ethics, promising change while lacking a basic values platform on which to bring it about beyond a purely utilitarian, problem-avoidance perspective. Thus, we face not only a massive disruption but also a crisis of confidence in the parties – business and government – best positioned by their possession of the data and regulatory authority to mediate that crisis.

But some exceptions apply, and it is in the midst of these exceptions that AI and Faith has its greatest opportunities.

  • Microsoft has emerged as a leader in applying an ethical review systematically to the rollout of various products, even as it works alongside Amazon and other Big Tech companies to protect its prerogatives to conduct this analysis privately and not through government regulation.
  • Faith organizations like the Vatican, the Optic Network in Europe, the Ethics and Religious Liberty Commission (ERLC) of the Southern Baptist Convention, and BioLogos, as well as colleges and universities with a religious founding such as Santa Clara University, Seattle University, and Notre Dame, are specifically targeting ethics around artificial intelligence as a key influence on the future of human flourishing.
  • Some of these faith institutions and Big Tech companies are beginning to join forces, as evidenced in The Rome Call for AI Ethics discussed elsewhere in this Newsletter.
  • Faith-oriented tech workers are banding together in Faith Employee Resource Groups (ERGs), as tracked by the Religious Freedom and Business Foundation’s REDI Index. In many cases, aided by the Tannenbaum Foundation, a Jewish non-profit focused on peacemaking and mediation, their employers have actively facilitated such growth as an increasingly important part of companies’ “diversity and inclusion” efforts. Most of these groups center on a single faith tradition, but some, like Faithforce, Salesforce’s integrated faith ERG, are interfaith in orientation. Currently it does not appear that any of these ERGs include a focus on ethics, but their existence provides an opportunity for AI&F to interject specific ethical issues as “applied faith” questions.
  • Conferences like the PassionTalks 2020 program described in Events in this Newsletter, and Q Ideas in Nashville each spring have provided a means for faith-oriented technology workers to meet and engage outside their employee-based networks, as do large church congregations in tech centers like Silicon Valley, Seattle, and Austin. The nascent digital curriculum described in the News section of this issue is one means by which faith congregations can make this connection.
  • Networks of faith-oriented experts in AI are emerging in other parts of the world, such as the UK-based Homo Responsibilis Initiative referenced by Patricia Shaw in her interview in this Newsletter.

 

Thanks to AI&F’s persistent organizing and networking efforts over the past 30 months, we are ideally positioned to work in all of these developing connections.  Here are a few:

  • Our numerous Founding Experts currently or formerly employed at Microsoft, along with outsiders like Nathan Colaner, Mark Chinen and Michael Quinn of Seattle University’s Initiative for Ethics and Transformative Technologies, whose missions center on analyzing and engaging Microsoft’s ethics, give us a great window on Microsoft’s industry-leading work.
  • Nathan, Mark and Mike also provide a strong connection point with Founding Experts Brian Green and Ann Mongoven at Santa Clara and Don Howard at Notre Dame, and with the key work centered at the Vatican. We are introducing into that Catholic mix Founding Expert Jason Thacker, Director of Technology Ethics at the Ethics and Religious Liberty Commission of the Southern Baptist Convention.
  • We are bridging the young data ethics discussion with the well-established bioethics world where faith organizations have a deep and longstanding presence. A half dozen of our Founding Experts are deeply engaged in the bioethics world and well-positioned to facilitate that much needed bridge.
  • Among faith ERGs, our new think tank and our cross-faith expert community can play a role for tech corporations similar to that of the Tannenbaum Foundation in linking diversity and inclusion to such important faith and corporate purposes as ethical algorithms. Our cross-faith approach also gives us a unique opportunity to tie into and make introductions across international Islamic, Jewish, and eastern religion-based organizations in other parts of the world. Our network is unique in crossing over with equal sophistication between AI technologists and the humanities disciplines of theology, philosophy and ethics.

Even as these exciting connections are being made, the missing elements have been staff time and energy to explore and facilitate all of these connections, and think tank management experience to tighten our previously more diffuse focus into a genuine, rigorous analytical machine. That’s why the announcement in this Newsletter issue of Gretchen Huizinga and Dan Rasmus signing on formally as our first Executive Director and Director of Research holds such promise for AI and Faith’s sustainability and growth. It would be hard to imagine more qualified candidates for these roles than Gretchen and Dan. They have not only been galvanizing our work at the Board level, but both in their own ways have been elevating the spiritual dimension of our Founding Experts’ work in new and exciting ways, even as AI and Faith remains itself a secular organization.

Expect to see further details soon of our continuing reorganization and the priorities growing out of it. These include development of processes and organization of our think tank and a budget and funding plan to support it, driving forward our demonstration studies, and deepening connections and commitment with our Founding Expert community.

Our mission is a worthy and necessary one. Our situation as people of faith seeking to influence the global powers deploying AI parallels that of western intellectuals during World War II who sought to bring Christian Humanist principles to bear on the question of how to organize the post-war world in a way that would not be dominated by the awesome technological power the Allies necessarily had organized to defeat fascism. In our case, it is not only Christian principles but those of the other faiths represented in our expert community. But the question is the same: can faith values help shape and bound the tremendously powerful forces of technology created in our time?

I take inspiration for the success of our effort from the conversation of five such WWII intellectuals woven together by Baylor Professor Alan Jacobs in his most recent work, The Year of Our Lord 1943: Christian Humanism in an Age of Crisis. Three are brand names: C.S. Lewis, W.H. Auden and T.S. Eliot. Two still have cult followings: Simone Weil, a French intellectual and activist who prematurely wrote herself to death in 1943, and Jacques Maritain, a French Catholic philosopher and writer. Professor Jacobs pulls together key threads of their earnest inquiry just at the point when the Allies’ thoughts were turning to how they would navigate post-World War II society.

Jacobs writes: “The war raised for each of the thinkers . . . a pressing set of questions about the relationship between Christianity and the Western democratic social order, and especially about whether Christianity was uniquely suited to the moral underpinning of the order. These questions led in turn to others: How might an increasingly secularized and religiously indifferent populace be educated and formed in Christian beliefs and practices?” Substitute “faith principles” for “Christianity/Christian beliefs” and “technologically driven society” for “Western democratic social order” and I think you have the pressing question for AI&F in our own time of disruptive crisis.

Professor Jacobs concluded that even for intellectuals of this stature, the secular tide had turned too far for their version of Christian humanism to head off the excesses of the post-war military-industrial complex and create a peaceful world based on something other than raw power. But the question they posed was taken up by younger thinkers like Jacques Ellul (The Technological Society) and his eventual disciple Neil Postman, whose 1985 book Amusing Ourselves to Death: Public Discourse in the Age of Show Business gave rise to our current vigorous debate over the loss of focus and attention occasioned by technology, especially now in the age of social media.

Our hope and belief at AI&F is that it is not too late in our time for faithful intellectual giants, guided by cutting-edge AI technologists who deeply understand the risks and opportunities posed by AI on the ground, to shape a better world. We anticipate our think tank connecting such intellectuals and experts, channeling funds to the creation of outstanding analysis that will be practically useful to the creators of AI technology, and injecting that work effectively into the rapidly expanding AI ethics discussion, toward our common goal of human flourishing.
