
BOLO Everyone! What AI Issues Should We Look Out for in 2022?

We asked three AI&F Research Fellows and an Advisor what is especially on their radar as key risk issues or opportunities in AI for 2022. Here are their answers, each offered in a distinctive way we really enjoy and appreciate!

Jason Thacker

Jason is an Advisor to AI&F and serves as chair of research in technology ethics and director of the research institute at The Ethics and Religious Liberty Commission. His work has been featured in Christianity Today, The Week, Slate, Politico, The Gospel Coalition, and Desiring God. Jason is the author of several books, including his latest, Following Jesus in a Digital Age, published by B&H Publishing.

Misinformation/Fake News

One of the most subtle and deleterious effects of technology today is on how our society perceives truth: the information overload we face each day is causing all of us to lose our grip on reality. This isn’t an isolated occurrence but has become a cultural practice across political, social, and even religious lines. While this debate is endlessly complex, one of the most countercultural things we can do in the midst of information overload is to simply say, “I don’t know.” Conversations about these problems will only grow in the coming year as our society awakens to the fact that misinformation and fake news have real-world consequences.


Digital Surveillance and Data Privacy

For Christians, a right to privacy is not derived from the moral autonomy of the individual, as in many non-Christian ethical theories, but from the dignity of all people. One function of privacy in this world is to care for the vulnerable among us and to uphold their dignity as image-bearers in a technologically rich society. As we see each day, however, data and information can and will be used, abused, and manipulated toward selfish ends because of the prevailing nature of sin in the world. In 2022, we may see more movement from local, state, and federal governments to address these important issues of data collection, personal privacy, and the use of this information by private and public actors alike.


Digital Authoritarianism

One of the clearest examples of digital authoritarianism is the continued genocide of the Uyghur people in China under the repressive Chinese Communist Party (CCP). Technology is one of the most powerful tools the CCP has in its arsenal to control and manipulate others. But this heavy hand of authoritarianism isn’t limited to the CCP. Nations around the world have shown that they will use any means necessary to limit access to information, suppress free expression, and cut people off from the outside world altogether. In recent years, we have seen this take place in Iran, Russia, Belarus, and most recently Cuba. As we move into 2022, it is clear that digital authoritarianism is becoming commonplace around the world and will only continue to rise as these technologies become more accurate and more accessible to those bent on suppressing human rights and religious freedom in order to maintain their position of power over others.


David Zvi Kalman

Dr. David Zvi Kalman is a Research Fellow with AI&F and Scholar in Residence and Director of New Media at Shalom Hartman Institute of North America, where he was also a member of the inaugural cohort of North American David Hartman Center Fellows. David Zvi leads the Kogod Research Center’s research seminar on Judaism and the Natural World.

AI is currently developing at breakneck speed, and use cases are increasing by the day. While I do not know if there will be any major technological breakthroughs for the field this year, refinement of existing AI tools is itself enough to be quite worrying.


Ubiquitous Availability of Machine Learning Algorithms

One thing I am particularly concerned about is the increasing availability of powerful machine learning algorithms to anyone, anywhere, without any need for technical knowledge. Case in point: face-swapping tools allow a person to “insert” someone into a pornographic scene without their consent. When such tools first became available in 2018 there was public outrage, which led some of the developers to pull their products from the market and Reddit to shut down a forum in which such images were being shared. Unfortunately, this was not the end of things. Since then, similar software has been developed and made available to the public, and this technology, whose primary use case is likely domestic abuse, is now widely available. The rise of user-friendly AI tools threatens to make AI a problem not just at the state level, but at the level of personal relationships.


Better Voicing What’s at Risk with AI Technologies

It is by now quite clear that AI’s development is outpacing American society’s ability to regulate it. There is an urgent need for better policies, but it is hard for the public to speak out about technologies they do not fully understand and about which they have not formed positions. My hope for 2022 is that public figures, including religious leaders, will learn to be more proactive in shaping public opinion on AI policy and to better articulate why careful use of this set of technologies is so important.


Shanen Boettcher

AI&F Research Fellow Shanen Boettcher recently completed and submitted his PhD thesis at the University of St. Andrews in Scotland. His graduate study follows a 25-year career in technology, concluding as General Manager of Product Management at Microsoft. Shanen’s PhD research examines the role that artificial intelligence technology plays in the relationship between spiritual/religious information and spiritual/religious knowledge among people living in the Pacific Northwest, and it led off the July 18, 2021 New York Times article about faith perspectives on AI that featured a number of our experts.


Using Faith as a Platform for Misinformation

Faith is being used as a tool to build trust, gain followership and spread misinformation. What are faith organizations doing to combat this? What are technology companies doing?

https://www.technologyreview.com/2021/09/16/1035851/facebook-troll-farms-report-us-2020-election/


Faith in the Metaverse

How will faith be represented in the metaverse and who is deciding?

https://podcasts.apple.com/us/podcast/cnlp-470-d-j-soto-and-nona-jones-an-introduction/id912753163?i=1000548189980


When VC Comes to Faith Apps

There has been a recent surge of venture investment in religious tech. How do Silicon Valley business models align with religious practice and mission?

https://www.wsj.com/articles/religion-apps-attract-wave-of-venture-investment-11640088001


Catherine Ballantyne

AI&F Research Fellow Catherine Ballantyne is an AI researcher, technologist, engineer, device physicist, and designer, and a decades-long participant in The Church of Jesus Christ of Latter-day Saints. Catherine has been a participant-observer in think-tank outreach and opinion-shaping programs at Stanford’s Hoover Institution and Harvard’s Berkman Klein Center, and she is presently engaged in a cybersecurity master’s program at Stanford.

Policymakers, tech companies, and researchers are all grappling with how best to address the cold, hard truth that every algorithm, whether it is dictating the contents of a social media feed or deciding whether someone can secure a loan, has real-world impacts and holds the potential to harm as much as to help.

What should we keep an eye on in 2022? Drawing on conversations with seven others who live and breathe AI every day, here are my top five picks:

More Up-Front Systemic Thinking

Expect more up-front systemic thinking at the gate, with “build it and they will come” replaced by “what if we build it this way?” Ironically, expect AI platforms to become more human-centered. AI builders are moving deeper consideration of the impacted human user earlier in the design process; sooner and more frequently, creators are beginning with the end in mind. In theory, considering human impacts at the gate would include due attention to trust, explainability, and fairness, and assembling systems with visibility all the way through the informational supply chain, from AI output back through data origins, examining data collection, collection-with-consent, privacy, and tools that enable “seeing” how models are trained.
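To make one piece of this concrete, here is a minimal sketch, in Python, of what a provenance trail through the informational supply chain might look like. Everything in it (the DatasetProvenance and ModelRecord names and their fields) is an illustrative assumption, not a standard; real efforts in this direction, such as “datasheets for datasets” and model cards, are far richer.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetProvenance:
    """Where one training dataset came from and on what terms (hypothetical schema)."""
    source: str                   # e.g. a URL or an internal system name
    collected_with_consent: bool  # was consent obtained at collection time?
    contains_personal_data: bool  # flags the need for a privacy review
    license: str                  # terms under which the data may be used

@dataclass
class ModelRecord:
    """A trained model together with the provenance trail behind it."""
    name: str
    intended_use: str             # stated up front: beginning with the end in mind
    datasets: List[DatasetProvenance] = field(default_factory=list)

    def audit(self) -> List[str]:
        """Walk the supply chain and return warnings a reviewer could act on."""
        warnings = []
        for ds in self.datasets:
            if not ds.collected_with_consent:
                warnings.append(f"{ds.source}: data collected without consent")
            if ds.contains_personal_data:
                warnings.append(f"{ds.source}: personal data present; privacy review needed")
        return warnings

# Hypothetical usage: the record travels with the model, so the question
# "was this trained on data collected with consent?" is answerable by inspection.
record = ModelRecord(
    name="feed-ranker",
    intended_use="ranking posts in a social media feed",
    datasets=[DatasetProvenance("click-logs-2021", False, True, "internal")],
)
for warning in record.audit():
    print(warning)
```

The design point is simply that the provenance record is attached to the model itself, so accountability questions can be answered by inspection rather than by after-the-fact forensics.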

More User Choice

Increased user choice: specifically, offering individuals greater agency, control, understanding, and transparency over the computational systems that govern their experience. This is a complicated problem with links to transparency in algorithmic design. Of all the things to watch in 2022, this will have the greatest impact on the existing power dynamic between platforms and the public.

Increased Resourcing Pains

The task of operationalizing ethical and regulatory requirements demands rare interdisciplinary skill sets at the intersection of coding and the social sciences, and those skill sets are increasingly difficult to recruit. And it’s not just the human resourcing: increasing algorithmic complexity means higher build costs.

More Regulatory Policy

More regulatory policy designed to tame AI’s wildest imperfections. Expect to see increasing willingness across all sectors to engage in crafting the kind of policy and regulatory solutions demanded of ethical technologists.

Greater Focus on Standardization and Accountability

Expect more frequent, and perhaps more heated, conversations about audit standardization and accountability, with an eye toward challenging what we are being told by the entities building AI-encoded platforms. Related to this will be a sharper focus on how to resolve algorithmic auditing incongruencies. Both are ripe policy areas, with close-to-ready regulations on the radar in Europe, North America, India, Vietnam, and China.

Selected Sources from Catherine:

https://pub.towardsai.net/this-microsoft-neural-network-can-create-poetry-from-images-2c47bd2a35d1

https://algorithmwatch.org/en/

“The Rise and Fall of Great Technologies and Powers,” talk by Jeffrey Ding, 1/19/22, Stanford Institute for Human-Centered Artificial Intelligence. To subscribe to these events: https://hai.stanford.edu/subscribe-hai-mailing-list

Webinar: “Strengthening the Technical Foundations of U.S. Security,” 1/20/22. A joint seminar with Georgetown’s Center for Security and Emerging Technology (CSET), featuring Stanford HAI Director of Policy Russell Wald, CSET Senior Fellow Andrew Lohn, and Stanford HAI Postdoctoral Fellow Jeff Ding, on how a National Research Cloud will impact U.S. national security.
