We humans are obsessed with creating AI with human capabilities. As contemporary AI development focuses heavily on building solutions with intelligence and autonomy, we can point to many use cases that are steadily replicating human-level performance. Routine tasks are being automated (robotic process automation – arguably the easiest area of AI to adopt, with low/no-code solutions offering immediate cost and time savings), machines can synthesize speech (resembling a single human sense), and creative tools can convert text to images and videos (multi-modal, combining multiple human senses). The impacts of such solutions stretch across wide horizontals, so the social value is compelling. It is easy to see why people find this a worthwhile pursuit.
But beyond these functions, can AI help us reach higher levels of aspiration? Is performing at the human level enough? Amazon certainly hoped so when its hiring teams used algorithms to screen job applicants. Betterment hoped so too, aiming to supplant wealth managers with its robo-advisors. But each case had downsides. Amazon’s algorithm discriminated against applicants precisely because it accurately replicated years of human hiring behavior. Betterment’s risk-weighted statistical precision neglected the trust that humans need when making investment decisions.
The issue is that we humans are fallible, and so we transfer our flaws into the systems we build. So instead of trying to build systems that replicate human behavior, we should design systems that help us make more thoughtful decisions about what is best for humanity. Put simply, AI at its best would help us be more human. We spend so much time fretting about AI becoming self-aware, perhaps activating like Skynet; but what if AI helped humans become more self-aware? In my news consumption, what if algorithms could show me where my attention is concentrating, and mix in more diverse content for me to consider? In national defense, what if AI offered additional perspectives on short-term wins that might create long-term wars, as J. Robert Oppenheimer feared? In food selection at the grocery store, could an algorithm show how the profit from my dollar is distributed across the value chain, from CEO to farmer to shareholders of various income classes? Each of these scenarios goes beyond what today’s responsible AI frameworks contemplate.
Let us be clear – I am not saying we should let machines make all these decisions for us. Humans rightly act on biases. You and I groom them daily – the music preferences we believe we like, the personal budgets we shape, the public leaders we follow. This grooming often occurs in marketplaces innately vying for our attention. Human freedom gives us choice. But when an algorithm makes that struggle automatic, we take issue. Preferences become mindless. We get lost in an echo chamber custom-tailored to our blind spots. We exclaim that we are being duped in a social dilemma. When AI acts exactly as we have, perspectives become entrenched, creating division and pain. It raises the paradox of hell being a place where you get everything you want. Instead, humans should partner with these systems to make more informed decisions on matters of social value.
This developing partnership between humans and AI is both frightening and exhilarating. Designed and guided well, these human-machine interactions can create more equitable forums for participation, give more consideration to overlooked groups and the unique value they bring to the collective, unlock more informed decision making at local levels, and bring more dignity into the workplace.1 AI that makes us more human, with stronger aspirations, is possible.
Action Steps For You
We will dive into more detail on some of these points in future AI & Faith writings. For now, what problems of social value are you working on in your communities, family, and personal life? I invite you to unload into any generative AI site (ChatGPT, Bard, Bing Chat, etc.) a top-of-heart question from the relationships or social concerns nearest to you, and press the AI in your dialogue to help you imagine new possibilities. In these interactions, AI can help you go beyond past human performance and embrace a greater human aspiration.
This article is the first in a series on the partnership of humans with AI for the sake of helping us achieve higher levels of aspiration. In later articles I will address some important questions that arise in this pursuit, for example: ‘Who is responsible for putting aspiration into an algorithm?’ and ‘Who is responsible for overriding a model that successfully replicates past human performance?’ I will also consider the many domains this partnership could positively affect, ranging from healthcare and education to consumer choices and relationships.