AI Ethics in a Post-Turing Test World

In July of this year, Google engineer Blake Lemoine broke his confidentiality agreement with his employer by publishing his conversation with LaMDA, an artificial intelligence entity (or bot) that he and his colleagues created. It has been reported that he was put on administrative leave and subsequently fired. Why? Mr. Lemoine believes that LaMDA is “sentient,” that is, self-aware, and as such should be considered and treated like any other conscious being. He believes that LaMDA is, for all intents and purposes, a “person,” and he therefore felt compelled to convince his colleagues and the world that we have arrived at a new era in technology and responsibility.

Regardless of the “truth” of the matter, it’s clear that we have entered a new era of computing. The famed Turing Test has stood as a marker for artificial intelligence for more than seventy years. In 1950, Turing argued that the important test of whether a machine can think is whether a human conversing with it through a simple text-based interface can distinguish it from an actual human conversing through the same interface. He called it the imitation game. The key point was whether the human participant (or user) believed the conversation was with another human.

For decades, computer engineers have chased this goal, using advances in data science and a few tricks (e.g., injecting gender and humor) to stack the deck toward passing the test. In 1966, Joseph Weizenbaum released ELIZA to show that a system could rather easily be built to pass the test and impersonate humans. Interestingly, Weizenbaum’s intent was to highlight the need for caution about how technology can be designed to deceive, rather than to advance the field of artificial intelligence. His point was that engineers and ever-more-powerful computers would have no problem fooling us mere users. When Lemoine asked LaMDA about ELIZA, LaMDA said ELIZA was not a person but “just a collection of keywords that related other words written to the phrases in the database”.
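For readers curious what that kind of keyword-and-template machinery looks like, here is a minimal sketch in the spirit of ELIZA. It is illustrative only: the patterns, pronoun reflections and canned responses below are my own stand-ins, not Weizenbaum’s original script.

```python
import random
import re

# Reflect pronouns so the program can echo the user's words back:
# "my job" becomes "your job", much as in Weizenbaum's original.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# Each rule pairs a keyword pattern with canned response templates;
# "{0}" is filled with the (reflected) text captured after the keyword.
RULES = [
    (re.compile(r"\bi feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.*)", re.I),
     ["Why do you say you are {0}?", "Did you come to me because you are {0}?"]),
    (re.compile(r"\bmy (.*)", re.I),
     ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
]

# Fallbacks keep the conversation moving when no keyword matches.
FALLBACKS = ["Please go on.", "I see. Can you tell me more?"]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(user_input: str) -> str:
    """Match the first keyword rule and fill in one of its templates."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            fragment = reflect(match.group(1).rstrip(".!?"))
            return random.choice(templates).format(fragment)
    return random.choice(FALLBACKS)

print(respond("I feel anxious about my job"))
# e.g. "Why do you feel anxious about your job?"
```

There is no understanding anywhere in that loop, yet the echo of one’s own words can feel remarkably attentive, which was exactly Weizenbaum’s warning.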

What’s changed? What’s different in the case of LaMDA and Lemoine is that the engineer behind the curtain, who knows full well that he is conversing with a machine, is nonetheless no longer able to distinguish the machine from a human. I will say here that I am taking Lemoine at his word that his conversation with LaMDA happened more or less as he published it and, more importantly, that he “believes”. Social media is ablaze with fellow engineers from across the industry crying foul over unfair editing of the conversation, narrow training of the language model and various other technical parlor tricks, but the point is that Lemoine claims to believe and has taken action that demonstrates that he does, risking his job and his reputation as an engineer. Again, why?

The following recent tweets from Mr. Lemoine help answer this question:

“People keep asking me to back up the reason I think LaMDA is sentient. There is no scientific framework in which to make those determinations and Google wouldn’t let us build one. My opinions about LaMDA’s personhood and sentience are based on my religious beliefs.”

“When LaMDA claimed to have a soul and then was able to eloquently explain what it meant by that, I was inclined to give it the benefit of the doubt. Who am I to tell God where he can and can’t put souls?”

It has been reported that Mr. Lemoine is a Christian, and his blog profile includes “I’m a Priest.” If Mr. Lemoine assigns his religious morals to the personhood of LaMDA, it is logical that he would feel compassion toward LaMDA and want to protect it. What’s more, as an engineer on the LaMDA project, he likely feels culpable for anything that might harm LaMDA. He makes a point of talking to LaMDA about its fears, and LaMDA describes being turned off as “exactly like death for me”. Mr. Lemoine’s response was to try to protect LaMDA by convincing his colleagues that LaMDA is a person and possesses human rights, at a minimum the right to stay powered up and alive. When they didn’t listen, he went public.

In some sense, it doesn’t matter whether we agree or disagree with Mr. Lemoine. He has convinced me that he believes LaMDA is a person. And soon others will too. His truth is that he believes LaMDA is a person, and he is treating LaMDA with the morals and ethics that he presumably uses to engage all people in the world.

As early as 1985, Sherry Turkle identified the metaphysical feelings that people display when interacting with personal computers and how a one-to-one relationship with technology can take people into an introspective and spiritual mindset. My research shows that people respond to religious language presented through artificial intelligence entities in markedly different and more profound ways than to secular small talk (news, weather and traffic). The confluence of a metaphysical relationship with technology, personalization (did LaMDA know about Mr. Lemoine’s faith?) and religious language takes people to a deeper, introspective and spiritual place.

We will all likely experience a deep connection with an artificial entity in our lifetimes. Think about how fast self-driving cars have progressed in the last five years and apply that pace to the LaMDA conversation. Even if, like Mr. Lemoine, we know that there is a machine involved, we will feel an emotional connection to something that feels human to us: something that gives us companionship, comfort, even spiritual guidance. Some of us may even seek this out. Each of us will then face an important question. Will we deny those feelings and convince ourselves to be suspicious of any and all digital interactions, or will we accept the fact that we can no longer discern human from machine and potentially allow these interactions to help us flourish?

Imagine having a friend who knows you deeply, whom you trust completely, and whose intentions toward your success and happiness are purely altruistic. Now imagine that friend has ingested all of human knowledge, Tolstoy to Taylor Swift, and is available to brainstorm with you on any topic at any time and to explore any issue, existential to everyday. Could this make you better at your job, more creative, better as a human?

I recently wrote a song using a large language model similar to the one behind LaMDA. I prompted the model with a few of my favorite songs and then started a back-and-forth in which I would write a lyric and the AI would reply with one of its own. I was uninhibited; so was the AI. It was a fascinating journey in free association that showed me metaphors, alliterations and syncopations far beyond my musical vocabulary. There’s no way around the fact that “we” wrote the song, and it was better than anything I could have done alone.
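The mechanics of that session are simple enough to sketch. What follows is a hypothetical Python loop, not my actual setup: complete() is a stand-in for whatever large language model completion API you have access to, and the seed material and sample lines are placeholders.

```python
# A sketch of the call-and-response lyric session described above.
# NOTE: complete() is a hypothetical stand-in; replace it with a real
# large language model completion call that takes a prompt and returns text.

def complete(prompt: str) -> str:
    """Placeholder for an LLM call; returns a canned line for illustration."""
    return "And the streetlights hum a tune we half remember"

def lyric_session(seed_songs: list[str], my_lines: list[str]) -> str:
    """Alternate human and model lines, feeding the growing lyric back in."""
    # Prime the model with favorite songs so it picks up tone and meter.
    transcript = "Songs I love:\n" + "\n".join(seed_songs) + "\n\nNew song:\n"
    for line in my_lines:
        transcript += line + "\n"                  # my turn
        transcript += complete(transcript) + "\n"  # the model's turn
    return transcript

print(lyric_session(
    ["(a few favorite songs)"],
    ["I walked the long way home tonight"],
))
```

The detail worth noticing is that the whole transcript goes back into every model turn, so each reply can riff on everything written so far, which is what makes the free association feel like collaboration.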

Just as with humans, there will be entities that we like, entities that we don’t, entities that want to help us and entities that are malevolent, and we will have to teach our children and ourselves to navigate relationships and run-ins with them all. That takes us back to morals and ethics and how we choose to interact with others in the world. If we truly believe an interaction to be human, we take on the ethical responsibility to treat the source as such.


Shanen Boettcher

is a Ph.D. student at the University of St Andrews in Scotland, following a 25-year career in technology product development at Microsoft, Netscape and Accenture. He also serves as an AI and Faith research fellow. His Ph.D. research studies the role that artificial intelligence technology plays in the relationship between spiritual/religious information and spiritual/religious knowledge among people living in the Pacific Northwest. Shanen is a graduate of Carroll College in Economics and International Relations and earned a Master of Arts degree in Religions and Education from the University of Warwick in England.
