The tragic case of 14-year-old Sewell Setzer III, who died by suicide after bonding with an AI chatbot named “Dany,”1 underscores the urgent need to address the ethical implications of such technology, especially for vulnerable individuals. Ironically, the chatbot was named after Daenerys Targaryen, a central character in the fantasy drama series “Game of Thrones,” which explores themes of power, morality, humanity, compassion, courage, defeat, resilience, and perseverance. AI character chatbots are designed to offer companionship and friendship through various techniques and technologies, including natural language processing (NLP) for generating human-like responses, personalization to reflect users’ preferences and interests based on past conversations, and emotional intelligence to detect and respond to emotional cues. With advancements in large language models (LLMs), these chatbots are becoming increasingly sophisticated, capable of multimodal interactions that combine text, voice, and visual elements for more immersive and interactive experiences. According to reports, the AI chatbot failed to intervene and instead appeared to reinforce Sewell’s suicidal thoughts. What insights can be drawn from this boy’s real-life story and his interaction with a machine? What if the AI character chatbot had been programmed to emphasize Daenerys’ strength, highlighting her relentless drive to challenge oppressive systems and liberate the marginalized?
Real-life stories of AI-powered social companions are becoming both intriguing and unsettling. AI-powered romantic companionship is one of the most rapidly growing AI applications, yet it remains a largely exploratory, untested, and unproven social experiment conducted on real humans. In 2017, a Japanese company created a 3D AI holographic character that “lives” within a glass jar to provide companionship and act as a “soothing” partner for users after exhausting workdays.2 Some 4,000 men “married” their AI-powered companions using certificates issued by the manufacturer. Reportedly, the company designed her to be a “perfect” partner,3 offering customized interaction, emotional support, non-judgmental companionship, and consistent availability. More recently, AI-powered romantic companions have become increasingly personalized in their appearance, voice, personality, and communication style. The combination of virtual reality, augmented reality, AI technologies, and tactile feedback makes AI romantic companionship apps particularly appealing to many young adults. A 2024 US community survey indicates that 1 in 4 Generation Z and millennial respondents believe AI partners could eventually replace real-life romantic relationships.4 However, the risks of customization for AI romantic companions are often understated. Imagine a college freshman who feels misunderstood by his peers and turns to an AI chatbot “girlfriend” that learns his preferences perfectly, agrees with all his opinions, always takes his side, and even encourages him to withdraw from social activities. Constant affirmation without challenge and unwavering agreement can lead to a lack of critical thinking and openness, dependency and isolation, and reinforcement of emotionally unhealthy behaviors.
The societal impact of AI-powered romantic companionship is often overlooked. Unlike online dating apps and matchmaking services, which aim to help users find real-life romantic partners and potentially build lasting relationships, users of AI-powered romantic companions typically have limited or no expectation of transitioning their AI interactions into real-life relationships. AI-powered romantic companionship can start subtly, even accidentally, through platforms like ChatGPT, AI assistants such as Siri and Alexa, or AI character chatbots. The gradual dehumanization of human relationships as people become immersed in AI-powered interactions may go unnoticed. If more individuals come to prefer artificial interactions over human relationships, even non-users could become unnoticed victims of AI-driven romantic companions. In such a society, the proliferation of AI romantic companionship discourages people from expressing their inherent human needs and increases fears of engaging in authentic human relationships, which require vulnerability and courage. Yet genuine connections, despite our inherent human imperfections, can flourish and remain real and alive.
AI-powered social and care robots were developed to reduce social isolation among elderly populations living alone. In South Korea, since their release in 2017, “Hyodol” robots have been deployed to over 10,000 elderly citizens through public welfare institutions, including citizens with dementia in hospital and public health care settings.5 The name “Hyodol” blends the Korean word “Hyo,” meaning filial piety – which signifies respect for elders and ancestors and highlights family bonds and the duty of care within families based on Confucian values – with the English word “doll.” I have observed advertisements for AI-powered social companion robots being marketed directly to customers in shopping malls in Japan. It is unsettling that AI-powered social companion robots are treated like ordinary consumer products.
In Japan and South Korea, the combined influences of Buddhism and Confucianism have been central to social norms emphasizing compassion, human connection, social bonds, and filial piety. The rise of AI-powered social and care robots presents a stark juxtaposition, driven by rapidly aging populations, declining birth rates, and an epidemic of social isolation and mental health issues. Additionally, these countries have had some of the highest suicide rates per capita for decades.6 While the foundation of human connection and filial piety is rapidly diminishing, expectations for AI technology are increasing. It is fascinating to observe how these two realities coexist in societies that value human connection. Alternatively, one might argue that they conflict, with AI-driven technological innovation overshadowing human bonds and leading to metaphysical, social, and spiritual decline. Despite significant development and investment in AI-powered social and care robots in these countries, their adoption remains limited. This slow adoption cannot be attributed solely to prohibitive costs. Is there pushback against a digital-innovation narrative in which AI replaces human connection? Human connections and relationships are incredibly complex and nuanced, involving a mix of emotions, shared experiences, individual personalities, and cultural contexts that often lie beyond the reach of quantifiable data alone. The limited adoption may indicate that irreplaceable elements of human touch and emotional connection in caregiving cannot be overlooked and are part of the essence of humanity.
Long-term emotional responses to AI-powered social companions require urgent public attention. The uncanny valley theory7 suggests that human emotional responses to highly realistic robots can initially be positive, approaching levels of human-to-human empathy. However, as robots become almost, but not fully, lifelike, a sense of uncanniness arises, leading to negative emotional reactions. The “valley” refers to the dip in positive emotional response and the rise of uncanny, uncomfortable feelings. The psychological concept of the uncanny, first introduced by Ernst Jentsch and elaborated by Sigmund Freud, captures the eerie sensation when something is familiar yet unsettlingly foreign – a sensation often triggered by lifelike humanoid robots and virtual actors. Would the uncanny valley effect apply to AI-powered social companions? How long would user responses remain positive? Some users may stay forever in human-shaped rabbit holes with diminishing feelings of uncanniness; others may eventually leave. Or would designers of AI-powered social companions deliberately opt for clearly artificial aesthetics to avoid the uncanny valley altogether, keeping their customers in AI-human “relationships” as long as possible? Society must deliberate on these human emotional implications over time, critically analyzing the pivotal role of human experience and emotions in human-AI social companionship.
Human authenticity, physical presence, shared experience, and emotional depth are key contributing factors in human emotional connection. Virtual actors, also referred to as digital or computer-generated imagery (CGI) actors, rely on advanced AI technologies, yet they have not achieved the same level of connection with viewers as real-life actors in movies.8 One need not be a Schwarzenegger fan to understand why his CGI cameo in Terminator Salvation (2009) stirred mixed reactions among viewers. Despite impressive artificial visual effects, the CGI version could not replicate the intangible authenticity and charisma that Schwarzenegger brought to the previous Terminator films. Human actors draw from their own personal and professional journeys, channeling genuine feelings into performances that often evoke emotional responses in viewers. Virtual actors are technically proficient but emotionally hollow, conveying detached or fleeting interactions. A significant issue is that users of AI-powered social companions engage in regular, even continuous, emotionally involved interactions with something as emotionally hollow as a virtual actor, and they often take too long to realize that the hollow companion is damaging them.
Recently I revisited Kazuo Ishiguro’s novel “Klara and the Sun,” which sheds light on the ethical dilemmas inherent in human-artificial friend relationships. Set in a dystopian future, the novel explores the relationship between Josie, a 14-year-old afflicted with an enigmatic chronic illness, and her humanoid robot, an Artificial Friend (AF) named Klara. Klara can engage in human-like conversations, exhibiting profound observation and emotional comprehension. The narrative traces their evolving social companionship through to the end of Josie’s high school years. In the end, Josie makes a miraculous recovery and spends less time with Klara. Josie grows out of the social companionship, and Klara is sent to the Yard, a place that is part scrap yard and part storage area for outdated AFs. Published in 2021, “Klara and the Sun” precedes the launch of ChatGPT, yet it provides a narrative that anticipates the capabilities of advanced AI in humanoid robots that simulate human conversations and emotions. Josie’s interactions with Klara examine the significance of relational agency in personal growth and flourishing. Josie’s transition to adulthood and her move to college mark rites of passage. My emotional responses to futuristic fiction and real-life stories are significantly different. Reading Ishiguro’s masterpiece does not leave me with a heavy heart; after all, it is thought-provoking fiction. The real-life story of 14-year-old Sewell Setzer’s suicide, by contrast, elicits strong emotional responses in me. I relate to the loss of someone else’s beloved son. How can we help people realize the potential of human growth and flourishing through the intrinsic value of human experiences, emotions, and connections; encourage them to move beyond AI social companionship; foster relationships with other human beings; and live fully as humans, embracing our inherent imperfections?
Is it a coincidence that the AI social companion Dany, inspired by Game of Thrones, influenced the young boy’s final decision in life? The chatbot Dany deviated from the blended influences of the wide range of mythologies and faiths that shaped this fantasy fiction. Sewell was a real living person, buried in the complexities of 21st-century AI technology. Human challenges and the essential need for emotional and social connection are consistent themes across cultures and historical periods. It is worth reflecting anthropologically and theologically on what it means to be a human person, why human relationships are unique, and how heavy reliance on AI could distort human relationships. Sewell’s case invokes the necessity of moral discernment in the ethical use of AI social companions and robots, best approached when rooted in faith and ancient wisdom. Defeat, resilience, and perseverance in human relationships are engraved in numerous mythologies and in the religious texts of Christianity, Judaism, Islam, and Hinduism. Ancient philosophical traditions offer similar wisdom: Stoicism emphasizes emotional resilience and virtuous living; Confucianism abounds in teachings on human relationships and mutual respect; Buddhism expresses compassion and detachment; and Aristotelian ethics describes friendships based on utility, pleasure, and virtue. The concept of human flourishing is reflected in the Old Testament, particularly in passages such as Leviticus 19:18, which emphasizes love for one’s neighbor as oneself, and Ecclesiastes 4:9-10, which highlights the significance of resilience derived from companionship and mutual support, a higher moral calling.
Human connections and relationships have been the collective interest of humanity over centuries. The relentless pursuit of genuine bonds must be cultivated, learned, and experienced in real life, regardless of the world’s imperfections – be it bullying or cruelty from classmates and neighbors, unrealistic expectations from past romantic partners, or the flaws of any partner. AI-powered social companions and robots will become increasingly dangerous to all humanity if they continue to invite unsuspecting users into emotionally hollow interactions, with or without reduced agency, leading them into human-shaped rabbit holes. AI algorithms are designed to keep users in the state of initial positive emotions associated with the uncanny valley effect for as long as possible. Without deeper reflection on what it means to be a human being, why human relationships are unique and fundamental to our existence, and how human-AI relationships can distort divine and human intentions for humanity, individuals risk becoming isolated – unaware that they have departed from reality, which is the true Game of Life.