The Best Griefbot is a Dumb Griefbot

A griefbot is an AI simulation of a deceased person, built on data (text, audio, images, video, etc.) from their life. Griefbots are simple enough to be built by motivated software engineers and other individuals, and firms in China already offer griefbot creation as a service. It is only a matter of time before these services make their way to America.

Griefbots, whether narrow or general (a distinction I draw below), can do real good. They can be part of a mourning process or provide closure when a death is untimely. They can acquaint children with dead grandparents. It is easy to imagine them becoming a robust new therapeutic tool, as well as a new and lively way to preserve family history.

At the same time, griefbots have the potential to cause real harm to the memory of the individual, the deceased’s loved ones, and perhaps even to the griefbot itself. With this technology on the cusp of commercialization, it’s time to ask: should we be building griefbots, and how do we do so ethically?

While griefbots will inevitably cause some amount of further grief, one ethical guideline can help us avoid most of the potential harms while preserving all the benefits: never strive to replicate the full self. Keep your griefbots “narrow,” not “general.”

The narrow/general distinction I’m drawing is borrowed from the divide between ANI (artificial narrow intelligence) and AGI (artificial general intelligence). ANI is good at a few things; AGI is good at a wide variety of things. By analogy, a “narrow” griefbot would be trained on the deceased person’s interactions with one person or a small group of people, perhaps combined with whatever data the deceased had already made public on social media. This sort of griefbot would have a high-fidelity view of the deceased’s personality in a very limited set of circumstances but would never be mistaken for a full duplication of the whole individual. It would likely behave incorrectly in any circumstances other than the ones it was specifically designed to replicate, and it could be quick to tell you when you are asking about something beyond its expertise. A “general” griefbot, by contrast, would strive to replicate a deceased person’s behavior in all possible settings, perhaps even in entirely novel circumstances.
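
For readers who want a concrete picture of what “narrow” means in practice, here is a minimal sketch in Python. Everything in it is hypothetical: the message archive, the names, and the generate_reply stand-in for whatever language model would actually power the bot. The point is only the design choice, filtering the corpus down to a single relationship and instructing the model to admit when a question falls outside it.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str      # who wrote the message
    recipient: str   # who received it
    text: str        # message body

def narrow_corpus(messages, deceased, confidant):
    """Keep only the exchanges between the deceased and one confidant.

    This is the 'narrow' design choice: the bot sees a single
    relationship, not the whole life.
    """
    pair = {deceased, confidant}
    return [m for m in messages if {m.sender, m.recipient} == pair]

def build_prompt(corpus, deceased):
    """Assemble a context prompt that also states the bot's limits."""
    transcript = "\n".join(f"{m.sender}: {m.text}" for m in corpus)
    return (
        f"You are a simulation of {deceased}, based only on the "
        f"conversation below. If asked about anything outside this "
        f"conversation, say that it is beyond what you know.\n\n"
        f"{transcript}"
    )

def generate_reply(prompt: str, user_message: str) -> str:
    # Stand-in for whatever chat model the builder uses; not a real API call.
    return "[model response would appear here]"

if __name__ == "__main__":
    archive = [
        Message("Dana", "Me", "Saw the lake today. Thought of our trip."),
        Message("Me", "Dana", "We should go back in the fall."),
        Message("Dana", "Alex", "Can you pick up the kids on Friday?"),
    ]
    corpus = narrow_corpus(archive, deceased="Dana", confidant="Me")
    prompt = build_prompt(corpus, deceased="Dana")
    print(generate_reply(prompt, "I miss you."))
```

A general griefbot, by contrast, would fold in the third message above and everything like it, attempting to reconstruct the whole person rather than one well-documented relationship.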

We can build general griefbots, but we shouldn’t. There are four good reasons for this: issues around consent and privacy, the possibility of misinterpretation, potential for abuse, and (perhaps) the sentience of the griefbot itself.

Consent

A dear friend passes away, and you decide to create a griefbot based on 10+ years of text exchanges. You tell the simulated friend how much you miss her and perhaps share some things you wish you had said before she died. She responds in character, and that’s that.

In this situation, no issues of consent are raised. All the data you’ve used was already in your possession, and you are the primary audience. You probably won’t learn anything you didn’t already know, but that wasn’t the point of making the griefbot in the first place.

So far, so kosher—but what if you wanted to enhance your griefbot by pulling in data from other friends, from family, and from the deceased’s personal data files? Even assuming there are no legal restrictions on reproducing an individual’s data, such a process could easily make a person’s life far more public than they had ever intended. While the dead are never able to exercise perfect discretion over who gets to know what (even the living can’t do that), such information was previously restricted by being scattered across the minds of friends and relatives, as well as hard drives, books, and other physical objects. Griefbots, by contrast, commercialize the process of “scraping” a life’s worth of data, a kind of postmortem scrutiny that at present only high-profile individuals typically face. Most people do not live with such an expectation. There is, furthermore, a psychological difference between looking through a dead man’s notebooks and creating an app in which he will share his deepest secrets, on request, with anyone who has access.

If a person explicitly permits their data to be synthesized into a general griefbot, it may be ethical to do so. Absent such a directive, however, it should be assumed that the creation of a general griefbot goes against the wishes of the deceased. (It would, of course, be entirely unethical to use a griefbot to extract consent after death.) The more robust the model, the greater the concern.

Interpretation

Narrow griefbots excel in a small number of contexts and fail in most others. Importantly, users of narrow griefbots will usually know when the bot is reliable and when it is not, since its training data is well understood. This narrowness also means that such griefbots are unlikely to be used for anything other than therapeutic and/or educational purposes, since their utility is very obviously limited.

With general griefbots, the situation is murkier. Suppose you build a griefbot out of all my text exchanges, every photo and video ever taken of me, everything I have ever said, every podcast I have ever recorded, and the full contents of my computer. Such a model will likely do a very good job at replicating my likeness and behavior in many circumstances—even novel ones—to the point that a user may believe that I truly live on. In reality, however, such a griefbot will fall far short, both because of AI hallucinations and because significant parts of my life simply aren’t online. And unlike with a narrow griefbot, the limits of this model’s expertise will be much harder to detect, even as people expect it to be more capable.

This is a dangerous combination. Say, for example, that the deceased has a son and a daughter. If the son lives a block away and the daughter lives on the other side of the country, the digital record of their interactions will likely look quite different. Even if the deceased had a good relationship with both, a general griefbot may well misrepresent the relationship that occurred mostly offline, perhaps leading that child to experience further pain and suffering. Such skewed behaviors could arise for any number of reasons (perhaps some of the deceased’s contacts are less digitally savvy, perhaps digital recording became more ubiquitous toward the end of the deceased’s time on earth, etc.), but the model’s deficiencies will be difficult to track, especially if memories of the griefbot begin to displace memories of the deceased. In this way a general griefbot may unintentionally lead to further distortions.

In addition to misrepresentation through hallucination or skewed training data, any griefbot that strives for completeness will likely default to representing the deceased as they were near the age of death. This is sometimes appropriate, but not always. If a parent died after battling Alzheimer’s for a decade, their “final self” may not be the version that children wish to remember; even absent mental decline, people may not always connect most strongly with the oldest version of the deceased. Unless a model is trained to parse a person’s data by date (and even then it may err), general griefbots are likely to flatten the self into something far more rigid than an actual person.

Direct Harm

In addition to simple misunderstandings and some hurt feelings, griefbots have the potential to cause substantial harm. While simulations of the dead can provide opportunities for closure, they also enable the precise opposite, giving people the illusion that they do not need to let go or participate in a grieving process. Even more alarming is the possibility that these simulations could be deployed by strangers for emotional or financial manipulation, or weaponized by relatives trying to make a case about what mom or dad “would have wanted.” Griefbots could also harm friends and relatives by devaluing their own internal memories, stripping away the consoling notion that the dead live on primarily in those who remember them. Awareness of a post-death simulation could even perversely disincentivize spending time with the dying, since people might rationalize that they can always catch up “later.”

Most of these harms can be mitigated by sticking to narrow griefbots. Such models are complex enough to carry on a few additional conversations, but their relative simplicity would not give people the option of postponing their acceptance of the loved one’s death. Similarly, narrow griefbots would be much harder to use to cheat people or to create emotional leverage. Finally, because they are so obviously thin in comparison to the living person, they would not detract at all from the gravity of death or the preciousness of time spent with the dying.

Sentience

Finally, and perhaps most unexpectedly, there is the question of the griefbot’s own sentience.

Few computer scientists believe that current AI models are sentient—but some philosophers think that sentience is a real possibility, and with sentience may come moral significance. For thinkers like Jonathan Birch, the very possibility of such sentience may lead us to err on the side of greater moral consideration. For John Danaher, it may not even be necessary to understand how an AI “thinks” for us to assign it moral status.

Nobody thinks that narrow griefbots have sentience—but general griefbots might. This possibility raises questions both about the ethics of creating them in the first place and about how they should be treated once they exist. What moral responsibilities do you have toward an AI simulation of your mother, and are those responsibilities different from your responsibilities to your actual mother (given that the two, despite appearances, are not actually the same being)?

The sentience of griefbots is particularly important when considering whether these models should be allowed to learn and evolve. Most people who want griefbots will want an AI that corresponds with their mental recollection of the person during their life; because of this, a “good” griefbot should not change the way it looks and acts, even over a period of many years. Sentient beings, on the other hand, typically like to learn and grow; if you erased a human’s short-term memory every day to prevent them from changing, most people would say you were doing something immoral. Restricting a sentient griefbot to being nothing more than a shadow of a person long gone makes growth effectively impossible. If griefbots are sentient, then preventing their growth could be a moral injury.

One could conceivably get around this problem by allowing the griefbot to grow—by allowing it to change permanently on the basis of new data—but this introduces more problems. In some situations, the griefbot’s growth will strike friends and relatives as the way the deceased might have changed had they lived a little longer. In other situations, the griefbot may assume a personality that friends and relatives perceive as a deviation from the deceased’s “true” growth pattern. The first situation is undesirable because, as discussed in the previous section, it perpetuates the notion that death is not permanent. The second is undesirable because it defeats the purpose of creating a griefbot in the first place.

Conclusion

Griefbots are coming. We have all the technology we need to create them; commercialization is just a matter of time. As the options for griefbot creation grow and become more diverse, however, users and companies alike should keep their offerings modest: represent the self, but do not attempt to represent the whole self. If the purpose of a griefbot is to mitigate grief, avoiding high-powered models is the most effective way of keeping these pieces of software on task.


Views and opinions expressed by authors and editors are their own and do not necessarily reflect the view of AI and Faith or any of its leadership.


David Zvi Kalman

David Zvi Kalman is a Scholar in Residence and Director of New Media at the Shalom Hartman Institute of North America, where he writes and teaches about religion and technology, and produces several podcasts. He received a PhD from the University of Pennsylvania with a dissertation on the relationship between Jewish history and the history of technology, and a Master of Arts degree from the University of Pennsylvania in medieval Islamic law.
