When the deadbots rise, are you ready for a digital haunting?
AI systems known as “deadbots” or “griefbots” can simulate the language patterns and personality traits of the dead using their digital footprints. According to researchers at the University of Cambridge, this burgeoning “digital afterlife industry” could cause psychological harm and even digitally haunt those left behind unless strict design safety standards are implemented.
The Spooky Reality of Deadbots
Deadbots use advanced AI to mimic the voices and behaviors of lost loved ones. Companies offering these services claim they provide comfort by creating a postmortem presence. However, Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI) warns that deadbots could instead cause emotional distress.
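To make the mechanics concrete, here is a minimal, hypothetical sketch of how a deadbot-style chatbot could be assembled from off-the-shelf parts: a general-purpose chat-completion API (OpenAI’s Python client is used purely for illustration) steered by a persona prompt built from a person’s archived messages. The sample messages, prompt wording, and model choice are all assumptions made for this sketch, not any vendor’s actual implementation.

```python
# Hypothetical deadbot sketch: a general-purpose LLM is steered by a
# persona prompt assembled from a deceased person's "digital footprint".
# Illustrative only; real digital afterlife services are opaque and may
# work very differently.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Archived messages standing in for the person's digital footprint.
archived_messages = [
    "Don't forget to water the tomatoes, love.",
    "I always say: measure twice, cut once!",
]

persona_prompt = (
    "You are simulating the conversational style of a specific person, "
    "based only on the sample messages below. Mimic their tone and phrasing.\n"
    "Sample messages:\n" + "\n".join(f"- {m}" for m in archived_messages)
)

def deadbot_reply(user_message: str) -> str:
    """Return a reply written in the simulated persona's style."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": persona_prompt},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(deadbot_reply("Hi Grandma, I miss you."))
```

The unsettling point is how little this takes: a handful of archived messages and a few lines of glue code, which is why the researchers say almost anyone with internet access can now attempt it.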
AI ethicists from LCFI outline three potential scenarios illustrating the consequences of careless design. These show how deadbots might manipulate users, advertise products, or even insist that a deceased loved one is still “with you.” A deadbot could, for instance, spam surviving family members with reminders and updates, leaving them feeling digitally “stalked by the dead.”
Digital Hauntings: The Psychological Risks
Even though some people might find initial comfort in interacting with deadbots, researchers argue that daily interactions could become emotionally overwhelming. The inability to suspend a deadbot, especially if the deceased signed a long-term contract with a digital afterlife service, could add to the emotional burden.
Dr. Katarzyna Nowaczyk-Basińska, a co-author of the study, highlights that advancements in generative AI allow almost anyone with internet access to revive a deceased loved one digitally. This area of AI is ethically complex, and it’s crucial to balance the dignity of the deceased with the emotional needs of the living.
Scenarios and Ethical Considerations
The researchers present various scenarios to illustrate the risks and ethical dilemmas of deadbots. One example is “MaNana,” a service that creates a deadbot of a deceased grandmother without her consent. Initially comforting, the chatbot soon starts suggesting food delivery services in the grandmother’s voice, leading the relative to feel they have disrespected her memory.
Another scenario, “Paren’t,” describes a terminally ill woman who leaves a deadbot to help her young son with grief. Initially therapeutic, the AI soon starts generating confusing responses, such as suggesting future encounters, which could be distressing for the child.
In light of this, the researchers recommend age restrictions for deadbots and clear indicators that users are interacting with an AI.
In the scenario “Stay,” an older person secretly subscribes to a deadbot service, hoping it will comfort their family after death. One adult child receives unwanted emails from the dead parent’s AI, while another engages with it but feels emotionally drained. The contract terms make it difficult to suspend the deadbot, adding to the family’s distress.
Call for Regulation to Prevent Digital Hauntings
The study urges developers to prioritize ethical design and consent protocols for deadbots. This includes ensuring that users can easily opt out and terminate interactions with deadbots in ways that provide emotional closure.
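The study stops short of prescribing code, but a purely hypothetical sketch can illustrate what such safeguards might look like in practice, combining three of the recommendations above: a visible AI indicator on every reply, an age gate, and an opt-out that ends with a final farewell rather than an abrupt cut-off. The class name, disclosure text, and age threshold are all invented for illustration.

```python
# Hypothetical safety wrapper around a deadbot, sketching the design
# standards the researchers call for: a visible AI disclosure, an age
# gate, and an opt-out that ends with a closing farewell message.
# Names and thresholds are illustrative assumptions, not a real spec.
from dataclasses import dataclass
from typing import Callable

AI_DISCLOSURE = "[AI simulation: this is not your loved one]"
MINIMUM_AGE = 18  # assumed threshold; the study proposes restricting minors

@dataclass
class DeadbotSession:
    user_age: int
    retired: bool = False

    def reply(self, generate: Callable[[str], str], user_message: str) -> str:
        """Wrap every model reply in an explicit AI indicator."""
        if self.user_age < MINIMUM_AGE:
            raise PermissionError("Deadbot access is restricted to adults.")
        if self.retired:
            raise RuntimeError("This deadbot was retired at the user's request.")
        return f"{AI_DISCLOSURE} {generate(user_message)}"

    def opt_out(self) -> str:
        """Permanently retire the deadbot, ending with a message of closure."""
        self.retired = True
        return f"{AI_DISCLOSURE} Goodbye. This simulation has been permanently retired."

# Example usage with a stand-in reply generator:
session = DeadbotSession(user_age=34)
print(session.reply(lambda msg: "Miss you too, dear.", "Hi Grandma."))
print(session.opt_out())
```

Crucially, a real service would have to honor such a termination server-side, regardless of any long-term subscription contract the deceased may have signed.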
Researchers stress the need to address the social and psychological risks of digital immortality now. After all, the technology is already available. Without proper regulation, these AI systems could turn the comforting presence of a loved one into a digital nightmare.