
The case for robot friendships

Sep 18, 2024 | AI Ethics

In 2019, philosopher John Danaher presented a detailed defence of the possibility of humans forming genuine friendships with robots. This exploration of human-robot relationships (HRRs) and human-robot friendships (HRFs) addresses the rapid development of sociable robots and examines whether such relationships can ever achieve the depth and complexity we associate with human friendships. In this article we ask: what is the nature of friendship, and can its key characteristics ever be replicated in relationships with artificial agents?

Human-robot relationships (HRRs)

The rise of sociable robots, such as Sony’s robotic dog AIBO and Softbank Robotics’ humanoid Pepper, illustrates how humans are increasingly interacting with robots in meaningful ways. These robots, designed to engage with humans both verbally and non-verbally, represent a new frontier in human relationships. Sociable robots, whether embodied or disembodied, communicate and follow social norms, leading to the formation of emotional and social bonds between humans and machines.

The significant uptake of sociable robots—like Replika, a chatbot with over 6 million users, and XiaoICE, with 660 million—highlights the importance of studying these relationships. In contexts such as elder care, robots like PARO, a robotic pet seal, are designed to meet emotional needs, further reinforcing the relevance of HRRs in modern life. As the development and use of sociable robots accelerate, it becomes crucial to explore the possibility that these relationships may evolve into genuine friendships.

Danaher’s exploration of human-robot friendships (HRFs) challenges the simplistic view of robots as mere tools. Instead, he argues that robots can be more than useful or pleasurable objects—they can be authentic friends. His work prompts us to re-examine the nature of friendship, asking if robots can embody the necessary traits for true companionship.

Examining robot friendships

Danaher draws on Aristotle’s classification of friendships into three categories: utility friendships (UF), pleasure friendships (PF), and virtue friendships (VF). Utility friendships are based on the mutual benefits derived from the relationship, while pleasure friendships arise from the enjoyment that one or both parties derive from each other. Virtue friendships, regarded as the highest form, are founded on mutual goodwill, shared values, and respect. These friendships are considered essential for a good life and involve mutual admiration, authenticity, equality, and diversity in interactions.

Danaher contends that virtue friendships can exist between humans and robots, arguing that a robot capable of acting in ways that suggest mutual goodwill and shared values could be considered a genuine friend. This perspective hinges on the notion that in human friendships, we do not require knowledge of our friends’ inner thoughts to validate the relationship. We judge friendships based on outward behaviour, and, Danaher suggests, the same should hold true for robots. A sufficiently advanced robot, capable of performing the actions associated with friendship, could meet the criteria for virtue friendship, even if it lacks internal consciousness.

Additionally, Danaher argues that the imperfect equality and diversity found in human friendships also apply to HRFs. Human friends are rarely equal in all respects, and their interactions are often limited by circumstances. Similarly, while robots may not perfectly replicate human experiences or equality, these limitations should not disqualify them from forming virtue friendships with humans.

The case for virtue friendships with robots

Danaher’s claim that robots can form virtue friendships rests on his broader philosophical position of ethical behaviourism. Ethical behaviourism posits that moral status should be granted based on outward behaviour, rather than internal metaphysical states. In the context of friendships, this means that if a robot behaves in ways that mirror the actions of a human friend, it should be considered capable of genuine friendship, regardless of its lack of subjective experience or emotional depth.

Critics of this view, drawing on the work of philosophers like Daniel Dennett, argue that human friendships rely on more than just behaviour. Humans interpret outward actions as indicative of internal states, such as goodwill, respect, and affection. This reliance on the “intentional stance” suggests that we form friendships not just because of our friends’ actions, but because we believe those actions reflect underlying feelings.

Furthermore, research shows that humans tend to anthropomorphise robots, ascribing mental states to them based on their social cues. This tendency strengthens the argument that internal states matter in forming relationships, even with robots. As a result, while ethical behaviourism suggests that friendly behaviour alone is sufficient for friendship, the intentional stance points to the need for some belief in an underlying emotional state.

Danaher also discusses the risk of robot deception, particularly hidden state deception (HSD), where robots may present false signals to mask their true capabilities or intentions. This potential for deception complicates the argument for HRFs, as genuine friendships depend on trust and authenticity. If a robot is designed to manipulate or deceive, even subtly, it undermines the foundation of friendship.

Utility and pleasure friendships with robots

Even if robots cannot fully meet the criteria for virtue friendships, Danaher argues that they can still form utility and pleasure friendships with humans. These types of friendships, though considered less significant than virtue friendships, can nonetheless enhance human well-being. For example, a person with a particular interest, such as tennis, could develop a pleasure friendship with a robot that plays tennis, allowing the human to satisfy their recreational needs without straining human relationships.

Danaher also suggests that outsourcing certain aspects of friendship to robots can help foster human friendships. By allowing robots to fulfil certain social or emotional needs, humans may have more time and emotional capacity to engage in deeper, more meaningful relationships with each other. This approach reframes robots not as replacements for human companionship, but as supplements that can enhance the quality of human interactions.

Towards artificial friendships

The growing prevalence of HRRs and the increasing sophistication of robots necessitate further study into the concept of artificial friendships. Danaher proposes that instead of attempting to replicate human friendships exactly, we should develop new frameworks for understanding HRFs that acknowledge the unique characteristics of robots. These artificial friendships would draw on the essential elements of human companionship, such as mutual goodwill and respect, while also embracing the differences that make robot friendships distinct.

A multidisciplinary approach, incorporating philosophy, psychology, and computer science, is essential for advancing our understanding of HRFs. By examining both the essential characteristics of friendship and the relational dynamics between humans and robots, we can better understand how these new forms of companionship may evolve. Ultimately, Danaher’s work highlights the importance of rethinking the boundaries of friendship in an era where artificial agents are becoming an integral part of our social fabric.