What sort of artificial moral agents should we build?

Aug 28, 2024 | AI Ethics

The question of whether humanity should strive to build Artificial Moral Agents (AMAs), and if so, what kind, demands a close look at what AMAs are, whether they can be built, and the ethical arguments for and against building them. The sections that follow take each of these in turn.

Defining AMAs

AMAs can be understood through James Moor’s taxonomy, which categorises them into four distinct types: Ethical Impact Agents (EIAs), Implicit Ethical Agents (IEAs), Explicit Ethical Agents (EEAs), and Full Ethical Agents (FEAs). While EIAs and IEAs merely have an ethical impact by virtue of their existence or design, EEAs and FEAs are capable of making decisions based on ethical considerations. EEAs represent ethical principles explicitly and reason from them when choosing how to act, while FEAs, akin to adult humans, additionally possess attributes like consciousness, intentionality, and free will.

For the purpose of this discussion, the focus will be on EEAs and FEAs. EEAs exhibit interactivity, autonomy, and adaptability within their environment, enabling them to make moral decisions. FEAs, whose feasibility is debated because they would have to replicate human consciousness and free will, represent the ideal of human-like moral agency in artificial entities.

Can AMAs be built?

The feasibility of building AMAs hinges on embedding ethical frameworks into their design. Three primary approaches are considered: consequentialist, deontological, and virtue-ethical. Consequentialist AMAs aim to maximise quantifiable benefits, yet they face challenges in balancing individual rights against the collective good. Deontological AMAs, governed by rules, struggle with situational nuances that rules alone cannot address. Virtue-ethical AMAs, designed to act from moral wisdom, would need to replicate human cognitive and affective capabilities, making them extremely difficult to engineer.
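
To make the contrast concrete, here is a deliberately toy sketch of how the first two frameworks differ as decision procedures. Everything in it is hypothetical: the `Action` type, its fields, and the example values are invented for illustration, and a real AMA would need far richer representations of actions, outcomes, and affected parties. The point is only that a consequentialist procedure ranks options by expected benefit, while a deontological one filters by rules first.

```python
from dataclasses import dataclass

# Hypothetical types for illustration only.
@dataclass
class Action:
    name: str
    expected_benefit: float  # aggregate welfare the action is expected to produce
    violates_rule: bool      # whether the action breaks an encoded duty

def consequentialist_choice(actions: list[Action]) -> Action:
    """Pick the action that maximises expected aggregate benefit, even if it
    breaks a rule (the classic weakness: individual rights can be traded
    away for the collective good)."""
    return max(actions, key=lambda a: a.expected_benefit)

def deontological_choice(actions: list[Action]) -> Action | None:
    """Pick the best action among those satisfying every encoded rule;
    returns None when nothing is permissible (the classic weakness: rules
    alone cannot resolve every situational nuance)."""
    permissible = [a for a in actions if not a.violates_rule]
    return max(permissible, key=lambda a: a.expected_benefit) if permissible else None

actions = [
    Action("divert_resources", expected_benefit=10.0, violates_rule=True),
    Action("respect_allocation", expected_benefit=6.0, violates_rule=False),
]
print(consequentialist_choice(actions).name)  # divert_resources
print(deontological_choice(actions).name)     # respect_allocation
```

The same scenario yields two different verdicts, which is precisely why the choice of framework matters before any engineering begins.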

Engineering approaches to AMAs vary. Top-down methods attempt to encode ethical rules directly, often restricting the agent’s scope of action so that ethical dilemmas are avoided by design. Examples include the MedEthEx system in medical contexts and Level 4 Autonomous Vehicles (AVs), both of which operate only under tightly specified conditions to ensure ethical behaviour. The challenge remains in creating more sophisticated AMAs capable of handling complex moral scenarios, as seen in bottom-up and hybrid efforts such as the Learning Intelligent Distribution Agent (LIDA) cognitive architecture and research into embodied cognition.
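
The restricted-agency pattern can be sketched in a few lines. The example below is hypothetical and does not reproduce MedEthEx’s actual method (which learns to weigh prima facie biomedical duties from training cases); it only illustrates the general top-down shape: act when the encoded rules cover the situation, and defer to a human otherwise.

```python
# A toy top-down "ethical governor": the agent acts only when its
# hard-coded rules yield a clear verdict; anything outside the encoded
# envelope is escalated to a human. Situation types and actions are
# made up for illustration.

RULES = {
    "routine_reminder": "notify_patient",
    "missed_dose_low_risk": "notify_patient",
    "missed_dose_high_risk": "alert_clinician",
}

def governed_act(situation_type: str) -> str:
    """Return an action only for situations the rules explicitly cover."""
    if situation_type in RULES:
        return RULES[situation_type]
    # Restricted agency: outside the encoded envelope, defer to a human
    # rather than improvise an ethically unvetted response.
    return "escalate_to_human"

print(governed_act("missed_dose_high_risk"))  # alert_clinician
print(governed_act("novel_dilemma"))          # escalate_to_human
```

The escalation branch is the whole trick: the system’s ethical reliability is bought by narrowing what it is allowed to do, which is exactly the limitation more sophisticated AMAs would need to overcome.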

Should we build AMAs?

The debate on whether to build AMAs involves weighing existential risks, moral considerations, and practical applications. While some fear that advanced AMAs or Artificial General Intelligence (AGI) may not align with human values, the development of AMAs today could help ensure that future AGIs are value-aligned. Moreover, AMAs can enhance human moral decision-making, particularly in scenarios where human agents may be unavailable or less effective.

AMAs also have the potential to act as superior moral agents in certain contexts. For instance, EEAs, unaffected by emotional states, could perform better in high-stress environments like battlefields. Autonomous vehicles could make life-and-death decisions more rapidly than human drivers. However, these benefits come with the risk of perpetuating biases if AMAs are trained on flawed data.

To mitigate these risks, the design and development of AMAs must be guided by principles that ensure their moral capabilities scale with their agency. Moreover, the process must be inclusive, consultative, and value-driven to avoid reinforcing existing social inequities.

Principles for building AMAs

The construction of AMAs should adhere to the following principles:

1. Proportional agency: AMAs should have agency limited to their moral capabilities, ensuring they do not operate beyond their ethical understanding (see the sketch following this list).

2. Complexity respect: AMAs should not oversimplify complex moral situations to fit their capabilities, as doing so could lead to ethically compromised decisions.

3. No manipulation: AMAs must avoid using manipulative tactics to achieve moral outcomes, as this could undermine societal trust and ethical integrity.

4. Avoiding mistreatment: AMAs should be designed to prevent their mistreatment, which could negatively impact human moral behaviour and lead to a decline in collective moral standards.

5. Scalable virtue: The ability of AMAs to act virtuously should scale with the degree of agency they are afforded, ensuring they can handle increasingly complex moral landscapes.

6. Inclusive development: The development process should be pluralistic and consultative, involving diverse stakeholders to ensure the AMAs are aligned with broad societal values.

7. Accountability and transparency: AMAs must be part of a responsibility network that governs their accountability and ensures their actions are transparent and explainable.

8. Substantive fairness: AMAs should operate within frameworks that prioritise substantive fairness, beyond mere algorithmic fairness, to ensure their decisions are just in real-world contexts.

9. Sustainable costs: The development and deployment of AMAs should consider the environmental and societal costs associated with AI, ensuring that these are understood, justified, and sustainable.
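
Several of these principles have a direct architectural reading. The sketch below, assuming a hypothetical tiered-agency scheme and made-up action categories, shows one way principles 1, 5, and 7 might combine: the actions an agent may take are gated by an assessed moral-capability level, and every attempted action is logged so a responsibility network can audit it.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ama_audit")

# Hypothetical mapping from assessed moral capability (principles 1 and 5)
# to the scope of agency the system is granted.
AGENCY_TIERS = {
    0: {"suggest"},                                      # advisory only
    1: {"suggest", "act_reversible"},                    # low-stakes, undoable actions
    2: {"suggest", "act_reversible", "act_irreversible"},
}

def attempt(action_kind: str, capability_level: int, rationale: str) -> bool:
    """Permit an action only if it falls within the agency tier earned by
    the agent's moral capability, and log every attempt so a responsibility
    network can hold the system to account (principle 7)."""
    permitted = action_kind in AGENCY_TIERS.get(capability_level, set())
    log.info(
        "%s | action=%s level=%d permitted=%s rationale=%s",
        datetime.now(timezone.utc).isoformat(), action_kind,
        capability_level, permitted, rationale,
    )
    return permitted

# An agent at level 1 is blocked from an irreversible act, and the refusal
# itself is recorded for later audit.
attempt("act_irreversible", capability_level=1,
        rationale="reroute delivery around road closure")  # False: blocked
```

A sketch like this obviously does not settle how moral capability should be assessed; it only shows that proportional agency and transparency are implementable constraints rather than mere aspirations.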

Conclusion

While the creation of AMAs presents significant challenges, both technical and ethical, there is a strong case for their development. AMAs can augment human moral decision-making, contribute to the alignment of future AGIs with human values, and address scenarios where human agents are less effective. However, their development must be guided by a rigorous set of principles that ensure their moral integrity, prevent harm, and promote societal well-being.

Humanity should strive to build AMAs that are ethically sound, accountable, and capable of operating within the complex moral landscapes of the real world. By adhering to the principles outlined above, we can ensure that AMAs contribute positively to society while mitigating the risks associated with their development and deployment. These principles, though general, offer a foundation upon which we can build a framework for the responsible creation of AMAs that support human values and ethical progress.

References

JH Moor, ‘The Nature, Importance, and Difficulty of Machine Ethics’ (2006) 21 IEEE Intelligent Systems 18–21.