Metodo

International Studies in Phenomenology and Philosophy


Moral enhancement and artificial intelligence: moral AI?

Julian Savulescu, Hannah Maslen

pp. 79-95

Abstract

This chapter explores the possibility of moral artificial intelligence – what it might look like and what it might achieve. Against the backdrop of the enduring limitations of human moral psychology and the pressing challenges inherent in a globalised world, we argue that an AI that could monitor, prompt and advise on moral behaviour could help human agents overcome some of their inherent limitations. Such an AI could monitor physical and environmental factors that affect moral decision-making, could identify biases and make agents aware of them, and could advise agents on the right course of action, based on the agent's own moral values. A common objection to the concept of moral enhancement is that, since no single account of right action can be agreed upon, the project of moral enhancement is doomed to failure. We argue that insofar as this is a problem, it is a problem for some biomedical interventions; an agent-tailored moral AI, by contrast, would not only preserve pluralism of moral values but would also enhance the agent's autonomy by helping him to overcome his natural psychological limitations. In this way, moral AI has an advantage over other forms of biomedical moral enhancement.

Publication details

Published in:

Romportl Jan, Zackova Eva, Kelemen Jozef (2015) Beyond artificial intelligence: the disappearing human-machine divide. Dordrecht, Springer.

Pages: 79-95

DOI: 10.1007/978-3-319-09668-1_6

Full citation:

Savulescu Julian, Maslen Hannah (2015) "Moral enhancement and artificial intelligence: moral AI?", in: J. Romportl, E. Zackova & J. Kelemen (eds.), Beyond artificial intelligence, Dordrecht, Springer, 79–95.