Moral mechanisms

David Davenport

pp. 47-60

Abstract

As highly intelligent autonomous robots are gradually introduced into the home and workplace, ensuring public safety becomes extremely important. Given that such machines will learn from interactions with their environment, standard safety engineering methodologies may not be applicable. Instead, we need to ensure that the machines themselves know right from wrong; we need moral mechanisms. Morality, however, has traditionally been considered a defining characteristic, indeed the sole realm, of human beings: that which separates us from animals. But if only humans can be moral, can we build safe robots? If computationalism (roughly, the thesis that cognition, including human cognition, is fundamentally computational) is correct, then morality cannot be restricted to human beings, since equivalent cognitive systems can be implemented in any medium. On the other hand, perhaps there is something special about our biological makeup that gives rise to morality, in which case computationalism is effectively falsified. This paper examines these issues by looking at the nature of morals and the influence of biology. It concludes that moral behaviour is concerned solely with social well-being, independent of the nature of the individual agents that comprise the group. While our biological makeup is the root of our concept of morals and clearly affects human moral reasoning, there is no basis for believing that it will restrict the development of artificial moral agents. The consequences of such sophisticated artificial mechanisms living alongside natural human ones are also explored.

Publication details

Published in:

Gunkel David J., Bryson Joanna J. (eds) (2014) Machine morality. Philosophy & Technology 27 (1).

Pages: 47-60

DOI: 10.1007/s13347-013-0147-2

Full citation:

Davenport David (2014) "Moral mechanisms". Philosophy & Technology 27 (1), 47–60.