Metodo

International Studies in Phenomenology and Philosophy

Developing automated deceptions and the impact on trust

Frances S. Grodzinsky, Keith W. Miller, Marty J. Wolf

pp. 91-105

Abstract

As software developers design artificial agents (AAs), they often have to wrestle with complex issues, issues that have philosophical and ethical importance. This paper addresses two key questions at the intersection of philosophy and technology: What is deception? And when is it permissible for the developer of a computer artifact to be deceptive in the artifact's development? While exploring these questions from the perspective of a software developer, we examine the relationship of deception and trust. Are developers using deception to gain our trust? Is trust generated through technological "enchantment" warranted? Next, we investigate more complex questions of how deception that involves AAs differs from deception that only involves humans. Finally, we analyze the role and responsibility of developers in trust situations that involve both humans and AAs.

Publication details

Published in:

Buchanan, Elizabeth; Taddeo, Mariarosaria (eds.) (2015) Information societies, ethical enquiries. Philosophy & Technology 28 (1).

Pages: 91-105

DOI: 10.1007/s13347-014-0158-7

Full citation:

Grodzinsky, Frances S.; Miller, Keith W.; Wolf, Marty J. (2015) "Developing automated deceptions and the impact on trust". Philosophy & Technology 28 (1), 91–105.