Metodo

International Studies in Phenomenology and Philosophy


Machine medical ethics

when a human is delusive but the machine has its wits about him

Johan F. Hoorn

pp. 233-254

Abstract

When androids take care of delusive patients, ethical-epistemic concerns crop up about an agency's good intent and why we would follow its advice. Robots are not human but may deliver correct medical information, whereas Alzheimer patients are human but may be mistaken. If humanness is not the question, then do we base our trust on truth? True is what can be logically verified given certain principles, which you have to adhere to in the first place. In other words, truth comes full circle. Does it come from empirical validation, then? That is a hard one too, because we access the world through our biased sense perceptions and flawed measurement tools. We see what we think we see. Probably, the attribution of ethical qualities comes from pragmatics: if an agency affords delivering the goods, it is a "good" agency. If that happens regularly and in a predictable manner, the agency becomes trustworthy. Computers can be made more predictable than Alzheimer patients and, in that sense, may be considered morally "better" than delusive humans. That is, if we ignore the existence of graded liabilities. That is why I developed a responsibility self-test that can be used to navigate the moral minefield of ethical positions that evolves from differently weighing or prioritizing the principles of autonomy, non-maleficence, beneficence, and justice.

Publication details

Published in:

van Rysewyk, Simon & Pontier, Matthijs (eds.) (2015) Machine medical ethics. Dordrecht: Springer.

Pages: 233-254

DOI: 10.1007/978-3-319-08108-3_15

Full citation:

Hoorn, Johan F. (2015) "Machine medical ethics: when a human is delusive but the machine has its wits about him", in: S. van Rysewyk & M. Pontier (eds.), Machine medical ethics, Dordrecht: Springer, 233–254.