
Algorithmic decision-making based on machine learning from big data: can transparency restore accountability?

Paul B. de Laat

pp. 525–541

Abstract

Decision-making assisted by algorithms developed through machine learning increasingly determines our lives. Unfortunately, full opacity about the process is the norm. Would transparency help restore accountability for such systems, as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosing the algorithms themselves ("gaming the system" in particular), the potential loss of companies' competitive edge, and the limited gains in answerability to be expected, since sophisticated algorithms are usually inherently opaque. It is concluded that, at least at present, full transparency for oversight bodies alone is the only feasible option; extending it to the public at large is normally not advisable. Moreover, it is argued that algorithmic decisions should preferably become more understandable; to that effect, the machine learning models employed should either be interpreted ex post or be interpretable by design ex ante.

Publication details

Published in:

d'Agostino Marcello, Durante Massimo (2018) The governance of algorithms. Philosophy & Technology 31 (4).

Pages: 525–541

DOI: 10.1007/s13347-017-0293-z

Full citation:

de Laat Paul B. (2018) „Algorithmic decision-making based on machine learning from big data: can transparency restore accountability?“. Philosophy & Technology 31 (4), 525–541.