A design methodology for trust cue calibration in cognitive agents
Abstract
As decision support systems have adopted more advanced algorithms to support the human user, it has become increasingly difficult for operators to verify and understand how the automation reaches its decisions. This paper describes a design methodology to enhance operators' decision making by providing trust cues so that their perceived trustworthiness of a system matches its actual trustworthiness, thus yielding calibrated trust. These trust cues are visualizations that help operators diagnose the actual trustworthiness of the system by showing the risk and uncertainty of the associated information. We present a trust cue design taxonomy that lists all possible information that can influence a trust judgment. We apply this methodology to a scenario with advanced automation that manages missions for multiple unmanned vehicles, and show specific trust cues for five levels of trust evidence. By focusing on both individual operator trust and the transparency of the system, our design approach supports calibrated trust for optimal decision making during all phases of mission execution.
Publication details
Published in:
Shumaker, Randall & Lackey, Stephanie (eds.) (2014) Virtual, Augmented and Mixed Reality: Designing and Developing Virtual and Augmented Environments. Dordrecht, Springer.
Pages: 251-262
DOI: 10.1007/978-3-319-07458-0_24
Full citation:
de Visser, Ewart J., Cohen, Marvin, Freedy, Amos & Parasuraman, Raja (2014) "A design methodology for trust cue calibration in cognitive agents", in: R. Shumaker & S. Lackey (eds.), Virtual, Augmented and Mixed Reality: Designing and Developing Virtual and Augmented Environments, Dordrecht, Springer, 251–262.