Supporting Trust Calibration and Attention Management in Human-Machine Teams Through Training and Real-Time Feedback on Estimated Performance
Lieberman, Kevin
2022
Abstract
Trust, the attitude that an agent will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability (Lee & See, 2004), plays a critical role in supervisory control and human-machine teaming. Poor trust calibration, i.e., a lack of correspondence between a person’s trust in a system and the system’s actual capabilities, leads to inappropriate reliance on, or rejection of, the technology. Trust also affects attention management and the monitoring of increasingly autonomous systems. Overtrust results in excessive neglect time (the time the machine agent operates without human intervention), while distrust leads operators to spend too much time supervising a system at the cost of performing other tasks. To address these challenges, this research examined how training and real-time information about system confidence can support trust calibration and effective monitoring of modern technologies. Specifically, the aims of this research were (1) to compare the effectiveness of active, experiential training with more traditional forms of instruction on mental model development, trust resolution (i.e., the ability to distinguish contexts in which a machine can be trusted from those in which it requires close supervision), and attention management (experiment 1), and (2) to assess how various visual and auditory representations of a machine’s confidence in its own ability (experiments 2 and 3) and the framing of a machine’s estimated accuracy as confidence or uncertainty (experiment 3) affect trust specificity (i.e., shifts in trust based on incremental variations in machine capability over time), monitoring, and reliance on technology. The research was conducted in the context of supervisory control of multiple unmanned aerial vehicles (UAVs). The first, longitudinal study showed that participants who received experiential training had the fewest gaps in their mental model of the multi-UAV system, compared to participants who received more traditional forms of training. They appropriately lowered their trust and monitored a UAV’s health more closely when its environment reduced the UAV’s capabilities. Findings from the second and third studies demonstrated that real-time feedback on a machine’s estimated accuracy facilitates trust specificity and effective monitoring. Specifically, the second study compared visual and auditory representations of system confidence. It showed that the choice of display depends on the intended domain of use. Auditory confidence displays are preferable to visual indications in environments that suffer from visual data overload, as the former avoid resource competition and support time-sharing. The third study compared two different visual representations (hue- versus salience-based) of system confidence and examined the impact of framing a machine’s estimated accuracy as confidence or uncertainty. Indicating a machine’s uncertainty (rather than confidence) in its performance led to closer monitoring of UAVs and smaller trust decrements when the machine’s estimated accuracy was low. Also, participants were better able to distinguish between levels of confidence and uncertainty with a hue-based representation that employed a familiar color scheme (red-yellow-green) than with a monochrome salience-based representation. At a conceptual level, this research adds to the knowledge base in trust, transparency, and attention management related to supervisory control and human-machine teaming in high-tempo, complex environments.
This line of research also makes significant contributions to the development and validation of subjective and eye-tracking-based methods for assessing trust in technology. Finally, from an applied perspective, the findings can inform the design of training and interfaces to support the safe adoption and operation of human-machine systems in a wide range of safety-critical domains.
Subjects
trust in automation; supervisory control; experiential learning; attentional processes; shared mental models; human-machine teaming
Types
Thesis