Supporting Trust Calibration and Attention Management in Human-Machine Teams Through Training and Real-Time Feedback on Estimated Performance

dc.contributor.author: Lieberman, Kevin
dc.date.accessioned: 2022-05-25T15:26:22Z
dc.date.available: 2024-05-01
dc.date.available: 2022-05-25T15:26:22Z
dc.date.issued: 2022
dc.date.submitted: 2022
dc.identifier.uri: https://hdl.handle.net/2027.42/172684
dc.description.abstract: Trust, the attitude that an agent will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability (Lee & See, 2004), plays a critical role in supervisory control and human-machine teaming. Poor trust calibration, i.e., a lack of correspondence between a person’s trust in a system and its actual capabilities, leads to inappropriate reliance on, or rejection of, the technology. Trust also affects attention management and the monitoring of increasingly autonomous systems. Overtrust results in excessive neglect time (the time the machine agent operates without human intervention), while distrust makes operators spend too much time supervising a system at the cost of performing other tasks. To address these challenges, this research examined how training and real-time information about system confidence can support trust calibration and effective monitoring of modern technologies. Specifically, the aims of this research were (1) to compare the effectiveness of active, experiential training with more traditional forms of instruction on mental model development, trust resolution (i.e., the ability to distinguish contexts in which a machine can be trusted from those in which it requires close supervision), and attention management (experiment 1), and (2) to assess how various visual and auditory representations of a machine’s confidence in its own ability (experiments 2 and 3) and the framing of a machine’s estimated accuracy as confidence or uncertainty (experiment 3) affect trust specificity (i.e., shifts in trust based on incremental variations in machine capability over time), monitoring, and reliance on technology. The research was conducted in the context of supervisory control of multiple unmanned aerial vehicles (UAVs). The first, longitudinal study showed that participants who received experiential training had the fewest gaps in their mental model of the multi-UAV system, compared to participants who received more traditional training. They appropriately lowered their trust and monitored a UAV’s health more closely when its environment reduced the UAV’s capabilities. Findings from the second and third studies demonstrated that real-time feedback on a machine’s estimated accuracy facilitates trust specificity and effective monitoring. Specifically, the second study compared visual and auditory representations of system confidence and showed that the choice of display depends on the intended domain of use: auditory confidence displays are preferable to visual indications in environments that suffer from visual data overload, as the former avoid resource competition and support time sharing. The third study compared two different visual representations (hue- versus salience-based) of system confidence and examined the impact of framing a machine’s estimated accuracy as confidence or uncertainty. Indicating a machine’s uncertainty (rather than confidence) in its performance led to closer monitoring of UAVs and smaller trust decrements when the machine’s estimated accuracy was low. Also, participants were better able to distinguish between levels of confidence and uncertainty with a hue-based representation that employed a familiar color scheme (red-yellow-green) than with a monochrome salience-based representation. At a conceptual level, this research adds to the knowledge base on trust, transparency, and attention management in supervisory control and human-machine teaming in high-tempo, complex environments. This line of research also makes significant contributions to the development and validation of subjective and eye-tracking-based methods for assessing trust in technology. Finally, from an applied perspective, the findings can inform the design of training and interfaces to support the safe adoption and operation of human-machine systems in a wide range of safety-critical domains.
dc.language.iso: en_US
dc.subject: trust in automation
dc.subject: supervisory control
dc.subject: experiential learning
dc.subject: attentional processes
dc.subject: shared mental models
dc.subject: human machine teaming
dc.title: Supporting Trust Calibration and Attention Management in Human-Machine Teams Through Training and Real-Time Feedback on Estimated Performance
dc.type: Thesis
dc.description.thesisdegreename: PhD [en_US]
dc.description.thesisdegreediscipline: Robotics
dc.description.thesisdegreegrantor: University of Michigan, Horace H. Rackham School of Graduate Studies
dc.contributor.committeemember: Atkins, Ella Marie
dc.contributor.committeemember: Sarter, Nadine Barbara
dc.contributor.committeemember: Stirling, Leia
dc.contributor.committeemember: Yang, Xi (Jessie)
dc.subject.hlbsecondlevel: Aerospace Engineering
dc.subject.hlbsecondlevel: Industrial and Operations Engineering
dc.subject.hlbtoplevel: Engineering
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/172684/1/klieberm_1.pdf
dc.identifier.doi: https://dx.doi.org/10.7302/4713
dc.identifier.orcid: 0000-0002-3136-2050
dc.identifier.name-orcid: Lieberman, Kevin; 0000-0002-3136-2050 [en_US]
dc.working.doi: 10.7302/4713 [en]
dc.owningcollname: Dissertations and Theses (Ph.D. and Master's)

