Improving Collaboration Between Drivers and Automated Vehicles with Trust Processing Methods
Azevedo Sa, Hebert
2021
Abstract
Trust has gained attention in the Human-Robot Interaction (HRI) field, as it is considered an antecedent of people's reliance on machines. In general, people are likely to rely on and use machines they trust, and to refrain from using machines they do not trust. Recent advances in robotic perception technologies open paths for the development of machines that can be aware of people's trust by observing their behaviors. This dissertation explores the role of trust in the interactions between humans and robots, particularly Automated Vehicles (AVs). Novel methods and models are proposed for perceiving and processing drivers' trust in AVs and for determining both humans' natural trust and robots' artificial trust. Two high-level problems are addressed: (1) avoiding or reducing miscalibrations of drivers' trust in AVs, and (2) using trust to dynamically allocate tasks between a human and a robot that collaborate.

A complete solution is proposed for the problem of avoiding or reducing trust miscalibrations. This solution combines methods for estimating and influencing drivers' trust through interactions with the AV. Three main contributions stem from it: (i) the characterization of risk factors that affect drivers' trust in AVs, which provided theoretical evidence for the development of a linear model of drivers' trust in AVs; (ii) a new method for real-time trust estimation, which leverages that linear model in a Kalman-filter-based approach that computes numerical trust estimates from drivers' behavioral measurements; and (iii) a new method for trust calibration, which identifies trust miscalibration instances by comparing drivers' trust in the AV with the AV's capabilities, and triggers messages from the AV to the driver. The results show that these messages effectively encourage drivers who undertrust the AV's capabilities and warn drivers who overtrust them.

Although a trust-based solution for dynamically allocating tasks between a human and a robot (i.e., the second high-level problem addressed in this dissertation) remains an open problem, this dissertation takes a step in that direction. Its fourth contribution is a unified bi-directional model for predicting natural and artificial trust. This model is based on mathematical representations of both the trustee agent's capabilities and the capabilities required to execute a task. Trust emerges from comparisons between the two, roughly following this logic: if a trustee agent's capabilities exceed the requirements of a certain task, the agent can be highly trusted to execute that task; conversely, if the agent's capabilities fall short of the task's requirements, trust should be low. In this model, the agent's capabilities are represented by random variables that are dynamically updated over interactions between the trustor and the trustee, whenever the trustee succeeds or fails in executing a task. These capability representations allow for the numerical computation of a human's or a robot's trust, represented as the probability that a given trustee agent will successfully execute a given task.
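Two of the contributions summarized above are algorithmic enough to sketch in code. The first sketch below illustrates a Kalman-filter-based trust estimator; the scalar linear-Gaussian model, the class and variable names, and all coefficient values are assumptions made for illustration, not the dissertation's actual formulation.

    # Hypothetical scalar Kalman filter for real-time trust estimation.
    # Assumed model (illustrative): trust evolves as t_k = a*t_{k-1} + b*u_k + w,
    # and a scalar behavioral measurement z_k = h*t_k + v is observed
    # (e.g., a reliance score derived from driver behavior).
    class TrustKalmanFilter:
        def __init__(self, a=1.0, b=0.05, h=1.0, q=1e-3, r=1e-1):
            self.a, self.b, self.h = a, b, h  # model coefficients (assumed values)
            self.q, self.r = q, r             # process / measurement noise variances
            self.t, self.p = 0.5, 1.0         # initial trust estimate and its variance

        def step(self, u, z):
            # Predict: propagate the trust estimate with the linear model.
            t_pred = self.a * self.t + self.b * u
            p_pred = self.a * self.p * self.a + self.q
            # Update: correct the prediction with the behavioral measurement z.
            k = p_pred * self.h / (self.h * p_pred * self.h + self.r)
            self.t = t_pred + k * (z - self.h * t_pred)
            self.p = (1.0 - k * self.h) * p_pred
            return self.t

The second sketch mirrors the logic of the bi-directional trust model: the trustee's capability is a random variable updated on task successes and failures, and trust in a task is the probability that the capability exceeds the task's requirement. The Beta-Bernoulli belief and all names here are likewise illustrative assumptions, not the dissertation's actual model.

    from scipy.stats import beta

    # Hypothetical capability-based trust model. Capability is Beta(alpha, b)
    # on [0, 1]; a task is summarized by a scalar requirement in [0, 1].
    class CapabilityTrustModel:
        def __init__(self, alpha=1.0, b=1.0):
            self.alpha, self.b = alpha, b

        def update(self, success):
            # Bayesian update of the capability belief after a task outcome.
            if success:
                self.alpha += 1.0
            else:
                self.b += 1.0

        def trust(self, requirement):
            # Trust = P(capability > requirement) under the current belief.
            return float(beta.sf(requirement, self.alpha, self.b))

Under this sketch, after several successes the trust in a low-requirement task approaches 1 while trust in a high-requirement task remains moderate, reproducing the abstract's logic that trust should be high only when the trustee's capabilities exceed the task's requirements.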
Subjects
Driver-Automated Vehicle Trust
Types
Thesis