Three Strikes and You are Out!: The Impacts of Multiple Human-Robot Trust Violations and Repairs on Robot Trustworthiness
dc.contributor.author | Esterwood, Connor | |
dc.contributor.author | Robert, Lionel Jr | |
dc.date.accessioned | 2023-01-19T14:06:55Z | |
dc.date.available | 2023-01-19T14:06:55Z | |
dc.date.issued | 2023-01-19 | |
dc.identifier.citation | Esterwood, C. and Robert, L. P. (2023). Three Strikes and You are Out!: The Impacts of Multiple Human-Robot Trust Violations and Repairs on Robot Trustworthiness, Computers in Human Behavior, forthcoming. | en_US |
dc.identifier.uri | https://hdl.handle.net/2027.42/175560 | en |
dc.description.abstract | Robots, like human co-workers, can make mistakes that violate a human's trust in them. When mistakes happen, humans can view robots as less trustworthy, which ultimately decreases their trust in them. Trust repair strategies can be employed to mitigate the negative impacts of these trust violations. Yet, it is not clear whether such strategies can fully repair trust, or how effective they remain after repeated trust violations. To address these shortcomings, this study examined the impact of four distinct trust repair strategies (apologies, denials, explanations, and promises) on overall trustworthiness and its sub-dimensions (ability, benevolence, and integrity) after repeated trust violations. To accomplish this, a between-subjects experiment was conducted in which participants worked with a robot co-worker to accomplish a task. The robot violated the participant's trust and then provided a particular repair strategy. Results indicated that after repeated trust violations, none of the repair strategies fully repaired trustworthiness or two of its sub-dimensions, ability and integrity. In addition, after repeated interactions, apologies, explanations, and promises appeared to function similarly to one another, while denials were consistently the least effective at repairing trustworthiness and its sub-dimensions. In sum, this paper contributes both of these original findings to the literature on human-robot trust repair. | en_US |
dc.description.sponsorship | University of Michigan | en_US |
dc.description.sponsorship | Emerging Technologies Group (DMC) | en_US |
dc.language.iso | en_US | en_US |
dc.publisher | Computers in Human Behavior | en_US |
dc.subject | Human-robot interaction | en_US |
dc.subject | Trust repair | en_US |
dc.subject | Robot error recovery | en_US |
dc.subject | Robot trustworthiness | en_US |
dc.subject | Robot benevolence | en_US |
dc.subject | Robot integrity | en_US |
dc.subject | Robot ability | en_US |
dc.subject | Robot trust repair | en_US |
dc.subject | Human-robot trust repair | en_US |
dc.subject | Human-robot teaming | en_US |
dc.subject | Human-robot collaboration | en_US |
dc.subject | Robot co-worker | en_US |
dc.subject | Apologies | en_US |
dc.subject | Robot apologies | en_US |
dc.subject | Explanations | en_US |
dc.subject | Robot explanations | en_US |
dc.subject | Promises | en_US |
dc.subject | Robot promises | en_US |
dc.subject | Denials | en_US |
dc.subject | Robot denials | en_US |
dc.subject | Warehouse robot interaction simulator | en_US |
dc.subject | Artificial intelligence trust repair | en_US |
dc.subject | Artificial intelligence apologies | en_US |
dc.subject | Artificial intelligence explanations | en_US |
dc.subject | Artificial intelligence promises | en_US |
dc.subject | Artificial intelligence denials | en_US |
dc.subject | Artificial intelligence trustworthiness | en_US |
dc.title | Three Strikes and You are Out!: The Impacts of Multiple Human-Robot Trust Violations and Repairs on Robot Trustworthiness | en_US |
dc.type | Article | en_US |
dc.subject.hlbsecondlevel | Information Science | |
dc.subject.hlbtoplevel | Social Sciences | |
dc.description.peerreviewed | Peer Reviewed | en_US |
dc.contributor.affiliationum | Information, School of | en_US |
dc.contributor.affiliationum | Robotics Institute | en_US |
dc.contributor.affiliationum | Robotics Department | en_US |
dc.contributor.affiliationumcampus | Ann Arbor | en_US |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/175560/1/Esterwood and Robert 2023 (CHB).pdf | |
dc.identifier.doi | https://dx.doi.org/10.7302/6774 | |
dc.identifier.source | Computers in Human Behavior | en_US |
dc.identifier.orcid | 0000-0002-1410-2601 | en_US |
dc.identifier.orcid | 0000-0002-2685-6435 | en_US |
dc.description.filedescription | Description of Esterwood and Robert 2023 (CHB).pdf : Preprint | |
dc.description.depositor | SELF | en_US |
dc.identifier.name-orcid | Robert, Lionel P.; 0000-0002-1410-2601 | en_US |
dc.identifier.name-orcid | Esterwood, Connor; 0000-0002-2685-6435 | en_US |
dc.working.doi | 10.7302/6774 | en_US |
dc.owningcollname | Information, School of (SI) |