The theory of mind and human–robot trust repair

dc.contributor.authorEsterwood, C
dc.contributor.authorRobert, LP
dc.coverage.spatialEngland
dc.date.accessioned2024-01-15T13:12:58Z
dc.date.available2024-01-15T13:12:58Z
dc.date.issued2023-12-01
dc.identifier.issn2045-2322
dc.identifier.urihttps://www.ncbi.nlm.nih.gov/pubmed/37337033
dc.identifier.urihttps://hdl.handle.net/2027.42/192038
dc.description.abstractNothing is perfect, and robots can make as many mistakes as any human, which can lead to a decrease in trust in them. However, it is possible for robots to repair a human’s trust after they have made mistakes through various trust repair strategies such as apologies, denials, and promises. To date, findings on the efficacy of these trust repair strategies in the human–robot interaction literature have been mixed. One reason for this might be that humans have different perceptions of a robot’s mind. For example, some repairs may be more effective when humans believe that robots are capable of experiencing emotion. Likewise, other repairs might be more effective when humans believe robots possess intentionality. A key element that determines these beliefs is mind perception. Therefore, understanding how mind perception impacts trust repair may be vital to understanding trust repair in human–robot interaction. To investigate this, we conducted a study involving 400 participants recruited via Amazon Mechanical Turk to determine whether mind perception influenced the effectiveness of three distinct repair strategies. The study employed an online platform where the robot and participant worked in a warehouse to pick and load 10 boxes. The robot made three mistakes over the course of the task and employed either a promise, denial, or apology after each mistake. Participants then rated their trust in the robot before and after each mistake. Results of this study indicated that, overall, individual differences in mind perception are vital considerations when seeking to implement effective apologies and denials between humans and robots.
dc.format.mediumElectronic
dc.languageeng
dc.publisherSpringer Nature
dc.relation.haspartARTN 9877
dc.rightsLicence for published version: Creative Commons Attribution 4.0 International
dc.rights.urihttp://creativecommons.org/licenses/by/4.0/
dc.subjectHumans
dc.subjectTrust
dc.subjectRobotics
dc.subjectTheory of Mind
dc.subjectEmotions
dc.subjectIndividuality
dc.titleThe theory of mind and human–robot trust repair
dc.typeArticle
dc.identifier.pmid37337033
dc.description.bitstreamurlhttp://deepblue.lib.umich.edu/bitstream/2027.42/192038/2/The theory of mind and human-robot trust repair.pdf
dc.identifier.doi10.1038/s41598-023-37032-0
dc.identifier.doihttps://dx.doi.org/10.7302/22039
dc.identifier.sourceScientific Reports
dc.description.versionPublished version
dc.date.updated2024-01-15T13:12:52Z
dc.identifier.orcid0000-0002-2685-6435
dc.identifier.orcid0000-0002-1410-2601
dc.identifier.volume13
dc.identifier.issue1
dc.identifier.startpage9877
dc.identifier.name-orcidEsterwood, C; 0000-0002-2685-6435
dc.identifier.name-orcidRobert, LP; 0000-0002-1410-2601
dc.working.doi10.7302/22039
dc.owningcollnameInformation, School of (SI)


