Paging Dr. JARVIS! Do people accept risk management advice from artificial intelligence in consequential decision contexts?
dc.contributor.author | Larkin, Connor | |
dc.contributor.advisor | Arvai, Joseph | |
dc.date.accessioned | 2020-12-08T14:51:23Z | |
dc.date.available | NO_RESTRICTION | en_US |
dc.date.available | 2020-12-08T14:51:23Z | |
dc.date.issued | 2020-12 | |
dc.date.submitted | 2020 | |
dc.identifier.uri | https://hdl.handle.net/2027.42/163662 | |
dc.description.abstract | Artificial intelligence (AI), a branch of computer science based upon algorithms that can analyze data and make decisions autonomously, is becoming increasingly prevalent in the technology that powers modern society. Relatively little research has examined how humans modify their judgments in response to their interactions with AI. Our research explores how people respond to different types of risk management advice received from AI vs. a human expert in two contexts where AI is commonly deployed: medicine and finance. Through online studies with representative samples of Americans, we first find that participants generally prefer to receive medical and financial risk management advice from humans over AI. In two follow-up studies, we presented participants with a hypothetical medical or financial risk and asked them to make an initial decision—to address the risk immediately or to wait for more information—and to rate their confidence in this decision. Next, participants were informed that either a human expert or AI had analyzed their case and recommended either immediate risk management action or a wait-and-see approach. Participants then made a final decision using the same response scale as before. We compared participants’ initial and final decisions, examining the extent to which participants updated their decisions upon receiving the recommendation as a function of the recommendation itself and its source. We find that participants updated their decisions to a greater degree in response to recommendations from human experts as compared to AI, but the magnitude of this effect differed by context. | en_US |
dc.language.iso | en_US | en_US |
dc.subject | artificial intelligence | en_US |
dc.subject | decision making | en_US |
dc.subject | risk | en_US |
dc.subject | medicine | en_US |
dc.title | Paging Dr. JARVIS! Do people accept risk management advice from artificial intelligence in consequential decision contexts? | en_US |
dc.type | Thesis | en_US |
dc.description.thesisdegreename | Master of Science (MS) | en_US |
dc.description.thesisdegreediscipline | School for Environment and Sustainability | en_US |
dc.description.thesisdegreegrantor | University of Michigan | en_US |
dc.contributor.committeemember | Drummond, Caitlin | |
dc.identifier.uniqname | cplarkin | en_US |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/163662/1/Larkin, Connor thesis.pdf | |
dc.owningcollname | Dissertations and Theses (Ph.D. and Master's) |