Where are the Humans in Human-AI Interaction: The Missing Human-Centered Perspective on Interpretability Tools for Machine Learning
dc.contributor.author | Kaur, Harmanpreet | |
dc.date.accessioned | 2023-09-22T15:32:59Z | |
dc.date.available | 2023-09-22T15:32:59Z | |
dc.date.issued | 2023 | |
dc.date.submitted | 2023 | |
dc.identifier.uri | https://hdl.handle.net/2027.42/177953 | |
dc.description.abstract | This dissertation aims to provide a richer understanding of the extent to which people understand complex AI and ML outputs and reasoning, what influences their understanding, and how we can continue to enhance it going forward. AI- and ML-based systems are now routinely deployed in real-world settings, including sensitive domains like criminal justice, healthcare, finance, and public policy. Given their rapidly growing ubiquity, understanding how AI and ML work is a prerequisite for responsibly designing, deploying, and using these systems. With interpretability and explainability approaches, these systems can offer explanations for their outputs to aid human understanding. Though these approaches rely on guidelines for how humans explain things to each other, they ultimately focus on improving an artifact (e.g., an explanation or explanation system). My dissertation argues that helping people understand AI and ML is as much a human problem as a technical one. Detailing this vital human-centered piece, I present work showing that current interpretability and explainability tools do not meet their intended goals for a key stakeholder, ML practitioners, who end up either over- or under-utilizing these tools. Investigating the reasons behind this behavior, I apply the cognitive model of bounded rationality to the human-machine setting. Under this model of decision-making, people select plausible options based on their prior heuristics rather than internalizing all relevant information. I find significant evidence that interpretability tools exacerbate the application of bounded rationality. As a solution, I present a new framework for re-imagining interpretability and explainability based on sensemaking theory from organizational studies. This Sensible AI framework prescribes design guidelines grounded in nuances of human cognition: facets of the individual and their environmental, social, and organizational context. | |
dc.language.iso | en_US | |
dc.subject | Interpretable Machine Learning | |
dc.subject | Explainable Artificial Intelligence | |
dc.subject | Sensemaking | |
dc.subject | Bounded Rationality in ML | |
dc.title | Where are the Humans in Human-AI Interaction: The Missing Human-Centered Perspective on Interpretability Tools for Machine Learning | |
dc.type | Thesis | |
dc.description.thesisdegreename | PhD | en_US |
dc.description.thesisdegreediscipline | Info & Comp Sci & Engin PhD | |
dc.description.thesisdegreegrantor | University of Michigan, Horace H. Rackham School of Graduate Studies | |
dc.contributor.committeemember | Gilbert, Eric Edmund | |
dc.contributor.committeemember | Lampe, Cliff | |
dc.contributor.committeemember | Adar, Eytan | |
dc.contributor.committeemember | Ackerman, Mark | |
dc.contributor.committeemember | Iqbal, Shamsi | |
dc.contributor.committeemember | Vaughan, Jennifer Wortman | |
dc.subject.hlbsecondlevel | Computer Science | |
dc.subject.hlbsecondlevel | Information and Library Science | |
dc.subject.hlbtoplevel | Engineering | |
dc.subject.hlbtoplevel | Social Sciences | |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/177953/1/harmank_1.pdf | |
dc.identifier.doi | https://dx.doi.org/10.7302/8410 | |
dc.identifier.orcid | 0009-0009-8239-937X | |
dc.identifier.name-orcid | Kaur, Harmanpreet; 0009-0009-8239-937X | en_US |
dc.working.doi | 10.7302/8410 | en |
dc.owningcollname | Dissertations and Theses (Ph.D. and Master's) |