Where are the Humans in Human-AI Interaction: The Missing Human-Centered Perspective on Interpretability Tools for Machine Learning

dc.contributor.author: Kaur, Harmanpreet
dc.date.accessioned: 2023-09-22T15:32:59Z
dc.date.available: 2023-09-22T15:32:59Z
dc.date.issued: 2023
dc.date.submitted: 2023
dc.identifier.uri: https://hdl.handle.net/2027.42/177953
dc.description.abstract: This dissertation aims to provide a richer understanding of the extent to which people understand complex AI and ML outputs and reasoning, what influences their understanding, and how we can continue to enhance it going forward. AI- and ML-based systems are now routinely deployed in real-world settings, including sensitive domains like criminal justice, healthcare, finance, and public policy. Given their rapidly growing ubiquity, understanding how AI and ML work is a prerequisite for responsibly designing, deploying, and using these systems. With interpretability and explainability approaches, these systems can offer explanations for their outputs to aid human understanding. Though these approaches rely on guidelines for how humans explain things to each other, they ultimately solve for improving an artifact (e.g., an explanation or explanation system). My dissertation makes the argument that helping people understand AI and ML is as much a human problem as a technical one. Detailing this vital human-centered piece, I present work that shows that current interpretability and explainability tools do not meet their intended goals for a key stakeholder, ML practitioners, who end up either over- or under-utilizing these tools. Investigating the reasons behind this behavior, I apply the cognitive model of bounded rationality to the human-machine setting. Under this model of decision-making, people select plausible options based on their prior heuristics rather than internalizing all relevant information. I find significant evidence showing that interpretability tools exacerbate the application of bounded rationality. As a solution, I present a new framework for re-imagining interpretability and explainability based on sensemaking theory from organizational studies. This Sensible AI framework prescribes design guidelines grounded in nuances of human cognition—facets of the individual and their environmental, social, and organizational context.
dc.language.iso: en_US
dc.subject: Interpretable Machine Learning
dc.subject: Explainable Artificial Intelligence
dc.subject: Sensemaking
dc.subject: Bounded Rationality in ML
dc.title: Where are the Humans in Human-AI Interaction: The Missing Human-Centered Perspective on Interpretability Tools for Machine Learning
dc.type: Thesis
dc.description.thesisdegreename: PhD
dc.description.thesisdegreediscipline: Info & Comp Sci & Engin PhD
dc.description.thesisdegreegrantor: University of Michigan, Horace H. Rackham School of Graduate Studies
dc.contributor.committeemember: Gilbert, Eric Edmund
dc.contributor.committeemember: Lampe, Cliff
dc.contributor.committeemember: Adar, Eytan
dc.contributor.committeemember: Ackerman, Mark
dc.contributor.committeemember: Iqbal, Shamsi
dc.contributor.committeemember: Vaughan, Jennifer Wortman
dc.subject.hlbsecondlevel: Computer Science
dc.subject.hlbsecondlevel: Information and Library Science
dc.subject.hlbtoplevel: Engineering
dc.subject.hlbtoplevel: Social Sciences
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/177953/1/harmank_1.pdf
dc.identifier.doi: https://dx.doi.org/10.7302/8410
dc.identifier.orcid: 0009-0009-8239-937X
dc.identifier.name-orcid: Kaur, Harmanpreet; 0009-0009-8239-937X
dc.working.doi: 10.7302/8410
dc.owningcollname: Dissertations and Theses (Ph.D. and Master's)

