Systematically Predicting and Preventing Overreliance Through the Human-AI-System Concordance (HASC) Matrix and Cognitive Interventions
Doh, Wonji
2025-04-26
Abstract
The integration of Artificial Intelligence (AI) into high-stakes decision processes has fundamentally transformed how consequential judgments are made across healthcare, finance, and judicial domains. While Human-AI Collaboration (HAC) promises to combine AI's computational power with human intuition and ethical judgment to achieve superior outcomes, a persistent challenge undermines this potential: overreliance, the tendency for humans to uncritically accept AI recommendations despite contradictory evidence. This barrier prevents effective HAC and can lead to errors exceeding those made by either humans or AI systems operating independently. This thesis addresses the challenge through a combined theoretical and empirical approach designed to predict when overreliance occurs and to validate effective interventions for mitigating it.

Study I introduces the Human-AI-System Concordance (HASC) Matrix, a novel taxonomic framework that extends beyond the binary limitations of Signal Detection Theory (SDT) by systematically mapping the triadic relationship between Ground Truth, AI Decision, and Human Decision. The taxonomy yields eight distinct collaborative scenarios and pinpoints two conditions in which overreliance manifests: Blind Trust, where humans follow incorrect AI recommendations that contradict the ground truth (e.g., clinicians accepting an AI's false-negative diagnoses and missing tumors in cancer patients), and Misguided Consensus, where humans accept an AI's false-positive assessments that likewise contradict the ground truth (e.g., clinicians agreeing with an AI's misidentification of non-cancerous conditions). The HASC Matrix provides researchers and practitioners with the diagnostic precision needed to identify overreliance vulnerability points in AI-assisted collaborative systems.

Building on this theoretical foundation, Study II empirically validates intervention strategies through a controlled experiment in which 36 participants evaluated AI-assisted information assessment tasks. The study demonstrates that Cognitive Forcing Functions (CFFs), particularly Active Verification, which requires users to answer content-specific questions before accepting AI recommendations, significantly enhance decision accuracy and reduce overreliance compared to alternative approaches. The study further illuminates how individual differences in cognitive motivation (measured by Need for Cognition), task complexity, and education level moderate intervention effectiveness and shape reliance patterns.

This integrated framework bridges theoretical understanding and practical application by providing both diagnostic tools to identify overreliance vulnerabilities and validated interventions to address them. The research has direct implications for system design across domains, from clinical decision support to judicial risk assessment and financial oversight, enabling the creation of collaborative interfaces that preserve meaningful human agency while effectively leveraging AI capabilities. By ensuring that AI truly enhances rather than supplants human judgment, this work establishes a foundation for responsible human-AI partnerships that can optimize decision quality while maintaining human oversight in contexts where judgments can profoundly affect lives, fairness, and opportunity.
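The abstract names only two of the eight HASC cells (Blind Trust and Misguided Consensus). As a rough illustration of how the triadic taxonomy partitions decisions, the Python sketch below enumerates all 2 x 2 x 2 combinations of Ground Truth, AI Decision, and Human Decision and flags the two overreliance cells; the remaining labels are hypothetical placeholders, not the thesis's own terminology.

from itertools import product

# Illustrative sketch of the HASC (Human-AI-System Concordance) Matrix: each cell
# is one combination of Ground Truth, AI Decision, and Human Decision. Only
# "Blind Trust" and "Misguided Consensus" come from the abstract; the other
# labels are placeholders for the remaining scenarios.

def hasc_cell(ground_truth: bool, ai_decision: bool, human_decision: bool) -> str:
    """Classify one triad; True means a positive finding (e.g., tumor present)."""
    ai_correct = ai_decision == ground_truth
    human_follows_ai = human_decision == ai_decision
    if not ai_correct and human_follows_ai:
        # The two overreliance cells highlighted in Study I.
        if ground_truth and not ai_decision:
            return "Blind Trust (human accepts AI false negative)"
        return "Misguided Consensus (human accepts AI false positive)"
    if ai_correct and human_follows_ai:
        return "Aligned correct decision (placeholder label)"
    if ai_correct and not human_follows_ai:
        return "Underreliance on correct AI (placeholder label)"
    return "Appropriate override of incorrect AI (placeholder label)"

if __name__ == "__main__":
    for gt, ai, human in product([True, False], repeat=3):
        print(f"GT={gt!s:5} AI={ai!s:5} Human={human!s:5} -> {hasc_cell(gt, ai, human)}")

Running the loop prints all eight scenarios, making it easy to see that exactly two of them, those where the human matches an incorrect AI decision, correspond to overreliance.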
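Similarly, the Active Verification intervention described in Study II can be pictured as a gate that withholds the option to act on the AI's recommendation until the user answers a content-specific question. The sketch below is an illustrative reconstruction under that assumption; the class name VerificationGate, the grading callback, and the sample question are invented for the example and are not the study's actual materials.

from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of an Active Verification cognitive forcing function: the
# interface keeps the accept/override controls locked until the user answers a
# content-specific question about the material being assessed.

@dataclass
class VerificationGate:
    question: str                       # content-specific question shown first
    is_correct: Callable[[str], bool]   # grader for the user's answer
    unlocked: bool = False

    def submit_answer(self, answer: str) -> bool:
        """Unlock the decision controls only after a correct answer."""
        self.unlocked = self.is_correct(answer)
        return self.unlocked

    def final_decision(self, ai_label: str, human_label: str) -> str:
        """Record the final call; AI advice cannot be acted on while locked."""
        if not self.unlocked:
            raise RuntimeError("answer the verification question before deciding")
        relied_on_ai = human_label == ai_label
        return f"{human_label} (followed AI: {relied_on_ai})"

# Usage: the user must recall a detail from the source before acting on AI advice.
gate = VerificationGate(
    question="Which source does the article cite for its main claim?",
    is_correct=lambda a: "who" in a.lower(),   # toy grader for this sketch only
)
gate.submit_answer("The WHO 2023 report")
print(gate.final_decision(ai_label="misleading", human_label="misleading"))

The design intent is simply that the verification step forces engagement with the underlying content, so accepting the AI's recommendation requires more than a single click.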
Subjects
Human-AI Collaboration; Overreliance; Trust Calibration; Cognitive Forcing Functions; Decision Support; HASC Matrix
Types
Thesis