Show simple item record

Unfairness Detection and Evaluation in Data-driven Decision-making Algorithms

dc.contributor.author: Li, Jinyang
dc.date.accessioned: 2025-05-12T17:35:19Z
dc.date.available: 2025-05-12T17:35:19Z
dc.date.issued: 2025
dc.date.submitted: 2025
dc.identifier.uri: https://hdl.handle.net/2027.42/197102
dc.description.abstract: Recent years have witnessed a surge in the application of data-driven algorithms to assist human decision-making across various sectors, including industry, government, and non-profit organizations. Many of these applications significantly affect our daily lives, and concerns are growing about biases that may be present in the data, amplified in the algorithmic processes, or introduced by the algorithms themselves. Such biases have been observed to result in injustices, particularly against specific demographic groups, highlighting the need for careful examination and correction. These concerns have given rise to a recent body of literature, which has focused primarily on biases in alphanumeric relational tables and the consequent biases in labels produced by classification tasks (such as whom to recruit). This thesis develops efficient algorithms to detect biases within richer, more complex datasets and assesses the fairness of outcomes in algorithmic tasks beyond simple classification. Specifically, the thesis addresses the following problems:

Query Refinement for Diversity Constraints: Relational queries frequently define candidate pools based on available data sources. This research develops techniques to minimally modify these relational queries so that their results satisfy specified diversity constraints for data groups in the result set. The objective is to select diverse candidate pools without compromising the core selection criteria.

Under-representation in Ranking Evaluation: This thesis introduces methods to detect hidden under-representation in algorithmic rankings without pre-defined protected groups. In particular, the thesis identifies demographic groups disproportionately under-represented in top-ranked positions.

Fairness Evaluation in Data Streams: This thesis addresses the overlooked issue of fairness measurement in dynamic environments by proposing algorithms to monitor real-time fairness metrics with time decay for classification tasks in data streams. This methodology provides a continually updated reflection of fairness, capturing evolving biases effectively.
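To illustrate the kind of time-decayed fairness monitoring the abstract describes, here is a minimal toy sketch (not the thesis's actual algorithm): it keeps exponentially decayed counts of positive predictions per group as stream items arrive and reports the decayed statistical parity gap. The class name, decay scheme, and parity measure are illustrative assumptions.

```python
# Toy sketch of time-decayed fairness monitoring over a data stream.
# Assumed design (not from the thesis): exponentially decayed per-group
# counts, with the gap between decayed positive rates as the fairness metric.

from collections import defaultdict

class DecayedParityMonitor:
    def __init__(self, decay=0.99):
        self.decay = decay               # multiplicative aging applied per arrival
        self.pos = defaultdict(float)    # decayed count of positive outcomes per group
        self.total = defaultdict(float)  # decayed count of all outcomes per group

    def update(self, group, positive):
        # Age every existing count, then record the new observation.
        for counts in (self.pos, self.total):
            for g in counts:
                counts[g] *= self.decay
        self.total[group] += 1.0
        if positive:
            self.pos[group] += 1.0

    def parity_gap(self):
        # Max difference in decayed positive rates across observed groups.
        rates = [self.pos.get(g, 0.0) / self.total[g]
                 for g in self.total if self.total[g] > 0]
        return max(rates) - min(rates) if rates else 0.0

monitor = DecayedParityMonitor(decay=0.9)
for group, positive in [("a", 1), ("b", 0), ("a", 1), ("b", 0)]:
    monitor.update(group, positive)
print(monitor.parity_gap())  # → 1.0 (group "a" always positive, "b" never)
```

Because the counts are re-aged on every arrival, old decisions fade from the metric, so the monitor reflects recent behavior rather than the full history — the "continually updated reflection of fairness" the abstract refers to.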
dc.language.iso: en_US
dc.subject: Algorithmic fairness
dc.subject: Data-driven decision-making algorithms
dc.subject: Query refinement
dc.subject: Under-representation
dc.subject: Data stream fairness
dc.title: Unfairness Detection and Evaluation in Data-driven Decision-making Algorithms
dc.type: Thesis
dc.description.thesisdegreename: PhD
dc.description.thesisdegreediscipline: Computer Science & Engineering
dc.description.thesisdegreegrantor: University of Michigan, Horace H. Rackham School of Graduate Studies
dc.contributor.committeemember: Jagadish, H V
dc.contributor.committeemember: Jacobs, Abigail Zoe
dc.contributor.committeemember: Fish, Benjamin
dc.contributor.committeemember: Makar, Maggie
dc.subject.hlbsecondlevel: Computer Science
dc.subject.hlbtoplevel: Engineering
dc.contributor.affiliationumcampus: Ann Arbor
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/197102/1/jinyli_1.pdf
dc.identifier.doi: https://dx.doi.org/10.7302/25528
dc.identifier.orcid: 0000-0002-9203-2688
dc.identifier.name-orcid: Li, Jinyang; 0000-0002-9203-2688
dc.working.doi: 10.7302/25528
dc.owningcollname: Dissertations and Theses (Ph.D. and Master's)

