Unfairness Detection and Evaluation in Data-driven Decision-making Algorithms
dc.contributor.author | Li, Jinyang | |
dc.date.accessioned | 2025-05-12T17:35:19Z | |
dc.date.available | 2025-05-12T17:35:19Z | |
dc.date.issued | 2025 | |
dc.date.submitted | 2025 | |
dc.identifier.uri | https://hdl.handle.net/2027.42/197102 | |
dc.description.abstract | Recent years have witnessed a surge in the application of data-driven algorithms to assist human decision-making across various sectors, including industry, government, and non-profit organizations. Many of these applications significantly impact our daily lives. Concerns are growing about the potential biases that may be present in the data, amplified in the algorithmic processes, or introduced by the algorithms themselves. Such biases have been observed to result in injustices, particularly against specific demographic groups, highlighting the need for careful examination and correction. These concerns have given rise to a recent body of literature, which has focused primarily on biases in alphanumeric relational tables and consequent biases in labels applied in a classification task (such as whom to recruit). This thesis focuses on developing efficient algorithms to detect biases within richer, more complex datasets and on assessing the fairness of outcomes in algorithmic tasks beyond simple classification. Specifically, the thesis addresses the following problems:
Query Refinement for Diversity Constraints: Relational queries frequently define candidate pools based on available data sources. This research develops techniques to minimally modify these relational queries, ensuring that the outcomes meet specified diversity constraints for data groups in the result set. The objective is to select diverse candidate pools without compromising the core selection criteria.
Under-representation in Ranking Evaluation: This thesis introduces methods to detect hidden under-representation in algorithmic rankings without pre-defined protected groups. In particular, it identifies demographic groups that are disproportionately under-represented in top-ranked positions.
Fairness Evaluation in Data Streams: This thesis addresses the overlooked issue of fairness measurement in dynamic environments by proposing algorithms to monitor real-time fairness metrics with time decay for classification tasks in data streams. This methodology provides a continually updated reflection of fairness, capturing evolving biases effectively. | |
dc.language.iso | en_US | |
dc.subject | Algorithmic fairness | |
dc.subject | Data-driven decision-making algorithms | |
dc.subject | Query refinement | |
dc.subject | Under-representation | |
dc.subject | Data stream fairness | |
dc.title | Unfairness Detection and Evaluation in Data-driven Decision-making Algorithms | |
dc.type | Thesis | |
dc.description.thesisdegreename | PhD | |
dc.description.thesisdegreediscipline | Computer Science & Engineering | |
dc.description.thesisdegreegrantor | University of Michigan, Horace H. Rackham School of Graduate Studies | |
dc.contributor.committeemember | Jagadish, H V | |
dc.contributor.committeemember | Jacobs, Abigail Zoe | |
dc.contributor.committeemember | Fish, Benjamin | |
dc.contributor.committeemember | Makar, Maggie | |
dc.subject.hlbsecondlevel | Computer Science | |
dc.subject.hlbtoplevel | Engineering | |
dc.contributor.affiliationumcampus | Ann Arbor | |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/197102/1/jinyli_1.pdf | |
dc.identifier.doi | https://dx.doi.org/10.7302/25528 | |
dc.identifier.orcid | 0000-0002-9203-2688 | |
dc.identifier.name-orcid | Li, Jinyang; 0000-0002-9203-2688 | en_US |
dc.working.doi | 10.7302/25528 | en |
dc.owningcollname | Dissertations and Theses (Ph.D. and Master's) |
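The third problem in the abstract above mentions monitoring fairness metrics with time decay over a data stream. As a minimal illustrative sketch only (not the thesis's actual algorithm), the following Python snippet shows one way such a monitor could work, assuming exponential decay of per-group counts and a demographic-parity gap as the tracked metric; all names and parameters here are hypothetical.

```python
# Illustrative sketch: a time-decayed fairness monitor for a classification stream.
# Assumptions (not from the thesis): binary predictions, exponential decay per step,
# and decayed demographic-parity gap as the fairness metric.

from dataclasses import dataclass


@dataclass
class DecayedCounts:
    positives: float = 0.0   # decayed count of positive predictions
    total: float = 0.0       # decayed count of all predictions


class StreamFairnessMonitor:
    """Keeps per-group counts that decay by `alpha` each step,
    so recent decisions weigh more than older ones."""

    def __init__(self, alpha: float = 0.99):
        self.alpha = alpha
        self.groups: dict[str, DecayedCounts] = {}

    def update(self, group: str, predicted_positive: bool) -> None:
        # Decay every group's counts, then record the new observation.
        for counts in self.groups.values():
            counts.positives *= self.alpha
            counts.total *= self.alpha
        counts = self.groups.setdefault(group, DecayedCounts())
        counts.total += 1.0
        if predicted_positive:
            counts.positives += 1.0

    def demographic_parity_gap(self) -> float:
        # Difference between the highest and lowest decayed positive-prediction rates.
        rates = [c.positives / c.total for c in self.groups.values() if c.total > 0]
        return max(rates) - min(rates) if len(rates) > 1 else 0.0


# Usage: feed (group, decision) pairs as they arrive; read the gap at any time.
monitor = StreamFairnessMonitor(alpha=0.98)
for group, decision in [("A", True), ("B", False), ("A", True), ("B", True)]:
    monitor.update(group, decision)
print(round(monitor.demographic_parity_gap(), 3))
```

Because old counts shrink geometrically, the reported gap reflects recent behavior rather than the full history, which is the kind of continually updated view of fairness the abstract describes.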