Dealing with Intransitivity, Non-Convexity, and Algorithmic Bias in Preference Learning

dc.contributor.author: Bower, Amanda Ruth
dc.date.accessioned: 2021-02-04T16:37:57Z
dc.date.available: 2021-02-04T16:37:57Z
dc.date.issued: 2020
dc.date.submitted: 2020
dc.identifier.uri: https://hdl.handle.net/2027.42/166120
dc.description.abstract: Rankings are ubiquitous because they are a natural way to present information to people who are making decisions. There are seemingly countless scenarios where rankings arise, such as deciding whom to hire at a company, determining what movies to watch, purchasing products, understanding human perception, judging science fair projects, and voting for political candidates. In many of these scenarios, the number of items under consideration is prohibitively large, so asking someone to rank all of the choices is essentially impossible. On the other hand, collecting preference data on a small subset of the items is feasible, e.g., collecting answers to "Do you prefer item A or item B?" or "Is item A closer to item B or item C?". Therefore, an important machine learning task is to learn a ranking of the items from this preference data (a brief illustrative sketch of this setting follows the record below). This thesis theoretically and empirically addresses three key challenges of preference learning: intransitivity in preference data, non-convex optimization, and algorithmic bias.

Chapter 2 addresses the challenge of learning a ranking from pairwise comparison data that violates rational choice, for example through intransitivity. Our key observation is that two items compared in isolation from other items may be compared based on only a salient subset of their features. Formalizing this framework, we propose the salient feature preference model and prove a sample complexity result for learning the parameters of our model and the underlying ranking with maximum likelihood estimation.

Chapter 3 addresses the non-convexity of an optimization problem inspired by ordinal embedding, a preference learning task. We aim to understand the landscape, that is, the local and global minimizers, of the non-convex objective, which corresponds to the hinge loss arising from quadratic constraints. Under certain assumptions, we give necessary conditions for non-global local minimizers of our objective and additionally show that, in two dimensions, every local minimizer is a global minimizer.

Chapters 4 and 5 address the challenge of algorithmic bias. We consider training machine learning models that are fair in the sense that their performance is invariant under certain sensitive perturbations of the inputs. For example, the performance of a resume screening system should be invariant under changes to the gender and ethnicity of the applicant. We formalize this notion of algorithmic fairness as a variant of individual fairness. In Chapter 4, we consider classification and develop a distributionally robust optimization approach, SenSR, that enforces this notion of individual fairness during training and provably learns individually fair classifiers. Chapter 5 builds on Chapter 4: we develop a related algorithm, SenSTIR, to train provably individually fair learning-to-rank (LTR) models. The proposed approach ensures that items from minority groups appear alongside similar items from majority groups. This notion of fair ranking is based on the individual fairness definition considered in Chapter 4 for the supervised learning context and is more nuanced than prior fair LTR approaches, which simply provide underrepresented items with a basic level of exposure. The crux of our method is an optimal transport-based regularizer that enforces individual fairness, together with an efficient algorithm for optimizing the regularizer.
dc.language.iso: en_US
dc.subject: algorithmic fairness
dc.subject: ranking
dc.subject: preference learning
dc.subject: algorithmic bias
dc.subject: non-convex optimization
dc.subject: intransitivity
dc.title: Dealing with Intransitivity, Non-Convexity, and Algorithmic Bias in Preference Learning
dc.type: Thesis
dc.description.thesisdegreename: PhD
dc.description.thesisdegreediscipline: Applied and Interdisciplinary Mathematics
dc.description.thesisdegreegrantor: University of Michigan, Horace H. Rackham School of Graduate Studies
dc.contributor.committeemember: Balzano, Laura Kathryn
dc.contributor.committeemember: Strauss, Martin J
dc.contributor.committeemember: Sun, Yuekai
dc.contributor.committeemember: Barvinok, Alexander
dc.subject.hlbsecondlevel: Engineering (General)
dc.subject.hlbsecondlevel: Mathematics
dc.subject.hlbsecondlevel: Science (General)
dc.subject.hlbsecondlevel: Statistics and Numeric Data
dc.subject.hlbtoplevel: Engineering
dc.subject.hlbtoplevel: Science
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/166120/1/amandarg_1.pdf
dc.identifier.doi: https://dx.doi.org/10.7302/43
dc.identifier.orcid: 0000-0002-4497-3088
dc.identifier.name-orcid: Bower, Amanda; 0000-0002-4497-3088
dc.owningcollname: Dissertations and Theses (Ph.D. and Master's)
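
For readers unfamiliar with the pairwise-comparison setting described in the abstract above, the following is a minimal illustrative sketch: it fits the classical Bradley-Terry model, where P(i beats j) = exp(s_i) / (exp(s_i) + exp(s_j)), to "Do you prefer item A or item B?" answers by maximum likelihood with gradient ascent. It is a standard baseline shown only to make the setting concrete, not the salient feature preference model, SenSR, or SenSTIR from the thesis; the function name and data below are hypothetical.

import numpy as np

def fit_bradley_terry(comparisons, n_items, lr=0.1, n_iters=500):
    """Fit Bradley-Terry scores; comparisons is a list of (winner, loser) pairs."""
    scores = np.zeros(n_items)
    for _ in range(n_iters):
        grad = np.zeros(n_items)
        for w, l in comparisons:
            # Probability that the observed winner beats the loser under current scores.
            p = 1.0 / (1.0 + np.exp(scores[l] - scores[w]))
            # Gradient of log P(w beats l) with respect to the two scores involved.
            grad[w] += 1.0 - p
            grad[l] -= 1.0 - p
        scores += lr * grad  # gradient ascent on the log-likelihood
    return scores

# Hypothetical answers: item 0 is consistently preferred to 1, and 1 to 2.
data = [(0, 1), (0, 1), (1, 2), (0, 2), (1, 2), (0, 1)]
scores = fit_bradley_terry(data, n_items=3)
print(np.argsort(-scores))  # recovered ranking, best first: [0 1 2]

Because a Bradley-Terry model assigns each item a single score, it cannot represent cyclic (intransitive) preferences such as A over B, B over C, and C over A; handling such data is the gap the salient feature preference model of Chapter 2 targets.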

