
Robustness of Fairness in Machine Learning

dc.contributor.author: Kamp, Serafina
dc.contributor.author: Luis Li Zhao, Andong
dc.contributor.author: Kutty, Sindhu
dc.contributor.advisor: Kutty, Sindhu
dc.date.accessioned: 2023-05-26T17:55:01Z
dc.date.available: 2023-05-26T17:55:01Z
dc.date.issued: 2022
dc.identifier.uri: https://hdl.handle.net/2027.42/176722
dc.description.abstract: As machine learning algorithms become widely used in society, certain subgroups are more at risk of being harmed by unfair treatment. Fairness metrics have been proposed to quantify this harm by measuring certain statistics with respect to an evaluation dataset. In this work, we seek to analyze how robust these metrics are. That is, we are interested in whether these metrics give the same "fairness score" when measured on different sets of samples from the same distribution. This is important because it gives us insight into how much we can trust the conclusions given by a fairness metric prior to deployment of a model. We design a framework to conduct experiments to test the robustness of a popular fairness metric. We find that, when compared to more traditional performance metrics, it is more sensitive to fluctuations in the evaluation dataset in a variety of settings. Additionally, our work provides a foundation for studying the robustness of fairness metrics in general.
dc.subject: machine learning
dc.subject: robustness
dc.subject: fairness
dc.title: Robustness of Fairness in Machine Learning
dc.type: Project
dc.subject.hlbtoplevel: Engineering
dc.description.peerreviewed: NA
dc.contributor.affiliationum: Computer Science Engineering
dc.contributor.affiliationumcampus: Ann Arbor
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/176722/1/Honors_Capstone_Fairness_ML_-_Serafina_Kamp.pdf
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/176722/2/Honors_Capstone_Fairness_Poster_-_Serafina_Kamp.pptx
dc.identifier.doi: https://dx.doi.org/10.7302/7571
dc.working.doi: 10.7302/7571
dc.owningcollname: Honors Program, The College of Engineering
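The robustness question posed in the abstract — does a fairness metric give the same score across different samples from the same distribution? — can be illustrated with a minimal sketch. This is not the authors' framework; the demographic-parity-difference metric, the synthetic data, and the bootstrap design below are all illustrative assumptions. The sketch compares the spread of a fairness score to the spread of accuracy over resampled evaluation sets:

```python
import numpy as np

rng = np.random.default_rng(0)

def demographic_parity_diff(y_pred, group):
    # |P(y_pred = 1 | group = 0) - P(y_pred = 1 | group = 1)|
    # (one common fairness metric; chosen here purely for illustration)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def accuracy(y_pred, y_true):
    return (y_pred == y_true).mean()

# Synthetic evaluation data (assumed, not from the paper):
# predictions are made slightly group-dependent on purpose.
n = 2000
group = rng.integers(0, 2, size=n)
y_true = rng.integers(0, 2, size=n)
y_pred = (rng.random(n) < 0.45 + 0.1 * group).astype(int)

# Bootstrap resampling: re-draw the evaluation set many times and
# record how much each metric fluctuates across resamples.
fairness_scores, acc_scores = [], []
for _ in range(200):
    idx = rng.integers(0, n, size=n)
    fairness_scores.append(demographic_parity_diff(y_pred[idx], group[idx]))
    acc_scores.append(accuracy(y_pred[idx], y_true[idx]))

print("fairness score std over resamples:", np.std(fairness_scores))
print("accuracy std over resamples:", np.std(acc_scores))
```

A larger standard deviation for the fairness score than for accuracy, on the same resamples, is the kind of sensitivity the abstract describes; group-conditional metrics average over smaller subpopulations, so they tend to fluctuate more.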


