Towards Trustworthy Machine Learning on Graph Data
dc.contributor.author | Ma, Jiaqi | |
dc.date.accessioned | 2022-09-06T15:59:07Z | |
dc.date.available | 2022-09-06T15:59:07Z | |
dc.date.issued | 2022 | |
dc.date.submitted | 2022 | |
dc.identifier.uri | https://hdl.handle.net/2027.42/174201 | |
dc.description.abstract | Machine learning has been applied to an increasing number of socially relevant scenarios that influence our daily lives, ranging from social media and e-commerce to self-driving cars and criminal justice. It is therefore crucial to develop trustworthy machine learning methods that perform reliably, in order to avoid negative impacts on individuals and society. In this dissertation, we focus on understanding and improving the trustworthiness of graph machine learning, which poses unique challenges due to the complex relational structure of graph data. In particular, we view the trustworthiness of a machine learning model as its reliability under exceptional conditions. For example, the performance of a machine learning model should not degrade severely under adversarial attacks or on a subpopulation; these correspond to the problems of adversarial robustness and fairness, respectively. The unique challenge for trustworthy graph machine learning is that graph data gives rise to many more complicated, and sometimes implicit, exceptional conditions. This dissertation identifies under-explored exceptional conditions, characterizes the expected model behavior under them, and improves existing models accordingly. Specifically, we focus on graph neural networks (GNNs), a family of popular graph machine learning models that leverage recent advances in deep learning. In this dissertation, we identify three exceptional conditions of GNNs. First, we study the adversarial robustness of GNNs under a new and practical threat model inspired by social network application scenarios, and investigate when and why GNNs may suffer from adversarial attacks. Second, we find that existing GNNs can be misspecified for many real-world graph datasets and develop a novel framework to improve existing models. Finally, we discover a type of unfairness in GNN predictions among subpopulations of test nodes that is related to the nodes' structural positions, and we propose an active learning framework to mitigate this unfairness. | |
dc.language.iso | en_US | |
dc.subject | Trustworthy Machine Learning | |
dc.subject | Graph Machine Learning | |
dc.title | Towards Trustworthy Machine Learning on Graph Data | |
dc.type | Thesis | |
dc.description.thesisdegreename | PhD | en_US |
dc.description.thesisdegreediscipline | Information | |
dc.description.thesisdegreegrantor | University of Michigan, Horace H. Rackham School of Graduate Studies | |
dc.contributor.committeemember | Mei, Qiaozhu | |
dc.contributor.committeemember | Zhu, Ji | |
dc.contributor.committeemember | Koutra, Danai | |
dc.contributor.committeemember | Romero, Daniel M | |
dc.subject.hlbsecondlevel | Computer Science | |
dc.subject.hlbtoplevel | Engineering | |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/174201/1/jiaqima_1.pdf | |
dc.identifier.doi | https://dx.doi.org/10.7302/5932 | |
dc.identifier.orcid | 0000-0001-8292-5901 | |
dc.identifier.name-orcid | Ma, Jiaqi; 0000-0001-8292-5901 | en_US |
dc.working.doi | 10.7302/5932 | en |
dc.owningcollname | Dissertations and Theses (Ph.D. and Master's) |