
Towards Trustworthy Machine Learning on Graph Data

dc.contributor.author: Ma, Jiaqi
dc.date.accessioned: 2022-09-06T15:59:07Z
dc.date.available: 2022-09-06T15:59:07Z
dc.date.issued: 2022
dc.date.submitted: 2022
dc.identifier.uri: https://hdl.handle.net/2027.42/174201
dc.description.abstract: Machine learning has been applied to more and more socially relevant scenarios that influence our daily lives, ranging from social media and e-commerce to self-driving cars and criminal justice. It is therefore crucial to develop trustworthy machine learning methods that perform reliably, in order to avoid negative impacts on individuals and society. In this dissertation, we focus on understanding and improving the trustworthiness of graph machine learning, which poses unique challenges due to the complex relational structure of graph data. In particular, we view the trustworthiness of a machine learning model as its reliability under exceptional conditions. For example, the performance of a machine learning model should not degrade seriously under adversarial attacks or on a subpopulation, which correspond to the problems of adversarial robustness and fairness, respectively. The unique challenge for trustworthy graph machine learning is that graph data give rise to many more complicated, and sometimes implicit, exceptional conditions. This dissertation identifies under-explored exceptional conditions, characterizes the expected model behavior under them, and improves existing models accordingly. Specifically, we focus on graph neural networks (GNNs), a family of popular graph machine learning models that leverage recent advances in deep learning. In this dissertation, we identify three exceptional conditions of GNNs. First, we study the adversarial robustness of GNNs with a new and practical threat model inspired by social network application scenarios, and investigate when and why GNNs may suffer from such attacks. Second, we find that existing GNNs can be misspecified for many real-world graph data and develop a novel framework to improve existing models. Finally, we discover a type of unfairness in GNN predictions among subpopulations of test nodes that is related to the structural positions of the nodes, and we propose an active learning framework to mitigate it.
dc.language.iso: en_US
dc.subject: Trustworthy Machine Learning
dc.subject: Graph Machine Learning
dc.title: Towards Trustworthy Machine Learning on Graph Data
dc.type: Thesis
dc.description.thesisdegreename: PhD (en_US)
dc.description.thesisdegreediscipline: Information
dc.description.thesisdegreegrantor: University of Michigan, Horace H. Rackham School of Graduate Studies
dc.contributor.committeemember: Mei, Qiaozhu
dc.contributor.committeemember: Zhu, Ji
dc.contributor.committeemember: Koutra, Danai
dc.contributor.committeemember: Romero, Daniel M
dc.subject.hlbsecondlevel: Computer Science
dc.subject.hlbtoplevel: Engineering
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/174201/1/jiaqima_1.pdf
dc.identifier.doi: https://dx.doi.org/10.7302/5932
dc.identifier.orcid: 0000-0001-8292-5901
dc.identifier.name-orcid: Ma, Jiaqi; 0000-0001-8292-5901 (en_US)
dc.working.doi: 10.7302/5932 (en)
dc.owningcollname: Dissertations and Theses (Ph.D. and Master's)