
Designing and Evaluating Physical Adversarial Attacks and Defenses for Machine Learning Algorithms

dc.contributor.author: Eykholt, Kevin
dc.date.accessioned: 2020-01-27T16:23:31Z
dc.date.available: NO_RESTRICTION
dc.date.available: 2020-01-27T16:23:31Z
dc.date.issued: 2019
dc.date.submitted: 2019
dc.identifier.uri: https://hdl.handle.net/2027.42/153373
dc.description.abstract: Studies show that state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples: small-magnitude perturbations added to the input in a calculated fashion that induce mistakes in the network's output. However, despite wide interest and numerous works, there have been only limited studies of the impact of adversarial attacks in the physical world, and those studies lack well-developed, robust methodologies for attacking real physical systems. In this dissertation, we first explore the technical requirements for generating physical adversarial inputs through the manipulation of physical objects. Based on our analysis, we design a new adversarial attack algorithm, Robust Physical Perturbations (RPP), which computes the modifications necessary to ensure that the modified object remains adversarial across numerous varied viewpoints. We show that the RPP attack produces physical adversarial inputs for classification tasks as well as for object detection tasks, which, prior to our work, were considered resistant. We then develop a defensive technique, robust feature augmentation, to mitigate the effect of adversarial inputs both digitally and physically. We hypothesize that the input to a machine learning algorithm contains predictive feature information that a bounded adversary is unable to manipulate in order to cause classification errors. By identifying and extracting this adversarially robust feature information, we can obtain evidence of the possible set of correct output labels and adjust the classification decision accordingly. As adversarial inputs are a human-defined phenomenon, we utilize human-recognizable features to identify adversarially robust, predictive feature information for a given problem domain. Given the safety-critical nature of autonomous driving, we focus our study on traffic sign classification and localization tasks.
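The abstract describes optimizing a bounded, masked perturbation so that the modified object stays adversarial under many viewpoints. The following is a minimal illustrative sketch of that idea in PyTorch; the model, the transformation set (simple pixel shifts standing in for viewpoint changes), the mask, and all hyperparameters are assumptions chosen for illustration, not the dissertation's actual RPP implementation.

import torch
import torch.nn.functional as F

def robust_perturbation(model, x, true_label, mask,
                        steps=100, lr=0.01, lam=1e-3, eps=0.1):
    """Optimize a masked perturbation that remains adversarial under a
    small sampled set of transformations.

    x: input batch (N, C, H, W); true_label: LongTensor of class indices;
    mask: same shape as x, restricting where the perturbation may appear.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Regularizer: keep the (masked) perturbation small.
        loss = lam * torch.norm(mask * delta, p=2)
        # Approximate an expectation over transformations: the perturbed
        # input should be misclassified from several simulated viewpoints.
        for shift in (-2, 0, 2):
            x_adv = torch.roll(x + mask * delta, shifts=shift, dims=-1)
            logits = model(x_adv)
            # Untargeted attack: maximize loss on the true label.
            loss = loss - F.cross_entropy(logits, true_label)
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # bounded adversary
    return (mask * delta).detach()

Averaging the attack loss over sampled transformations, rather than attacking a single view, is what makes the resulting perturbation survive changes in viewpoint; a real physical attack would use a richer transformation set (rotation, scaling, lighting) than the pixel shifts assumed here.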
dc.language.iso: en_US
dc.subject: Machine Learning
dc.subject: Security
dc.title: Designing and Evaluating Physical Adversarial Attacks and Defenses for Machine Learning Algorithms
dc.type: Thesis
dc.description.thesisdegreename: PhD
dc.description.thesisdegreediscipline: Computer Science & Engineering
dc.description.thesisdegreegrantor: University of Michigan, Horace H. Rackham School of Graduate Studies
dc.contributor.committeemember: Prakash, Atul
dc.contributor.committeemember: Kamat, Vineet Rajendra
dc.contributor.committeemember: Li, Bo
dc.contributor.committeemember: Mao, Z Morley
dc.subject.hlbsecondlevel: Computer Science
dc.subject.hlbtoplevel: Engineering
dc.description.bitstreamurl: https://deepblue.lib.umich.edu/bitstream/2027.42/153373/1/keykholt_1.pdf
dc.identifier.orcid: 0000-0002-7040-1657
dc.identifier.name-orcid: Eykholt, Kevin; 0000-0002-7040-1657
dc.owningcollname: Dissertations and Theses (Ph.D. and Master's)


