Show simple item record

Adversarial Approximation of a Black-Box Malware Detector

dc.contributor.author: Ali, Abdullah
dc.contributor.advisor: Eshete, Birhanu
dc.date.accessioned: 2020-01-06T18:51:29Z
dc.date.available: NO_RESTRICTION (en_US)
dc.date.available: 2020-01-06T18:51:29Z
dc.date.issued: 2019-12-14
dc.date.submitted: 2019-12-13
dc.identifier.uri: https://hdl.handle.net/2027.42/152459
dc.description.abstract: A deployed machine learning-based malware detection model is effectively a black-box for an adversary whose objective is evading the model. In such a setting, the adversary has no access to details of the black-box except its prediction on a given input. With such limited leverage, the adversary has no choice but to explore avenues to infer the model's decision boundary, based on which adversarial inputs are crafted to evade it. Inferring the best approximation of a black-box model's decision boundary is a non-trivial exercise for which an exact solution is unattainable, because there are exponentially many combinations of model architectures, parameters, and training examples to explore. In this context, the adversary prefers an optimal strategy that yields the best approximation of the black-box with minimal effort. This thesis presents a novel adversarial approximation approach for a black-box malware detector. Beginning with a publicly accessible input set for the black-box model, our approach leverages recent advances in image transformation for deep neural networks and the transferability of knowledge from publicly available pre-trained models to obtain an acceptable approximation of a black-box malware detector. Experimental evaluation against a black-box model that is 93% accurate, trained on raw-byte sequence features of benign and malicious Windows executables, yields an approximator that is up to 92% accurate and leverages the Inception V3 pre-trained model. On a comparison dataset disjoint from both the black-box's and the approximator's training sets, our approach achieved 90.1% similarity between the target black-box and the approximated model, demonstrating the viability of our approach for approximating black-box malware detectors with optimal effort. (en_US)
dc.language.iso: en_US (en_US)
dc.subject: Security (en_US)
dc.subject: Adversarial machine learning (en_US)
dc.subject: Malware detection (en_US)
dc.subject: Blackbox (en_US)
dc.subject.other: Computer Science (en_US)
dc.title: Adversarial Approximation of a Black-Box Malware Detector (en_US)
dc.type: Thesis (en_US)
dc.description.thesisdegreename: Master of Science (MS) (en_US)
dc.description.thesisdegreediscipline: Computer and Information Science, College of Engineering & Computer Science (en_US)
dc.description.thesisdegreegrantor: University of Michigan-Dearborn (en_US)
dc.contributor.committeemember: Ma, Di
dc.contributor.committeemember: Roy, Probir
dc.identifier.uniqname: 3396 2831 (en_US)
dc.description.bitstreamurl: https://deepblue.lib.umich.edu/bitstream/2027.42/152459/1/Abdullah Ali - Final Thesis.pdf
dc.identifier.orcid: 0000-0003-4058-3397 (en_US)
dc.description.filedescription: Description of Abdullah Ali - Final Thesis.pdf : Thesis
dc.identifier.name-orcid: Ali, Abdullah; 0000-0003-4058-3397 (en_US)
dc.owningcollname: Dissertations and Theses (Ph.D. and Master's)
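
The abstract above describes a pipeline that renders a binary's raw bytes as an image and then transfers knowledge from a pre-trained Inception V3 model to mimic the black-box's decisions. Below is a minimal, hypothetical sketch of that idea, not the thesis's actual implementation: the bytes_to_image helper, the fixed 256-byte row width, the frozen-base plus sigmoid-head architecture, and the black_box_predict oracle are all assumptions made for illustration.

    # Hypothetical sketch: approximate a black-box malware detector via
    # byte-to-image transformation and transfer learning from Inception V3.
    import numpy as np
    import tensorflow as tf

    def bytes_to_image(path, side=299):
        # Render a file's raw bytes as a square 3-channel image sized for
        # Inception V3. The 256-byte row width is an assumed layout.
        data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
        width = 256
        rows = -(-len(data) // width)                 # ceiling division
        padded = np.zeros(rows * width, dtype=np.uint8)
        padded[:len(data)] = data
        img = padded.reshape(rows, width)[..., None]  # (rows, width, 1) grayscale
        img = tf.image.resize(img, (side, side))      # bilinear resize, float32
        return tf.repeat(img, 3, axis=-1) / 255.0     # replicate channel to RGB

    # Transfer learning: reuse frozen ImageNet features; train only a small
    # benign/malware head on labels obtained by querying the black-box.
    base = tf.keras.applications.InceptionV3(
        weights="imagenet", include_top=False, pooling="avg",
        input_shape=(299, 299, 3))
    base.trainable = False
    approximator = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(malware)
    ])
    approximator.compile(optimizer="adam",
                         loss="binary_crossentropy",
                         metrics=["accuracy"])

    # Hypothetical usage: label each sample by querying the black-box.
    # x = tf.stack([bytes_to_image(p) for p in sample_paths])
    # y = np.array([black_box_predict(p) for p in sample_paths])  # 0/1 oracle labels
    # approximator.fit(x, y, epochs=5, batch_size=32)

Training the head on the black-box's own predictions rather than ground-truth labels is what makes the approximator track the target's decision boundary, and the pre-trained base is what keeps the required number of labeled queries small.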

