Toward Secure and Safe Autonomous Driving: An Adversary's Perspective

dc.contributor.author: Cao, Yulong
dc.date.accessioned: 2023-05-25T14:46:42Z
dc.date.available: 2023-05-25T14:46:42Z
dc.date.issued: 2023
dc.date.submitted: 2023
dc.identifier.uri: https://hdl.handle.net/2027.42/176634
dc.description.abstract: Autonomous vehicles, also known as self-driving cars, are being developed at a rapid pace due to advances in machine learning. However, the real world is complex and dynamic, with many factors that can affect the performance of an autonomous driving (AD) system. It is therefore essential to thoroughly test and evaluate AD systems to ensure their safety and reliability in the open-world driving environment. Additionally, because AD systems have a high impact on road safety, it is important to build AD systems that are robust against adversaries. However, fully testing and exploiting AD systems is challenging due to their complexity: they consist of a combination of sensors, systems, and machine learning models. To address these challenges, my dissertation research focuses on building secure and safe AD systems through systematic analysis of attackers' capabilities. This involves testing AD systems as a whole, using realistic attacks, and discovering new security problems through proactive analysis. To achieve this goal, my dissertation starts by formulating realistic attacker capabilities against perception systems. Based on this formulation, new attacks on perception systems are discovered that have different impacts (e.g., spoofing ghost objects or removing detected objects). We propose two frameworks, adv-LiDAR and LiDAR-adv, that make LiDAR-based perception systems differentiable and automatically generate effective adversarial examples. As a result, we also demonstrate that the proposed attacks can lead to vehicle-level impacts such as emergency braking or collisions. Next, causality analysis is conducted to expose fundamental limitations of the system (e.g., large receptive fields introducing new attack vectors), providing insights and guidelines for designing more robust systems in the future. By evaluating the adversarial robustness of different semantic segmentation models, we unveil the fundamental limitations of using large receptive fields. Specifically, we validate our findings with the remote adversarial patch (RAP) attack, which can mislead the prediction for a target object without directly accessing or adding adversarial perturbations to it. Finally, solutions are developed to improve both the modular and the integrated robustness of the systems. By leveraging adversarial examples, the training dataset for machine learning models can be augmented to naturally improve modular robustness; we demonstrate that, with robustly trained trajectory prediction models, AD systems can avoid collisions under adversarial attacks. In addition, using insights from the causality analysis and the formulated attacker capabilities, AD systems with enhanced integrated robustness can be designed. (A minimal illustrative sketch of this gradient-based attack recipe appears after the record below.)
dc.language.iso: en_US
dc.subject: Secure and Safe Autonomous Driving Systems
dc.subject: Cyber Physical System Security
dc.title: Toward Secure and Safe Autonomous Driving: An Adversary's Perspective
dc.type: Thesis
dc.description.thesisdegreename: PhD (en_US)
dc.description.thesisdegreediscipline: Computer Science & Engineering
dc.description.thesisdegreegrantor: University of Michigan, Horace H. Rackham School of Graduate Studies
dc.contributor.committeemember: Mao, Z Morley
dc.contributor.committeemember: Liu, Mingyan
dc.contributor.committeemember: Fu, Kevin
dc.contributor.committeemember: Prakash, Atul
dc.contributor.committeemember: Xiao, Chaowei
dc.subject.hlbsecondlevel: Computer Science
dc.subject.hlbtoplevel: Engineering
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/176634/1/yulongc_1.pdf
dc.identifier.doi: https://dx.doi.org/10.7302/7483
dc.identifier.orcid: 0000-0003-3007-2550
dc.identifier.name-orcid: Cao, Yulong; 0000-0003-3007-2550 (en_US)
dc.working.doi: 10.7302/7483 (en)
dc.owningcollname: Dissertations and Theses (Ph.D. and Master's)
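
For illustration only, here is a minimal PyTorch sketch of the general attack recipe the abstract describes: treat the perception pipeline as a differentiable function and optimize a bounded perturbation on the input point cloud. The names below (model, attack_point_cloud, the Adam loop, the eps bound) are hypothetical stand-ins under assumed input/output shapes, not the dissertation's actual adv-LiDAR or LiDAR-adv implementations.

    # Hypothetical sketch: gradient-based adversarial example generation
    # against a differentiable stand-in `model` for a LiDAR perception
    # pipeline. Not code from the dissertation.
    import torch

    def attack_point_cloud(model, points, target, steps=100, lr=0.01, eps=0.1):
        # Optimize a bounded perturbation `delta` of the point cloud so the
        # model's prediction moves toward `target` (e.g., spoofing a ghost
        # object or suppressing a detected one).
        delta = torch.zeros_like(points, requires_grad=True)
        optimizer = torch.optim.Adam([delta], lr=lr)
        loss_fn = torch.nn.CrossEntropyLoss()
        for _ in range(steps):
            optimizer.zero_grad()
            logits = model(points + delta)   # forward through the perception model
            loss = loss_fn(logits, target)   # pull the output toward the target
            loss.backward()
            optimizer.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)      # keep the perturbation small and physically plausible
        return (points + delta).detach()

Adversarial examples produced this way can also be folded back into the training set, which is the adversarial-training route to the modular robustness improvements mentioned at the end of the abstract.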


