Toward Secure and Safe Autonomous Driving: an Adversary's Perspective
dc.contributor.author | Cao, Yulong | |
dc.date.accessioned | 2023-05-25T14:46:42Z | |
dc.date.available | 2023-05-25T14:46:42Z | |
dc.date.issued | 2023 | |
dc.date.submitted | 2023 | |
dc.identifier.uri | https://hdl.handle.net/2027.42/176634 | |
dc.description.abstract | Autonomous vehicles, also known as self-driving cars, are being developed at a rapid pace due to advances in machine learning. However, the real world is complex and dynamic, with many factors that can affect the performance of an autonomous driving (AD) system. It is therefore essential to thoroughly test and evaluate AD systems to ensure their safety and reliability in the open-world driving environment. In addition, because AD systems have a high impact on road safety, it is important to build AD systems that are robust against adversaries. However, fully testing and exploiting AD systems is challenging because of their complexity: they combine sensors, software systems, and machine learning models. To address these challenges, my dissertation research focuses on building secure and safe AD systems through systematic analysis of attackers' capabilities. This involves testing AD systems as a whole, using realistic attacks, and discovering new security problems through proactive analysis. To achieve this goal, my dissertation starts by formulating realistic attacker capabilities against perception systems. Based on these capabilities, new attacks on perception systems are discovered that have different impacts (e.g., spoofing ghost objects or removing detected objects). We propose two frameworks, adv-LiDAR and LiDAR-adv, which make LiDAR-based perception pipelines differentiable and automatically generate effective adversarial examples. As a result, we demonstrate that the proposed attacks can lead to vehicle-level impacts such as emergency braking or collisions. Next, causality analysis is conducted to expose fundamental limitations of the system (e.g., large receptive fields introducing new attack vectors), providing insights and guidelines for designing more robust systems in the future. By evaluating the adversarial robustness of different semantic segmentation models, we unveil the fundamental limitations of large receptive fields. Specifically, we validate our findings using the remote adversarial patch (RAP) attack, which can mislead the prediction for a target object without directly adding adversarial perturbations to it. Finally, solutions are developed to improve the modular and integrated robustness of AD systems. By leveraging adversarial examples, the training dataset for machine learning models can be augmented to naturally improve modular robustness. We demonstrate that, with robustly trained trajectory prediction models, AD systems can avoid collisions under adversarial attacks. Furthermore, using insights from the causality analysis and the formulated attacker capabilities, AD systems with enhanced integrated robustness can be designed. | |
dc.language.iso | en_US | |
dc.subject | Secure and Safe Autonomous Driving Systems | |
dc.subject | Cyber Physical System Security | |
dc.title | Toward Secure and Safe Autonomous Driving: an Adversary's Perspective | |
dc.type | Thesis | |
dc.description.thesisdegreename | PhD | en_US |
dc.description.thesisdegreediscipline | Computer Science & Engineering | |
dc.description.thesisdegreegrantor | University of Michigan, Horace H. Rackham School of Graduate Studies | |
dc.contributor.committeemember | Mao, Z Morley | |
dc.contributor.committeemember | Liu, Mingyan | |
dc.contributor.committeemember | Fu, Kevin | |
dc.contributor.committeemember | Prakash, Atul | |
dc.contributor.committeemember | Xiao, Chaowei | |
dc.subject.hlbsecondlevel | Computer Science | |
dc.subject.hlbtoplevel | Engineering | |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/176634/1/yulongc_1.pdf | |
dc.identifier.doi | https://dx.doi.org/10.7302/7483 | |
dc.identifier.orcid | 0000-0003-3007-2550 | |
dc.identifier.name-orcid | Cao, Yulong; 0000-0003-3007-2550 | en_US |
dc.working.doi | 10.7302/7483 | en |
dc.owningcollname | Dissertations and Theses (Ph.D. and Master's) |
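The abstract above describes augmenting training data with adversarial examples to improve the modular robustness of trajectory prediction. The Python sketch below illustrates that general idea only under stated assumptions: the toy model, the PGD hyper-parameters, and the synthetic data are hypothetical placeholders and do not reproduce the dissertation's actual models, datasets, or threat model. Mixing clean and perturbed batches is one common design choice that preserves benign accuracy while penalizing worst-case error inside the perturbation budget.

import torch
import torch.nn as nn

# Minimal sketch of adversarial-example data augmentation for a trajectory
# predictor. Everything below (architecture, loss, PGD settings) is an
# illustrative assumption, not the dissertation's implementation.

class ToyTrajectoryPredictor(nn.Module):
    """Hypothetical stand-in: maps an observed (t_obs x 2) history to a
    predicted (t_pred x 2) future trajectory."""
    def __init__(self, t_obs=8, t_pred=12):
        super().__init__()
        self.t_pred = t_pred
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(t_obs * 2, 64),
            nn.ReLU(),
            nn.Linear(64, t_pred * 2),
        )

    def forward(self, x):
        return self.net(x).view(-1, self.t_pred, 2)


def pgd_perturb(model, obs, target, eps=0.1, alpha=0.02, steps=10):
    """Projected gradient descent on the observed history (L-inf ball of radius eps)."""
    loss_fn = nn.MSELoss()
    delta = torch.zeros_like(obs, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(obs + delta), target)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascend the prediction error
            delta.clamp_(-eps, eps)             # stay within the perturbation budget
        delta.grad.zero_()
    return (obs + delta).detach()


model = ToyTrajectoryPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    obs = torch.randn(32, 8, 2)       # synthetic observed trajectories (placeholder data)
    target = torch.randn(32, 12, 2)   # synthetic ground-truth futures (placeholder data)
    adv_obs = pgd_perturb(model, obs, target)
    opt.zero_grad()                   # clear gradients accumulated during PGD
    loss = loss_fn(model(obs), target) + loss_fn(model(adv_obs), target)
    loss.backward()
    opt.step()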