Reliable Reinforcement Learning for Decision-Making in Autonomous Driving
dc.contributor.author | Wen, Lu | |
dc.date.accessioned | 2024-09-03T18:40:34Z | |
dc.date.available | 2024-09-03T18:40:34Z | |
dc.date.issued | 2024 | |
dc.date.submitted | 2024 | |
dc.identifier.uri | https://hdl.handle.net/2027.42/194606 | |
dc.description.abstract | Autonomous driving technology has made significant strides due to advances in artificial intelligence, sensor technology, and computational power. However, deploying autonomous vehicles (AVs) in real-world scenarios remains challenging due to safety concerns, the need for generalizability across diverse environments, and the demand for interpretable decision-making. This dissertation addresses these challenges by developing reliable reinforcement learning (RL) algorithms tailored to autonomous driving decision-making, providing a comprehensive framework for building robust RL-based solutions and moving toward safer and more efficient autonomous transportation systems. First, we introduce a safe-RL-based solution that ensures safety during both training and deployment. This is achieved by formulating the learning problem as a constrained optimization problem and applying a parallel training strategy to improve training efficiency and the likelihood of reaching an optimal policy. Second, we present meta-RL-based solutions designed to enhance the generalizability of policies. By incorporating safety into the exploration of prior policies, we ensure the policy is safe before it adapts to new tasks. Additionally, leveraging task interpolation and data augmentation improves the data efficiency of current meta-RL techniques while maintaining the same level of generalization performance. Third, we propose an interpretable decision-making solution: an intention-aware approach that uses a hierarchical architecture to generate driving intentions and corresponding trajectories. This approach improves the interpretability of the decision-making process, facilitating better interaction with surrounding traffic participants and enhancing overall system performance. The effectiveness of these contributions is demonstrated through a series of experiments in simulated environments and on datasets, focusing on tasks such as lane-keeping, intersection crossing, and highway merging. Our results show significant improvements in safety, generalizability, and interpretability, bridging the gap between simulation-based RL approaches and real-world deployment. | |
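Note: the first contribution above frames safe RL as a constrained optimization problem. Purely as an illustration, and not the dissertation's algorithm, the Python sketch below shows the generic Lagrangian primal-dual pattern such formulations typically use: a policy-gradient step on reward minus a lambda-weighted cost, plus a dual-ascent step on the multiplier whenever the cost budget is exceeded. The toy environment, the cost budget, and all names and hyperparameters are assumptions made for the sketch.

# Minimal sketch of a Lagrangian-based constrained policy update (illustrative only;
# not the method described in the dissertation). Toy 1-D Gaussian policy, synthetic
# reward/cost, and all hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def rollout(theta, horizon=20):
    """One toy episode: reward tracks progress, cost counts constraint violations."""
    ret, cost, grad_logp = 0.0, 0.0, 0.0
    for _ in range(horizon):
        a = rng.normal(theta, 1.0)        # Gaussian policy N(theta, 1)
        ret += a                          # illustrative reward: forward progress
        cost += float(abs(a) > 1.0)       # illustrative cost: a "violation" event
        grad_logp += (a - theta)          # d/d theta of log N(a | theta, 1)
    return ret, cost, grad_logp

theta, lam = 0.0, 0.0                     # policy parameter, Lagrange multiplier
cost_budget = 8.0                         # assumed allowed expected cost per episode
lr_theta, lr_lam = 1e-4, 1e-2

for _ in range(5000):
    ret, cost, grad_logp = rollout(theta)
    # Primal step: ascend the Lagrangian (reward minus lambda-weighted cost).
    theta += lr_theta * (ret - lam * cost) * grad_logp
    # Dual step: raise lambda when the episode cost exceeds the budget.
    lam = max(0.0, lam + lr_lam * (cost - cost_budget))

print(f"theta={theta:.2f}  lambda={lam:.2f}")

In practice the same primal-dual structure would be applied with neural-network policies and learned reward/cost critics; the scalar policy here only makes the update rule explicit.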
dc.language.iso | en_US | |
dc.subject | autonomous driving | |
dc.subject | reinforcement learning | |
dc.title | Reliable Reinforcement Learning for Decision-Making in Autonomous Driving | |
dc.type | Thesis | |
dc.description.thesisdegreename | PhD | |
dc.description.thesisdegreediscipline | Mechanical Engineering | |
dc.description.thesisdegreegrantor | University of Michigan, Horace H. Rackham School of Graduate Studies | |
dc.contributor.committeemember | Liu, Mingyan | |
dc.contributor.committeemember | Girard, Anouck Renee | |
dc.contributor.committeemember | Peng, Huei | |
dc.contributor.committeemember | Orosz, Gabor | |
dc.contributor.committeemember | Vasudevan, Ram | |
dc.subject.hlbsecondlevel | Mechanical Engineering | |
dc.subject.hlbtoplevel | Engineering | |
dc.contributor.affiliationumcampus | Ann Arbor | |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/194606/1/lulwen_1.pdf | |
dc.identifier.doi | https://dx.doi.org/10.7302/23954 | |
dc.identifier.orcid | 0000-0002-8197-8195 | |
dc.identifier.name-orcid | WEN, LU; 0000-0002-8197-8195 | en_US |
dc.working.doi | 10.7302/23954 | en |
dc.owningcollname | Dissertations and Theses (Ph.D. and Master's) |