An Information Theoretic Framework for Camera and Lidar Sensor Data Fusion and its Applications in Autonomous Navigation of Vehicles.
dc.contributor.author | Pandey, Gaurav | en_US |
dc.date.accessioned | 2014-06-02T18:16:23Z | |
dc.date.available | NO_RESTRICTION | en_US |
dc.date.available | 2014-06-02T18:16:23Z | |
dc.date.issued | 2014 | en_US |
dc.date.submitted | | en_US |
dc.identifier.uri | https://hdl.handle.net/2027.42/107286 | |
dc.description.abstract | This thesis develops an information theoretic framework for multi-modal sensor data fusion for robust autonomous navigation of vehicles. In particular, we focus on the registration of data from 3D lidar and camera, two commonly used perception sensors in mobile robotics. This thesis presents a framework that allows the fusion of the two modalities and uses this fused information to enhance state-of-the-art registration algorithms used in robotics applications. It is important to note that the time-aligned discrete signals (3D points and their reflectivity from lidar, and pixel location and color from camera) are generated by sampling the same physical scene, but in different ways. Thus, although these signals look quite different at a high level (a 2D image from a camera looks entirely different from a 3D point cloud of the same scene from a lidar), they are statistically dependent upon each other at the signal level because they are generated from the same physical scene. This thesis exploits this statistical dependence in an information theoretic framework to solve some of the common problems encountered in autonomous navigation tasks, such as sensor calibration, scan registration, and place recognition. In a general sense, we consider these perception sensors as sources of information (i.e., sensor data), and the statistical dependence of this information (obtained from different modalities) is used to solve problems related to multi-modal sensor data registration. | en_US |
dc.language.iso | en_US | en_US |
dc.subject | Computer Vision | en_US |
dc.subject | Robotics | en_US |
dc.subject | Sensor Calibration | en_US |
dc.subject | SLAM | en_US |
dc.subject | Scan Registration | en_US |
dc.subject | Sensor Fusion | en_US |
dc.title | An Information Theoretic Framework for Camera and Lidar Sensor Data Fusion and its Applications in Autonomous Navigation of Vehicles. | en_US |
dc.type | Thesis | en_US |
dc.description.thesisdegreename | PhD | en_US |
dc.description.thesisdegreediscipline | Electrical Engineering: Systems | en_US |
dc.description.thesisdegreegrantor | University of Michigan, Horace H. Rackham School of Graduate Studies | en_US |
dc.contributor.committeemember | Savarese, Silvio | en_US |
dc.contributor.committeemember | Eustice, Ryan M. | en_US |
dc.contributor.committeemember | Lee, Honglak | en_US |
dc.contributor.committeemember | Hero III, Alfred O. | en_US |
dc.subject.hlbsecondlevel | Electrical Engineering | en_US |
dc.subject.hlbtoplevel | Engineering | en_US |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/107286/1/pgaurav_1.pdf | |
dc.owningcollname | Dissertations and Theses (Ph.D. and Master's) |