Constructing environment maps with an active vision system through information assimilation.
dc.contributor.author | Tirumalai, Arun P. | |
dc.contributor.advisor | Schunck, Brian G. | |
dc.date.accessioned | 2016-08-30T16:54:25Z | |
dc.date.available | 2016-08-30T16:54:25Z | |
dc.date.issued | 1991 | |
dc.identifier.uri | http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqm&rft_dat=xri:pqdiss:9124125 | |
dc.identifier.uri | https://hdl.handle.net/2027.42/128719 | |
dc.description.abstract | This dissertation describes an approach for constructing an environment map by multi-sensory information assimilation. The approach has been implemented using a binocular stereo vision system mounted on a mobile robot. The focus of this work is on the development of a complete system for constructing an environment map, dealing with issues of sensor modeling, data acquisition, data transformation, data registration, data fusion, robustness, and computational tractability. We draw on several fundamental results from estimation theory and robust statistics. We take the view that the approach used in a given situation should be determined by the application and the nature of the data that can be extracted from the available sensors. We first address the problem of building a coarse map of the environment, in the form of a certainty grid, using dynamic stereo. We present an approach for multi-sensory depth information assimilation based on Dempster-Shafer theory for evidential reasoning. This approach is suited for assimilating depth-related information from physically different sensors. It provides a mechanism to explicitly model ignorance, which is desirable when dealing with an unknown environment, and offers some representational advantages over the traditional Bayesian approach. We then deal with the problem of constructing high-resolution environment maps from pixel-level data. The representation sought here is in the form of a dense stereo disparity map, which is suitable for highly textured scenes. This representation could be useful for extracting higher-level features, in particular surface patches. Finally, we deal with the problem of recovering a wire-frame description of an environment, which is suitable for scenes containing mostly polyhedral objects. We present a segment-based stereo matching algorithm designed for dynamic stereo sequences. This algorithm utilizes a belief-function-based approach to compute a reliable wire-frame description. This representation, combined with geometric reasoning and model-based vision techniques, could be useful for object recognition tasks. | |
dc.format.extent | 200 p. | |
dc.language | English | |
dc.language.iso | EN | |
dc.subject | Active | |
dc.subject | Assimilation | |
dc.subject | Constructing | |
dc.subject | Environment | |
dc.subject | Information | |
dc.subject | Maps | |
dc.subject | Object Recognition | |
dc.subject | Robotics | |
dc.subject | System | |
dc.subject | Vision | |
dc.title | Constructing environment maps with an active vision system through information assimilation. | |
dc.type | Thesis | |
dc.description.thesisdegreename | PhD | en_US |
dc.description.thesisdegreediscipline | Applied Sciences | |
dc.description.thesisdegreediscipline | Computer science | |
dc.description.thesisdegreediscipline | Electrical engineering | |
dc.description.thesisdegreegrantor | University of Michigan, Horace H. Rackham School of Graduate Studies | |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/128719/2/9124125.pdf | |
dc.owningcollname | Dissertations and Theses (Ph.D. and Master's) |
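The abstract above mentions Dempster-Shafer evidential reasoning for assimilating depth evidence into a certainty grid, with ignorance modeled explicitly. The sketch below is purely illustrative and is not taken from the dissertation: it shows Dempster's rule of combination for a single grid cell over the assumed frame {occupied, empty}, with the mass labels, cell structure, and sensor values chosen here as examples.

```python
# Illustrative sketch (assumptions noted above, not the dissertation's code):
# combine two basic belief assignments for one certainty-grid cell, where each
# assignment puts mass on "occupied", "empty", and explicit "ignorance"
# (the whole frame {occupied, empty}).

def combine(m1, m2):
    """Dempster's rule of combination for a two-hypothesis frame."""
    # Conflict: mass jointly assigned to contradictory hypotheses.
    conflict = m1["occupied"] * m2["empty"] + m1["empty"] * m2["occupied"]
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    norm = 1.0 - conflict
    occupied = (m1["occupied"] * m2["occupied"]
                + m1["occupied"] * m2["ignorance"]
                + m1["ignorance"] * m2["occupied"]) / norm
    empty = (m1["empty"] * m2["empty"]
             + m1["empty"] * m2["ignorance"]
             + m1["ignorance"] * m2["empty"]) / norm
    ignorance = m1["ignorance"] * m2["ignorance"] / norm
    return {"occupied": occupied, "empty": empty, "ignorance": ignorance}

# Example: an uninformative prior (all mass on ignorance) combined with a
# hypothetical stereo depth reading that weakly supports "occupied".
prior = {"occupied": 0.0, "empty": 0.0, "ignorance": 1.0}
reading = {"occupied": 0.6, "empty": 0.1, "ignorance": 0.3}
print(combine(prior, reading))  # residual ignorance mass remains explicit
```

Unlike a Bayesian update, the leftover mass on "ignorance" is carried through the combination rather than being forced onto one of the two hypotheses, which is the representational advantage the abstract refers to.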