Machine recognition and attitude estimation of three-dimensional objects in intensity images.
dc.contributor.author | Gottschalk, Paul Gunther, III | |
dc.contributor.advisor | Mudge, Trevor | |
dc.date.accessioned | 2016-08-30T16:51:34Z | |
dc.date.available | 2016-08-30T16:51:34Z | |
dc.date.issued | 1990 | |
dc.identifier.uri | http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqm&rft_dat=xri:pqdiss:9034427 | |
dc.identifier.uri | https://hdl.handle.net/2027.42/128567 | |
dc.description.abstract | Previous approaches to recognition and attitude determination have made assumptions that limit their generality. The most subtle and pervasive such assumption is that 3-d objects always possess points, curves, or regions on their surfaces that reliably produce certain types of features when the objects are imaged. For example, the edges of a cube, and of similar polyhedral objects, do reliably produce linear edge contours in an image. However, this assumption, called the object-attached feature assumption, fails for objects with smooth surfaces. The work in this thesis focuses on the recognition and pose determination of 3-d objects in intensity images under realistic conditions without invoking the object-attached feature assumption. A complete framework for the recognition of objects of any shape is developed in this thesis. In particular, smooth and non-smooth objects are handled uniformly. At the heart of the framework is an attitude determination method based on numerically minimizing, over the viewing parameters, a carefully constructed measure of the disparity between the shape of the 2-d detected edge contours, derived from the image, and the shape of the 2-d predicted edge contours, derived from the 3-d model. Experimental results show that the method is both robust and efficient. Another key part of the recognition framework is obtaining initial hypotheses about the identity and viewing parameters of objects that may appear in the image. This is the task of the hypothesis generator. This thesis describes an approach to hypothesis generation that also adheres to the philosophy of not invoking the object-attached feature assumption, and is therefore applicable to objects of any shape. To accomplish this while keeping the efficiency of the hypothesis generator high enough to be practical, a fast, associative, multiview model database based on k-d trees was developed. Lastly, the issues involved in using features to represent shape for hypothesis generation are analysed. The conclusions lead to a new feature-based shape representation for hypothesis generation. | |
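The attitude determination method the abstract describes, numerically minimizing a contour-disparity measure over the viewing parameters, can be sketched roughly as follows. This is a minimal illustrative sketch, not the thesis's actual formulation: the square model, the single in-plane rotation parameter, and the one-sided nearest-point squared-distance measure are all assumptions made for the example.

```python
# Hypothetical sketch: recover an object's attitude by minimizing, over the
# viewing parameter(s), a disparity measure between predicted and detected
# 2-d edge contours. Here the "object" is a square and the only viewing
# parameter is an in-plane rotation angle -- illustrative assumptions only.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.spatial import cKDTree

def sample_square(n_per_edge):
    """Sample points on the boundary of a unit square centered at the origin."""
    t = np.linspace(0.0, 1.0, n_per_edge, endpoint=False)
    edges = [np.c_[t, np.zeros_like(t)], np.c_[np.ones_like(t), t],
             np.c_[1.0 - t, np.ones_like(t)], np.c_[np.zeros_like(t), 1.0 - t]]
    return np.vstack(edges) - 0.5

def predict_contour(angle, model_points):
    """Predicted 2-d contour: the model projected under the trial attitude."""
    c, s = np.cos(angle), np.sin(angle)
    return model_points @ np.array([[c, -s], [s, c]]).T

def disparity(angle, detected_tree, model_points):
    """Mean squared nearest-point distance from predicted to detected contour."""
    d, _ = detected_tree.query(predict_contour(angle, model_points))
    return np.mean(d ** 2)

model = sample_square(25)
# "Detected" contour: the object seen at an unknown attitude of 0.3 rad,
# sampled densely as an edge detector might produce.
detected_tree = cKDTree(predict_contour(0.3, sample_square(200)))

res = minimize_scalar(disparity, bounds=(0.0, np.pi / 4),
                      args=(detected_tree, model), method="bounded")
print(res.x)  # recovers an attitude close to 0.3 rad
```

In the thesis the viewing parameters are of course a full 3-d pose and the predicted contours come from a 3-d model; the point of the sketch is only the structure of the computation, a scalar disparity between contour shapes driven to a minimum by a numerical optimizer.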
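The fast, associative, multiview model database mentioned in the abstract can be pictured schematically as below. Everything concrete here is a stand-in: the two model names, the 3-d feature vectors, and the formula generating them are invented for illustration; only the organizing idea, indexing per-view shape features of every model in one k-d tree so image features retrieve (identity, view) hypotheses directly, comes from the abstract.

```python
# Hypothetical sketch of a k-d-tree-indexed multiview model database for
# hypothesis generation. Feature vectors and model names are made up.
import numpy as np
from scipy.spatial import cKDTree

entries = []   # one shape-feature vector per stored view of each model
labels = []    # the (model identity, view index) hypothesis for that entry
for name, base in (("bracket", 0.0), ("housing", 10.0)):
    for view in range(8):
        # Illustrative 3-d feature vector for this model/view pair.
        entries.append([base + view, base - view, base + 0.5 * view])
        labels.append((name, view))

tree = cKDTree(np.array(entries))

# At recognition time, a feature vector measured from the image indexes the
# database associatively; the nearest stored entry yields an initial
# hypothesis about the object's identity and rough viewing parameters.
d, idx = tree.query([5.1, -4.9, 2.6])
print(labels[idx])  # -> ('bracket', 5)
```

Because the k-d tree answers nearest-neighbor queries in roughly logarithmic time in the number of stored views, the database stays practical even when many models, each under many viewpoints, are indexed together.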
dc.format.extent | 344 p. | |
dc.language | English | |
dc.language.iso | EN | |
dc.subject | Attitude | |
dc.subject | Dimensional | |
dc.subject | Estimation | |
dc.subject | Images | |
dc.subject | Intensity | |
dc.subject | Machine | |
dc.subject | Objects | |
dc.subject | Recognition | |
dc.subject | Three | |
dc.title | Machine recognition and attitude estimation of three-dimensional objects in intensity images. | |
dc.type | Thesis | |
dc.description.thesisdegreename | PhD | en_US |
dc.description.thesisdegreediscipline | Applied Sciences | |
dc.description.thesisdegreediscipline | Computer science | |
dc.description.thesisdegreegrantor | University of Michigan, Horace H. Rackham School of Graduate Studies | |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/128567/2/9034427.pdf | |
dc.owningcollname | Dissertations and Theses (Ph.D. and Master's) |