
3D Holographic Freespace Control Unit (HFCU): Design, Implementation, and Analysis

dc.contributor.authorFerguson, Holly Tina
dc.contributor.advisorTurner, Stephen
dc.date.accessioned2016-05-09T16:30:17Z
dc.date.available2016-05-09T16:30:17Z
dc.date.issued2013-04-24
dc.identifier.urihttps://hdl.handle.net/2027.42/118034
dc.description.abstractRecent developments in gesture recognition (Kinect and Wii, for example) and holographic displays (Microsoft's Vermeer, for example) have created the environments that make interactive, touch-free technologies possible. Rather than pursuing the substantial yet common line of research that tailors gesture-recognition environments to image manipulation or resizing, this project combines gesture recognition with holographic media to control an OS GUI/API, removing the need for keyboards, computer mice, etc.

The elements of this project include the code and built hardware necessary to create a multi-faceted, 3D, interactive display system that provides convenient control of a Windows interface/GUI/API without physical contact with the hardware; it does so through a visually bounded freespace, both with and without holographic assistance. The work has two major components: the 3D Holographic Freespace Control Unit (HFCU), used for typing characters, and the Gesture-Controlled 3D Interface Freespace (GCIF), used for other functionality. Each component is developed in isolation, and the two are then integrated for the final result.

The first component, the HFCU, is created by first developing code that uses one sensor to detect the linear distance to a given object. Once this program takes in data and brings up the correct associated image, the rest of the hardware for a controller device to hold the imagery is built. The device consists of an array of linearly adjoining slices of concave mirrored surfaces, with openings at the top and bottom for the manipulation of virtual imagery.
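The HFCU selection step described above (one sensor reading mapped to a character image) can be sketched roughly as follows. This is a hypothetical illustration only: the zone boundaries, character subset, and function names are assumptions for the sketch, not values or code from the thesis.

```python
# Hypothetical sketch of the HFCU selection logic: a single distance
# reading is mapped to one of several character "zones" along the
# bounded freespace, and the associated character/image is looked up.
# ZONE_WIDTH_CM and CHARACTERS are illustrative assumptions.
from typing import Optional

ZONE_WIDTH_CM = 3.0          # assumed width of each selection zone
CHARACTERS = "ABCDEFGHIJ"    # assumed subset of the alphabet

def character_for_distance(distance_cm: float) -> Optional[str]:
    """Return the character whose zone contains the measured distance,
    or None if the reading falls outside the bounded freespace."""
    if distance_cm < 0:
        return None
    zone = int(distance_cm // ZONE_WIDTH_CM)
    if zone >= len(CHARACTERS):
        return None
    return CHARACTERS[zone]
```

In the real system, the returned character would drive both the image shown in the mirrored display and the keystroke sent to the Windows API.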
The original code is then altered to react to a set of swapping arrays before it is combined with and/or connected to the API/GUI data of a basic Windows system. At this point, the second component, the GCIF, is created. Connections are established between the computer screen and human movements (taking advantage of existing gesture-recognition algorithms), and these are used to manipulate the same API/GUI. This is built with a Kinect sensor by first programming one control/gesture, or using an existing algorithm from a Kinect library, and connecting it to general Windows API functionality. The previously independent control device used for typing characters is then incorporated, requiring the two types of interaction to work together. Once this works with one screen, the setup can be applied and tested with multiple screens, creating a visually defined, user-occupiable space. Currently (2012-2013), the experimental functionality includes a basic selection of the alphabet for typing with the holographic control unit, plus mouse-click selection and window resizing and/or movement in the 3D gesture freespace.

The results of this project are engaging yet still developmental, mostly due to the limitations of available hardware. However, this diminishes neither the merit of the research nor what the implementation methods allow at this time. This type of work may also benefit a number of other professions by providing new methods of construction, navigation, etc.; for a more complete summary of potential uses, see the List of Contributions in section 9.0. The project addresses certain physical strains and limitations that traditional laptops and desktops impose on users, such as rigid hand angles for typing, and the prototype demonstrates how it could become part of a larger system run and controlled entirely without physical contact with the user.

Future work may include additional API/GUI functionality, different gesture recognitions, or alternative physical spatial arrangements. Voice recognition could also be useful if the system were developed to include that ability.
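The GCIF layer described in the abstract (tracked hand movement classified as a gesture, then mapped to a Windows GUI action such as window movement) could be sketched as below. The thresholds, gesture names, and action mapping are illustrative assumptions, not the thesis implementation; in the real system the actions would call into the Windows API via the Kinect SDK's skeleton stream.

```python
# Hypothetical sketch of the GCIF gesture layer: a short history of
# tracked hand x-positions (as a Kinect skeleton stream would supply)
# is classified as a swipe, and the swipe is mapped to a window action.
# SWIPE_THRESHOLD_M and the action names are illustrative assumptions.

SWIPE_THRESHOLD_M = 0.25   # assumed minimum horizontal hand travel (metres)

def classify_swipe(x_positions):
    """Classify a sequence of hand x-coordinates (metres) as a swipe,
    or return None if the hand did not travel far enough."""
    if len(x_positions) < 2:
        return None
    travel = x_positions[-1] - x_positions[0]
    if travel > SWIPE_THRESHOLD_M:
        return "swipe_right"
    if travel < -SWIPE_THRESHOLD_M:
        return "swipe_left"
    return None

# Assumed mapping from recognized gestures to the window-manipulation
# actions mentioned in the abstract.
GESTURE_ACTIONS = {
    "swipe_right": "move_window_right",
    "swipe_left": "move_window_left",
}

def action_for(x_positions):
    """Return the GUI action for a hand trajectory, or None."""
    return GESTURE_ACTIONS.get(classify_swipe(x_positions))
```

A dispatcher of this shape makes it straightforward to add the other functionalities the abstract mentions (mouse-click selection, resizing) as further gesture/action pairs.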
dc.subject3D Holographic Freespace Control Unit
dc.subjectgesture recognition
dc.subjectholographic display
dc.subjectGesture-controlled 3D Interface Freespace
dc.title3D Holographic Freespace Control Unit (HFCU): Design, Implementation, and Analysis
dc.typeThesis
dc.description.thesisdegreenameMaster's
dc.description.thesisdegreedisciplineCollege of Arts and Sciences: Computer Science
dc.description.thesisdegreegrantorUniversity of Michigan
dc.contributor.committeememberTurner, Stephen
dc.contributor.committeememberFarmer, Michael
dc.contributor.affiliationumcampusFlint
dc.identifier.uniqnamehollyfe
dc.description.bitstreamurlhttp://deepblue.lib.umich.edu/bitstream/2027.42/118034/1/Ferguson.pdf
dc.owningcollnameDissertations and Theses (Ph.D. and Master's)

