
Computer Architectures for Mobile Computer Vision Systems.

dc.contributor.author: Clemons, Jason Lavar (en_US)
dc.date.accessioned: 2013-06-12T14:14:55Z
dc.date.available: NO_RESTRICTION (en_US)
dc.date.available: 2013-06-12T14:14:55Z
dc.date.issued: 2013 (en_US)
dc.date.submitted: 2013 (en_US)
dc.identifier.uri: https://hdl.handle.net/2027.42/97782
dc.description.abstract: Mobile vision is enabling many new applications, such as face recognition and augmented reality. However, the performance of mobile processors is limiting the capability of mobile vision computing. This dissertation presents an in-depth analysis of mobile computer vision applications and proposes novel hardware and software optimizations with the goal of increasing mobile computer vision processing capability. We present the Michigan Visual Sonification System, a new mobile vision application that provides navigational aid to the visually impaired. The development of this application gives insights into the nature of mobile vision applications, including the tradeoffs between performance and energy on mobile processors. We then present MEVBench, a mobile vision benchmark suite that we built to determine the computational characteristics of various mobile vision kernels. This analysis exposes the vector reduction operations, the imbalanced task- or thread-level parallelism, and the 2D spatial locality in memory accesses, all of which we exploit in the pursuit of highly efficient mobile vision architectures. Armed with a deeper understanding of computer vision processing, the core of this thesis focuses on software- and hardware-based optimizations to improve the efficiency of mobile vision processing. We begin with a software optimization, the Single Eigenvector Solver (SEVS), an algorithm that reduces the computation required for augmented reality applications. We begin the hardware optimizations with EFFEX, a heterogeneous multicore architecture that utilizes vector reduction functional units and a 2D memory controller to improve the efficiency of feature extraction. We close with the Efficient Vision Architecture (EVA). EVA expands the EFFEX architecture by adding more custom accelerators for vision operations beyond feature extraction. It also utilizes a tile cache to allow for both 1D and 2D spatial locality in cache accesses. Overall, this dissertation demonstrates that an application-specific approach to processor design can create a flexible, programmable design with significant efficiency improvements in mobile vision performance when compared to currently available mobile processors. These works enable the development of richer, more capable mobile vision systems. (en_US)
dc.language.iso: en_US (en_US)
dc.subject: Mobile (en_US)
dc.subject: Computer Architecture (en_US)
dc.subject: Computer Vision (en_US)
dc.title: Computer Architectures for Mobile Computer Vision Systems. (en_US)
dc.type: Thesis (en_US)
dc.description.thesisdegreename: PhD (en_US)
dc.description.thesisdegreediscipline: Computer Science & Engineering (en_US)
dc.description.thesisdegreegrantor: University of Michigan, Horace H. Rackham School of Graduate Studies (en_US)
dc.contributor.committeemember: Austin, Todd M. (en_US)
dc.contributor.committeemember: Savarese, Silvio (en_US)
dc.contributor.committeemember: Wenisch, Thomas F. (en_US)
dc.contributor.committeemember: Mahlke, Scott (en_US)
dc.subject.hlbsecondlevel: Computer Science (en_US)
dc.subject.hlbtoplevel: Engineering (en_US)
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/97782/1/jclemons_1.pdf
dc.owningcollname: Dissertations and Theses (Ph.D. and Master's)

