Domain-Specific Computing Architectures and Paradigms
dc.contributor.author | Lee, Ching-En | |
dc.date.accessioned | 2020-10-04T23:20:42Z | |
dc.date.available | NO_RESTRICTION | |
dc.date.available | 2020-10-04T23:20:42Z | |
dc.date.issued | 2020 | |
dc.identifier.uri | https://hdl.handle.net/2027.42/162870 | |
dc.description.abstract | We live in an exciting era where artificial intelligence (AI) is fundamentally shifting the dynamics of industries and businesses around the world. AI algorithms such as deep learning (DL) have drastically advanced state-of-the-art cognition and learning capabilities. However, the power of modern AI algorithms can only be realized if the underlying domain-specific computing hardware delivers orders of magnitude more performance and energy efficiency. This work focuses on that goal and explores three parts of the domain-specific computing acceleration problem, encompassing specialized hardware and software architectures and paradigms that support the ever-growing processing demand of modern AI applications from the edge to the cloud. The first part of this work investigates the optimization of a sparse spatio-temporal (ST) cognitive system-on-a-chip (SoC). This design extracts ST features from videos and leverages sparse inference and kernel compression to efficiently perform action classification and motion tracking. The second part of this work explores the significance of dataflows and reduction mechanisms for sparse deep neural network (DNN) acceleration. This design features a dynamic, look-ahead index matching unit in hardware to efficiently discover fine-grained parallelism, achieving high energy efficiency and low control complexity for a wide variety of DNN layers. Lastly, this work expands the scope to real-time machine learning (RTML) acceleration. A new high-level architecture modeling framework is proposed. Specifically, this framework consists of a set of high-performance RTML-specific architecture design templates, and a Python-based high-level modeling and compiler toolchain for efficient cross-stack architecture design and exploration. | |
dc.language.iso | en_US | |
dc.subject | AI, domain-specific computing, hardware acceleration, integrated circuit design, processor architecture, software architecture | |
dc.title | Domain-Specific Computing Architectures and Paradigms | |
dc.type | Thesis | |
dc.description.thesisdegreename | PhD | en_US |
dc.description.thesisdegreediscipline | Electrical and Computer Engineering | |
dc.description.thesisdegreegrantor | University of Michigan, Horace H. Rackham School of Graduate Studies | |
dc.contributor.committeemember | Zhang, Zhengya | |
dc.contributor.committeemember | Das, Reetuparna | |
dc.contributor.committeemember | Flynn, Michael | |
dc.contributor.committeemember | Lu, Wei | |
dc.subject.hlbsecondlevel | Electrical Engineering | |
dc.subject.hlbtoplevel | Engineering | |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/162870/1/lchingen_1.pdf | en |
dc.identifier.orcid | 0000-0002-5130-8166 | |
dc.identifier.name-orcid | Lee, Ching-En; 0000-0002-5130-8166 | en_US |
dc.owningcollname | Dissertations and Theses (Ph.D. and Master's) |