Enhancing the instruction fetching mechanism using data compression.

dc.contributor.author: Chen, I-Cheng
dc.contributor.advisor: Mudge, Trevor
dc.date.accessioned: 2016-08-30T17:31:47Z
dc.date.available: 2016-08-30T17:31:47Z
dc.date.issued: 1997
dc.identifier.uri: http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqm&rft_dat=xri:pqdiss:9811050
dc.identifier.uri: https://hdl.handle.net/2027.42/130679
dc.description.abstract: The continually increasing speed of microprocessors stresses the need for an ever faster instruction fetch rate. To sustain the full speed of a microprocessor, the instruction fetch rate must increase proportionally to supply the instructions required. This dissertation applies data compression to improve several issues critical to fast instruction fetching: accurate branch prediction, high instruction cache hit rates, and high transfer bandwidth.

First, we show how data compression relates to branch prediction and can be applied to improve prediction accuracy. We establish a theoretical basis for current branch predictors by showing that two-level adaptive branch predictors are simplifications of an optimal predictor from data compression, Prediction by Partial Matching (PPM). Because PPM is optimal, significant (asymptotic) improvements over two-level predictors are unlikely as long as the information provided to the predictor remains the same.

We also apply data compression to improve instruction cache hit rates and to increase instruction fetch bandwidth. We introduce a technique that reduces the instruction stream by compressing frequently encountered instruction sequences into single-byte opcodes. Averaged across the SPEC95 benchmarks, the bytes needed from the level-1 cache are reduced by 50-70% and the bus cycles needed to fetch instructions from the level-1 cache are reduced by 35-65%. In addition, a compression-enhanced cache has a lower miss rate than a plain cache twice its size.

Finally, to further increase cache hit rates, we propose a prefetching scheme that employs a branch predictor to run ahead of the execution unit and prefetch potentially useful instructions. Branch prediction-based (BP-based) prefetching has its own small fetch unit, allowing it to compute and predict branch targets autonomously. Our simulations show that a 4-issue machine with BP-based prefetching achieves higher performance than one with a plain cache four times the size. BP-based prefetching also outperforms other hardware instruction prefetching schemes, such as next-n-line prefetching and wrong-path prefetching, reducing stall overhead by 17-44%, while its bus traffic is very close to that of the best next-n-line prefetching.
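The abstract's central claim, that a two-level adaptive branch predictor is a simplification of PPM, can be made concrete with a short sketch. The following C program is not taken from the dissertation; the history length and table size are illustrative assumptions. A global history register supplies the context, as a fixed-order PPM model would, and a table of 2-bit saturating counters stands in for PPM's per-context frequency counts.

/* Minimal two-level adaptive branch predictor: a global history
 * register (first level) indexes a pattern history table of 2-bit
 * saturating counters (second level). This is the fixed-order
 * analogue of a single PPM context model: the history is the
 * context, the counter approximates the symbol frequency count.
 * All sizes are illustrative, not taken from the dissertation. */
#include <stdio.h>
#include <stdint.h>

#define GHR_BITS 8                   /* history length (PPM order) */
#define PHT_SIZE (1u << GHR_BITS)    /* one counter per context    */

static uint32_t ghr;                 /* global history register    */
static uint8_t  pht[PHT_SIZE];       /* 2-bit saturating counters  */

/* Predict taken if the counter for the current context is >= 2. */
static int predict(void) {
    return pht[ghr & (PHT_SIZE - 1)] >= 2;
}

/* Update the counter toward the outcome, then shift the outcome
 * into the history, as PPM would update its context statistics. */
static void update(int taken) {
    uint8_t *c = &pht[ghr & (PHT_SIZE - 1)];
    if (taken  && *c < 3) (*c)++;
    if (!taken && *c > 0) (*c)--;
    ghr = (ghr << 1) | (uint32_t)taken;
}

int main(void) {
    /* Toy trace: a loop branch taken 3 times, then not taken. */
    const int trace[] = {1,1,1,0, 1,1,1,0, 1,1,1,0, 1,1,1,0};
    int n = (int)(sizeof trace / sizeof trace[0]), hits = 0;
    for (int i = 0; i < n; i++) {
        hits += (predict() == trace[i]);
        update(trace[i]);
    }
    printf("predicted %d of %d branches correctly\n", hits, n);
    return 0;
}

The second technique, compressing frequently encountered instruction sequences into single-byte opcodes, can likewise be sketched as a small dictionary expander on the fetch path. The dictionary contents and escape-opcode range below are hypothetical; the dissertation derives its dictionary from profiled instruction sequences, which this sketch does not reproduce.

/* Sketch of dictionary-based instruction-stream compression: the
 * most frequent instruction sequences are assigned single-byte
 * "escape" opcodes, and the fetch path expands them from a small
 * table. Dictionary contents and opcode range are hypothetical. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define ESC_BASE 0xF0  /* opcodes 0xF0.. reserved for dictionary */

/* Hypothetical dictionary of frequent instruction byte sequences. */
static const uint8_t dict[2][4] = {
    {0x55, 0x89, 0xE5, 0x00},   /* e.g. a common function prologue */
    {0x5D, 0xC3, 0x00, 0x00},   /* e.g. a common function epilogue */
};
static const int dict_len[2] = {3, 2};

/* Expand one fetched byte: dictionary opcodes emit the stored
 * sequence, ordinary bytes pass through unchanged. Returns the
 * number of bytes written to out. */
static int expand(uint8_t b, uint8_t *out) {
    if (b >= ESC_BASE && b < ESC_BASE + 2) {
        int i = b - ESC_BASE;
        memcpy(out, dict[i], (size_t)dict_len[i]);
        return dict_len[i];
    }
    out[0] = b;
    return 1;
}

int main(void) {
    /* Compressed stream: one dictionary byte stands for a sequence. */
    const uint8_t compressed[] = {ESC_BASE + 0, 0x90, ESC_BASE + 1};
    uint8_t decoded[16];
    int n = 0;
    for (size_t i = 0; i < sizeof compressed; i++)
        n += expand(compressed[i], decoded + n);
    printf("fetched %zu bytes, decoded %d bytes\n",
           sizeof compressed, n);   /* 3 bytes in, 6 bytes out */
    return 0;
}

In this toy stream, 3 fetched bytes expand to 6 decoded bytes, which is the mechanism behind the abstract's 50-70% reduction in bytes needed from the level-1 cache.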
dc.format.extent: 125 p.
dc.language: English
dc.language.iso: EN
dc.subject: Bandwidth
dc.subject: Branch Prediction
dc.subject: Compression
dc.subject: Data
dc.subject: Enhancing
dc.subject: Fetching
dc.subject: Hit Rates
dc.subject: Instruction
dc.subject: Mechanism
dc.subject: Using
dc.title: Enhancing the instruction fetching mechanism using data compression.
dc.type: Thesis
dc.description.thesisdegreename: PhD
dc.description.thesisdegreediscipline: Applied Sciences
dc.description.thesisdegreediscipline: Computer science
dc.description.thesisdegreediscipline: Electrical engineering
dc.description.thesisdegreegrantor: University of Michigan, Horace H. Rackham School of Graduate Studies
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/130679/2/9811050.pdf
dc.owningcollname: Dissertations and Theses (Ph.D. and Master's)


