Modulo scheduling with cache reuse information
Document Type
Conference Proceeding
Publication Date
1997
Department
Department of Computer Science
Abstract
Software pipelining for instruction-level parallel computers with non-blocking caches usually assigns memory-access latency by assuming either that all accesses are cache hits or that all are cache misses. We contend that setting memory latencies via cache-reuse analysis leads to better software pipelining than either an all-hit or an all-miss assumption. Using a simple cache-reuse model, our software pipelining optimization achieved 10% better execution performance than the all-cache-hit assumption and used 18% fewer registers than the all-cache-miss assumption requires. We conclude that software pipelining for architectures with non-blocking caches should incorporate a memory-reuse model.
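The core idea in the abstract can be illustrated with a minimal sketch (not the authors' implementation): instead of a uniform all-hit or all-miss latency, each memory reference gets a latency from a reuse classification. The latency values and the reuse labels below are illustrative assumptions.

```python
# Illustrative latencies, in cycles (assumed values, not from the paper).
HIT_LATENCY = 1
MISS_LATENCY = 20

def assign_latencies(refs):
    """refs: list of (name, predicted_hit) pairs, where predicted_hit is
    True when reuse analysis expects the access to hit in the cache.
    Returns a per-reference latency map for the modulo scheduler."""
    return {name: (HIT_LATENCY if predicted_hit else MISS_LATENCY)
            for name, predicted_hit in refs}

# A loop body with one reference that reuses a recently fetched cache
# line and one that streams through memory with no reuse:
latencies = assign_latencies([("a[i]", True), ("b[i]", False)])
```

Compared with an all-miss assumption, scheduling `a[i]` with the shorter hit latency shortens its live range and so lowers register pressure; compared with an all-hit assumption, scheduling `b[i]` with the longer miss latency hides the stall, which is consistent with the trade-off the abstract reports.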
Publication Title
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Recommended Citation
Ding, C., Carr, S., & Sweany, P. H. (1997). Modulo scheduling with cache reuse information. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 1300 LNCS, 1079-1083. http://doi.org/10.1007/bfb0002856
Retrieved from: https://digitalcommons.mtu.edu/michigantech-p/4617