TY - GEN
T1 - Morphable cache architectures
T2 - 2001 ACM SIGPLAN Workshop on Optimization of Middleware and Distributed Systems, OM 2001
AU - Kadayif, I.
AU - Kandemir, M.
AU - Vijaykrishnan, N.
AU - Irwin, M. J.
AU - Ramanujam, J.
N1 - Publisher Copyright:
© ACM 2001.
PY - 2001/8/1
Y1 - 2001/8/1
N2 - Computer architects have tried to mitigate the consequences of high memory latencies using a variety of techniques. An example of these techniques is multi-level caches to counteract the latency that results from having a memory that is slower than the processor. Recent research has demonstrated that compiler optimizations that modify data layouts and restructure computation can be successful in improving memory system performance. However, in many cases, working with a fixed cache configuration prevents the application/compiler from obtaining the maximum performance. In addition, prompted by demands in portability, long battery life, and low-cost packaging, the computer industry has started viewing energy and power as decisive design factors, along with performance and cost. This makes the job of the compiler/user even more difficult as one needs to strike a balance between low power/energy consumption and high performance. Consequently, adapting the code to the underlying cache/memory hierarchy is becoming more and more difficult. In this paper, we take an alternate approach and attempt to adapt the cache architecture to the software needs. We focus on array-dominated applications and measure the potential benefits that could be gained from a morphable (reconfigurable) cache architecture. Our results show that not only do different applications work best with different cache configurations, but also that different loop nests in a given application demand different configurations. Our results also indicate that the most suitable cache configuration for a given application or a single nest depends strongly on the objective function being optimized. For example, minimizing cache memory energy requires a different cache configuration for each nest than an objective which tries to minimize the overall memory system energy. Based on our experiments, we conclude that fine-grain (loop nest-level) cache configuration management is an important step for a solution to the challenging architecture/software tradeoffs awaiting system designers in the future.
AB - Computer architects have tried to mitigate the consequences of high memory latencies using a variety of techniques. An example of these techniques is multi-level caches to counteract the latency that results from having a memory that is slower than the processor. Recent research has demonstrated that compiler optimizations that modify data layouts and restructure computation can be successful in improving memory system performance. However, in many cases, working with a fixed cache configuration prevents the application/compiler from obtaining the maximum performance. In addition, prompted by demands in portability, long battery life, and low-cost packaging, the computer industry has started viewing energy and power as decisive design factors, along with performance and cost. This makes the job of the compiler/user even more difficult as one needs to strike a balance between low power/energy consumption and high performance. Consequently, adapting the code to the underlying cache/memory hierarchy is becoming more and more difficult. In this paper, we take an alternate approach and attempt to adapt the cache architecture to the software needs. We focus on array-dominated applications and measure the potential benefits that could be gained from a morphable (reconfigurable) cache architecture. Our results show that not only do different applications work best with different cache configurations, but also that different loop nests in a given application demand different configurations. Our results also indicate that the most suitable cache configuration for a given application or a single nest depends strongly on the objective function being optimized. For example, minimizing cache memory energy requires a different cache configuration for each nest than an objective which tries to minimize the overall memory system energy. Based on our experiments, we conclude that fine-grain (loop nest-level) cache configuration management is an important step for a solution to the challenging architecture/software tradeoffs awaiting system designers in the future.
UR - http://www.scopus.com/inward/record.url?scp=85053418229&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85053418229&partnerID=8YFLogxK
U2 - 10.1145/384198.384215
DO - 10.1145/384198.384215
M3 - Conference contribution
AN - SCOPUS:85053418229
SN - 1581134266
SN - 9781581134261
T3 - Proceedings of the 2001 ACM SIGPLAN Workshop on Optimization of Middleware and Distributed Systems, OM 2001
SP - 128
EP - 137
BT - Proceedings of the 2001 ACM SIGPLAN Workshop on Optimization of Middleware and Distributed Systems, OM 2001
PB - Association for Computing Machinery, Inc
Y2 - 18 June 2001
ER -