Cache memories are known to consume a large fraction of on-chip energy in current microprocessors. Direct-mapped caches are, in general, more energy efficient than set-associative caches because they are simpler and require no complex line-replacement mechanisms. This study goes beyond performance-centric techniques and proposes an energy-oriented optimization strategy that aims directly at reducing the per-access energy cost of direct-mapped data caches (rather than reducing it as a side effect of a performance-oriented optimization). Specifically, we have developed a compiler algorithm that uses access-pattern analysis to determine the memory references that are certain to hit in a virtually-addressed direct-mapped data cache. After detecting such references, the compiler replaces the corresponding load operations with energy-efficient loads that access only the data array of the cache instead of both the tag and data arrays. This tag-access elimination, in turn, reduces the per-access energy consumption of data accesses.
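The idea behind guaranteed-hit detection can be illustrated with a minimal sketch (this is not the paper's actual compiler algorithm): in a sequential array traversal, every load that falls in the same cache line as the immediately preceding load is certain to hit in a direct-mapped cache, so its tag check could be elided. The function name and all cache parameters below are assumptions chosen for illustration.

```python
LINE_SIZE = 32   # cache line size in bytes (assumed)
ELEM_SIZE = 4    # element size in bytes (assumed)
NUM_ELEMS = 16

def classify_accesses(base_addr, num_elems, elem_size, line_size):
    """Mark each load in a sequential traversal as a guaranteed hit
    ("data-array-only") when it touches the same cache line as the
    immediately preceding load; otherwise it needs a full tag check."""
    kinds = []
    prev_line = None
    for i in range(num_elems):
        line = (base_addr + i * elem_size) // line_size
        kinds.append("data-array-only" if line == prev_line else "full")
        prev_line = line
    return kinds

kinds = classify_accesses(base_addr=0, num_elems=NUM_ELEMS,
                          elem_size=ELEM_SIZE, line_size=LINE_SIZE)
# With 32-byte lines and 4-byte elements, 7 of every 8 sequential loads
# reuse the previous line and can skip the tag array.
print(kinds.count("data-array-only"), "of", NUM_ELEMS, "loads skip the tag array")
```

Under these assumed parameters, 14 of the 16 loads would access only the data array, which conveys why sequential access patterns are attractive targets for this optimization.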
Original language: English (US)
Number of pages: 1
Journal: Proceedings - Design, Automation and Test in Europe, DATE
State: Published - Dec 1 2002
Event: 2002 Design, Automation and Test in Europe Conference and Exhibition, DATE 2002 - Paris, France
Duration: Mar 4 2002 → Mar 8 2002