Courteous cache sharing: Being nice to others in capacity management

Akbar Sharifi, Shekhar Srikantaiah, Mahmut Kandemir, Mary Jane Irwin

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

12 Scopus citations

Abstract

This paper proposes a cache management scheme for multiprogrammed, multithreaded applications, with the objective of maximizing performance for both the individual applications and the workload mix as a whole. In this scheme, each individual application's performance is improved by increasing the priority of its slowest thread, while overall system performance is preserved by ensuring that one application's benefit does not come at the cost of significant degradation to other applications' threads sharing the same cache. Averaged over six workloads, the shared cache management scheme improves the performance of the combined applications by 18%. These improvements are also distributed fairly across the applications in each mix, as indicated by an average fair speedup improvement of 10% across the threads of each application (averaged over all workloads).
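The abstract describes the policy only at a high level: each application's slowest thread is given higher priority for shared cache capacity, but only when that gain does not significantly hurt co-running threads. The following Python sketch illustrates that idea under stated assumptions; the Thread fields, the way-based cache partitioning, and the 5% degradation bound are illustrative assumptions, not details taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical model of a shared last-level cache split into ways.
# All names and thresholds below are illustrative assumptions.
TOTAL_WAYS = 16
MAX_DEGRADATION = 0.05  # assumed cap on harm to a co-running thread


@dataclass
class Thread:
    app: str
    ways: int
    ipc: float                # measured instructions per cycle
    ipc_loss_per_way: float   # estimated IPC lost if one way is taken away


def courteous_rebalance(threads):
    """Give one extra cache way to each application's slowest thread,
    taking it from a co-runner only if that co-runner's estimated
    slowdown stays under MAX_DEGRADATION."""
    for app in {t.app for t in threads}:
        mine = [t for t in threads if t.app == app]
        slowest = min(mine, key=lambda t: t.ipc)
        donors = [t for t in threads if t.app != app and t.ways > 1]
        # Prefer the donor that would be hurt least by losing one way.
        donors.sort(key=lambda t: t.ipc_loss_per_way / t.ipc)
        for donor in donors:
            if donor.ipc_loss_per_way / donor.ipc <= MAX_DEGRADATION:
                donor.ways -= 1
                slowest.ways += 1
                break  # be courteous: move at most one way per step
    return threads


if __name__ == "__main__":
    workload = [
        Thread("A", 4, 0.8, 0.02),
        Thread("A", 4, 1.2, 0.03),
        Thread("B", 4, 0.9, 0.10),
        Thread("B", 4, 1.0, 0.01),
    ]
    for t in courteous_rebalance(workload):
        print(t)
```

In this sketch the rebalancing step favors donors whose relative slowdown per lost way is smallest, which is one plausible way to keep per-application gains from translating into large losses elsewhere; the paper itself should be consulted for the actual allocation mechanism and evaluation details.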

Original language: English (US)
Title of host publication: Proceedings of the 49th Annual Design Automation Conference, DAC '12
Pages: 678-687
Number of pages: 10
DOIs
State: Published - 2012
Event: 49th Annual Design Automation Conference, DAC '12 - San Francisco, CA, United States
Duration: Jun 3, 2012 - Jun 7, 2012

Publication series

Name: Proceedings - Design Automation Conference
ISSN (Print): 0738-100X

Other

Other: 49th Annual Design Automation Conference, DAC '12
Country/Territory: United States
City: San Francisco, CA
Period: 6/3/12 - 6/7/12

All Science Journal Classification (ASJC) codes

  • Computer Science Applications
  • Control and Systems Engineering
  • Electrical and Electronic Engineering
  • Modeling and Simulation

