Selective caching: Avoiding performance valleys in massively parallel architectures

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Emerging general purpose graphics processing units (GPGPUs) use a memory hierarchy very similar to that of modern multi-core processors - they typically have multiple levels of on-chip caches and a DDR-like off-chip main memory. In such massively parallel architectures, caches are expected to reduce the average data access latency by reducing the number of off-chip memory accesses; however, our extensive experimental studies confirm that not all applications utilize the on-chip caches efficiently. Even though GPGPUs are adopted to run a wide range of general purpose applications, conventional cache management policies are incapable of achieving optimal performance across the differing memory characteristics of these applications. This paper first investigates the underlying reasons for the inefficiency of common cache management policies in GPGPUs. To address and resolve those issues, we then propose (i) a characterization mechanism to analyze each kernel at runtime, and (ii) a selective caching policy to manage the flow of cache accesses. Evaluation results on the studied platform show that our proposed dynamically reconfigurable cache hierarchy improves system performance by up to 105% (27% on average) over a wide range of modern GPGPU applications, which is within 10% of the optimal improvement.
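The core idea the abstract describes - characterize each kernel's memory behavior at runtime, then selectively cache or bypass its accesses - can be illustrated with a minimal sketch. Everything below (the LRU model, the sampling function, the hit-rate threshold) is an illustrative assumption, not the paper's actual mechanism:

```python
# Hypothetical sketch of selective caching: profile a kernel's access
# stream over a short sampling window, then decide whether its accesses
# should use the cache or bypass it. Thresholds and cache geometry are
# illustrative assumptions, not values from the paper.
from collections import OrderedDict

def sample_hit_rate(addresses, cache_lines=64, line_size=128):
    """Estimate the hit rate of a small LRU cache over a sampled access stream."""
    cache = OrderedDict()  # cache-line tag -> None, kept in LRU order
    hits = 0
    for addr in addresses:
        tag = addr // line_size
        if tag in cache:
            hits += 1
            cache.move_to_end(tag)         # refresh LRU position on a hit
        else:
            cache[tag] = None
            if len(cache) > cache_lines:
                cache.popitem(last=False)  # evict the least recently used line
    return hits / len(addresses) if addresses else 0.0

def should_cache(addresses, threshold=0.2):
    """Cache the kernel's accesses only if sampled locality justifies it."""
    return sample_hit_rate(addresses) >= threshold

# A streaming kernel (every access touches a new line) shows no reuse and
# would bypass the cache; a reuse-heavy kernel would keep using it.
streaming = list(range(0, 128 * 1024, 128))
reuse_heavy = [0, 128, 0, 128, 256, 0, 128] * 100
```

Under this toy model, `should_cache(streaming)` is false while `should_cache(reuse_heavy)` is true, capturing why a one-size-fits-all caching policy leaves performance on the table for streaming workloads.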

Original language: English (US)
Title of host publication: Proceedings - 2020 28th Euromicro International Conference on Parallel, Distributed and Network-Based Processing, PDP 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 290-298
Number of pages: 9
ISBN (Electronic): 9781728165820
DOIs: https://doi.org/10.1109/PDP50117.2020.00051
State: Published - Mar 2020
Event: 28th Euromicro International Conference on Parallel, Distributed and Network-Based Processing, PDP 2020 - Vasteras, Sweden
Duration: Mar 11, 2020 - Mar 13, 2020

Publication series

Name: Proceedings - 2020 28th Euromicro International Conference on Parallel, Distributed and Network-Based Processing, PDP 2020

Conference

Conference: 28th Euromicro International Conference on Parallel, Distributed and Network-Based Processing, PDP 2020
Country: Sweden
City: Vasteras
Period: 3/11/20 - 3/13/20

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Computer Networks and Communications
  • Information Systems and Management
  • Computational Mathematics
  • Control and Optimization
  • Health Informatics


  • Cite this

Jadidi, A., Kandemir, M. T., & Das, C. R. (2020). Selective caching: Avoiding performance valleys in massively parallel architectures. In Proceedings - 2020 28th Euromicro International Conference on Parallel, Distributed and Network-Based Processing, PDP 2020 (pp. 290-298). [9092211] (Proceedings - 2020 28th Euromicro International Conference on Parallel, Distributed and Network-Based Processing, PDP 2020). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/PDP50117.2020.00051