Compiler Algorithms for Optimizing Locality and Parallelism on Shared and Distributed-Memory Machines

M. Kandemir, J. Ramanujam, A. Choudhary

Research output: Contribution to journal › Article

6 Scopus citations

Abstract

Distributed-memory message-passing machines deliver scalable performance but are difficult to program. Shared-memory machines, on the other hand, are easier to program, but obtaining scalable performance with a large number of processors is difficult. Recently, scalable machines based on logically shared, physically distributed memory have been designed and implemented. While some performance issues, such as parallelism and locality, are common to different parallel architectures, issues such as data distribution are unique to specific architectures. One of the most important challenges compiler writers face is the design of compilation techniques that can work well on a variety of architectures. In this paper, we propose an algorithm that can be employed by optimizing compilers for different types of parallel architectures. Our optimization algorithm does the following: (1) transforms loop nests such that, where possible, the iterations of the outermost loops can be run in parallel across processors; (2) optimizes memory locality by carefully distributing each array across processors; (3) optimizes interprocessor communication using message vectorization whenever possible; and (4) optimizes cache locality by assigning an appropriate storage layout to each array and by transforming the iteration space. Depending on the machine architecture, some or all of these steps can be applied in a unified framework. We report empirical results on an SGI Origin 2000 distributed-shared-memory multiprocessor and an IBM SP-2 distributed-memory message-passing machine to validate the effectiveness of our approach.
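To make steps (1) and (4) of the abstract concrete, the sketch below is a minimal, hypothetical illustration (not code from the paper) of a loop-nest transformation of the kind such compilers perform: the original nest accesses a row-major C array with a large stride in the inner loop, while the interchanged nest walks contiguous memory, and its outer-loop iterations are independent, so they could be distributed across processors. The function names and the 64-element problem size are assumptions for the example.

```c
#include <assert.h>

#define N 64

/* Original nest: inner j loop touches a[j][i], a stride-N access
   pattern that is cache-unfriendly for a row-major C array. */
static void scale_original(double a[N][N], double s) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[j][i] *= s;
}

/* After loop interchange (a hypothetical compiler transformation):
   the inner i loop now traverses contiguous memory, and iterations
   of the outer j loop are independent of one another, so the outer
   loop is a candidate for parallel execution across processors. */
static void scale_interchanged(double a[N][N], double s) {
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            a[j][i] *= s;
}
```

Both versions compute the same result; the transformation changes only the traversal order, which is exactly what makes it legal for a perfectly nested loop with no cross-iteration dependences.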

Original language: English (US)
Pages (from-to): 924-965
Number of pages: 42
Journal: Journal of Parallel and Distributed Computing
Volume: 60
Issue number: 8
DOIs
State: Published - Aug 1 2000

All Science Journal Classification (ASJC) codes

  • Software
  • Theoretical Computer Science
  • Hardware and Architecture
  • Computer Networks and Communications
  • Artificial Intelligence

