Efficient processor management schemes for mesh-connected multicomputers

Byung S. Yoo, Chitaranjan Das

Research output: Contribution to journal › Article

3 Citations (Scopus)

Abstract

This paper investigates processor management techniques for improving the performance of mesh-connected multicomputers. Unlike almost all prior work, where the focus was on improving the submesh recognition ability of processor allocation algorithms, this research examines alternatives that improve system performance beyond what is achievable with the usually assumed first-come-first-served (FCFS) scheduling combined with any allocation algorithm. First, we use the smallest-job-first (SJF) policy to improve spatial parallelism in a mesh. Next, we introduce a generic processor management scheme called multitasking and multiprogramming (M2). Then, an M2 policy for mesh-connected multicomputers, called virtual mesh (VM), is proposed and analyzed. The VM scheme allows multiprogramming of jobs on several VMs. Finally, a novel approach called limit allocation is used for job allocation: a job's (submesh) size is reduced if the job cannot be allocated, the objective being to reduce job waiting time and hence improve overall performance. While all three approaches are viable ways to reduce the average job response time under various workloads, the VM and limit allocation techniques are especially attractive because they provide additional features. The VM scheme brings in time-sharing execution for better efficiency, and limit allocation shows how restricting job size can benefit both performance and fault tolerance in a mesh topology. Moreover, the limit allocation scheme, even with the simplest allocation policy, can outperform any other approach.
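The limit-allocation idea in the abstract can be illustrated with a short sketch: if a requested submesh cannot be placed in the mesh, shrink the request and retry instead of making the job wait. The first-fit submesh search and the halve-the-larger-dimension shrinking rule below are illustrative assumptions for this sketch, not the paper's exact algorithms.

```python
class Mesh:
    """A rows x cols mesh of processors; busy[r][c] marks an occupied node."""

    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.busy = [[False] * cols for _ in range(rows)]

    def _fits(self, r, c, h, w):
        return all(not self.busy[r + i][c + j]
                   for i in range(h) for j in range(w))

    def allocate(self, h, w):
        """First-fit search for a free h x w submesh; returns base (r, c) or None."""
        for r in range(self.rows - h + 1):
            for c in range(self.cols - w + 1):
                if self._fits(r, c, h, w):
                    for i in range(h):
                        for j in range(w):
                            self.busy[r + i][c + j] = True
                    return (r, c)
        return None


def limit_allocate(mesh, h, w, min_side=1):
    """Limit allocation (sketch): try the full request first; on failure,
    halve the larger dimension and retry until a minimum size is reached."""
    while True:
        base = mesh.allocate(h, w)
        if base is not None:
            return base, (h, w)  # allocated, possibly at reduced size
        if max(h, w) // 2 < min_side:
            return None, (h, w)  # cannot shrink further; job must wait
        if h >= w:
            h //= 2
        else:
            w //= 2
```

For example, on a 2 x 4 mesh where a 2 x 2 corner is already busy, a 2 x 4 request would be shrunk to 2 x 2 and placed in the remaining free columns rather than queued behind the running job.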

Original language: English (US)
Pages (from-to): 1057-1078
Number of pages: 22
Journal: Parallel Computing
Volume: 27
Issue number: 8
DOIs: 10.1016/S0167-8191(01)00078-3
State: Published - Jul 1 2001

All Science Journal Classification (ASJC) codes

  • Software
  • Theoretical Computer Science
  • Hardware and Architecture
  • Computer Networks and Communications
  • Computer Graphics and Computer-Aided Design
  • Artificial Intelligence

Cite this

@article{6ca40824702b472b96426f82b6f3389f,
title = "Efficient processor management schemes for mesh-connected multicomputers",
author = "Yoo, {Byung S.} and Chitaranjan Das",
year = "2001",
month = "7",
day = "1",
doi = "10.1016/S0167-8191(01)00078-3",
language = "English (US)",
volume = "27",
pages = "1057--1078",
journal = "Parallel Computing",
issn = "0167-8191",
publisher = "Elsevier",
number = "8",
}

TY - JOUR

T1 - Efficient processor management schemes for mesh-connected multicomputers

AU - Yoo, Byung S.

AU - Das, Chitaranjan

PY - 2001/7/1

Y1 - 2001/7/1

UR - http://www.scopus.com/inward/record.url?scp=0035400663&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0035400663&partnerID=8YFLogxK

U2 - 10.1016/S0167-8191(01)00078-3

DO - 10.1016/S0167-8191(01)00078-3

M3 - Article

VL - 27

SP - 1057

EP - 1078

JO - Parallel Computing

JF - Parallel Computing

SN - 0167-8191

IS - 8

ER -