APP: Minimizing interference using aggressive pipelined prefetching in multi-level buffer caches

Christina M. Patrick, Nicholas Voshell, Mahmut Kandemir

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

As services become more complex with multiple interactions, and storage servers are shared by multiple services, the different I/O streams arising from these multiple services compete for disk attention. Aggressive Pipelined Prefetching (APP) enabled storage clients are designed to manage the buffer cache and I/O streams to minimize the disk I/O-interference arising from competing streams. Due to the large number of streams serviced by a storage server, most of the disk time is spent seeking, leading to degradation in response times. The goal of APP is to decrease application execution time by increasing the throughput of individual I/O streams and utilizing idle capacity on remote nodes along with idle network times thus effectively avoiding alternating bursts of activity followed by periods of inactivity. APP significantly increases overall I/O throughput and decreases overall messaging overhead between servers. In APP, the intelligence is embedded in the clients and they automatically infer parameters in order to achieve the maximum throughput. APP clients make use of aggressive prefetching and data offloading to remote buffer caches in multi-level buffer cache hierarchies in an effort to minimize disk interference and tranquilize the effects of aggressive prefetching. We used an extremely I/O-intensive Radix-k application employed in studies on the scalability of parallel image composition and particle tracing developed at the Argonne National Laboratory with data sets of up to 128GB and implemented our scheme on a 16-node Linux cluster. We observed that the execution time of the application decreased by 68% on average when using our scheme.
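The core idea the abstract describes — overlapping disk I/O with consumption by reading ahead into a buffer cache — can be illustrated by a minimal read-ahead sketch. This is an illustration of pipelined prefetching in general, not the authors' APP implementation; the function name, block size, and read-ahead depth are hypothetical choices for the example.

```python
import io
import queue
import threading

def pipelined_prefetch(stream, block_size=4096, depth=8):
    """Yield blocks from `stream` while a background thread reads
    ahead up to `depth` blocks, so I/O overlaps with consumption
    instead of alternating bursts of activity and inactivity."""
    buf = queue.Queue(maxsize=depth)   # bounded read-ahead window
    SENTINEL = object()                # marks end of stream

    def reader():
        while True:
            block = stream.read(block_size)
            if not block:
                buf.put(SENTINEL)
                return
            buf.put(block)             # blocks when the window is full

    threading.Thread(target=reader, daemon=True).start()
    while True:
        block = buf.get()
        if block is SENTINEL:
            return
        yield block
```

A consumer simply iterates: `b"".join(pipelined_prefetch(io.BytesIO(data), block_size=1024, depth=4))` reassembles the stream while reads proceed ahead of it. APP additionally offloads prefetched data to remote buffer caches and infers its parameters automatically, which this single-node sketch does not attempt.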

Original language: English (US)
Title of host publication: Proceedings - 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid 2011
Pages: 254-264
Number of pages: 11
DOI: 10.1109/CCGrid.2011.47
State: Published - Aug 10 2011
Event: 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid 2011 - Newport Beach, CA, United States
Duration: May 23 2011 - May 26 2011

Publication series

Name: Proceedings - 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid 2011

Other

Other: 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid 2011
Country: United States
City: Newport Beach, CA
Period: 5/23/11 - 5/26/11


All Science Journal Classification (ASJC) codes

  • Computational Theory and Mathematics
  • Software

Cite this

Patrick, C. M., Voshell, N., & Kandemir, M. (2011). APP: Minimizing interference using aggressive pipelined prefetching in multi-level buffer caches. In Proceedings - 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid 2011 (pp. 254-264). [5948616] (Proceedings - 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid 2011). https://doi.org/10.1109/CCGrid.2011.47
Patrick, Christina M.; Voshell, Nicholas; Kandemir, Mahmut. / APP: Minimizing interference using aggressive pipelined prefetching in multi-level buffer caches. Proceedings - 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid 2011. 2011. pp. 254-264 (Proceedings - 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid 2011).
@inproceedings{7537a7ec65904477acf27cdc1b625974,
  title = "APP: Minimizing interference using aggressive pipelined prefetching in multi-level buffer caches",
  abstract = "As services become more complex with multiple interactions, and storage servers are shared by multiple services, the different I/O streams arising from these multiple services compete for disk attention. Aggressive Pipelined Prefetching (APP) enabled storage clients are designed to manage the buffer cache and I/O streams to minimize the disk I/O-interference arising from competing streams. Due to the large number of streams serviced by a storage server, most of the disk time is spent seeking, leading to degradation in response times. The goal of APP is to decrease application execution time by increasing the throughput of individual I/O streams and utilizing idle capacity on remote nodes along with idle network times thus effectively avoiding alternating bursts of activity followed by periods of inactivity. APP significantly increases overall I/O throughput and decreases overall messaging overhead between servers. In APP, the intelligence is embedded in the clients and they automatically infer parameters in order to achieve the maximum throughput. APP clients make use of aggressive prefetching and data offloading to remote buffer caches in multi-level buffer cache hierarchies in an effort to minimize disk interference and tranquilize the effects of aggressive prefetching. We used an extremely I/O-intensive Radix-k application employed in studies on the scalability of parallel image composition and particle tracing developed at the Argonne National Laboratory with data sets of up to 128GB and implemented our scheme on a 16-node Linux cluster. We observed that the execution time of the application decreased by 68{\%} on average when using our scheme.",
  author = "Patrick, {Christina M.} and Nicholas Voshell and Mahmut Kandemir",
  year = "2011",
  month = aug,
  day = "10",
  doi = "10.1109/CCGrid.2011.47",
  language = "English (US)",
  isbn = "9780769543956",
  series = "Proceedings - 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid 2011",
  pages = "254--264",
  booktitle = "Proceedings - 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid 2011",
}

Patrick, CM, Voshell, N & Kandemir, M 2011, APP: Minimizing interference using aggressive pipelined prefetching in multi-level buffer caches. in Proceedings - 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid 2011., 5948616, Proceedings - 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid 2011, pp. 254-264, 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid 2011, Newport Beach, CA, United States, 5/23/11. https://doi.org/10.1109/CCGrid.2011.47

APP: Minimizing interference using aggressive pipelined prefetching in multi-level buffer caches. / Patrick, Christina M.; Voshell, Nicholas; Kandemir, Mahmut.

Proceedings - 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid 2011. 2011. p. 254-264 5948616 (Proceedings - 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid 2011).

Research output: Chapter in Book/Report/Conference proceedingConference contribution

TY  - GEN
T1  - APP
T2  - Minimizing interference using aggressive pipelined prefetching in multi-level buffer caches
AU  - Patrick, Christina M.
AU  - Voshell, Nicholas
AU  - Kandemir, Mahmut
PY  - 2011/8/10
Y1  - 2011/8/10
N2  - As services become more complex with multiple interactions, and storage servers are shared by multiple services, the different I/O streams arising from these multiple services compete for disk attention. Aggressive Pipelined Prefetching (APP) enabled storage clients are designed to manage the buffer cache and I/O streams to minimize the disk I/O-interference arising from competing streams. Due to the large number of streams serviced by a storage server, most of the disk time is spent seeking, leading to degradation in response times. The goal of APP is to decrease application execution time by increasing the throughput of individual I/O streams and utilizing idle capacity on remote nodes along with idle network times thus effectively avoiding alternating bursts of activity followed by periods of inactivity. APP significantly increases overall I/O throughput and decreases overall messaging overhead between servers. In APP, the intelligence is embedded in the clients and they automatically infer parameters in order to achieve the maximum throughput. APP clients make use of aggressive prefetching and data offloading to remote buffer caches in multi-level buffer cache hierarchies in an effort to minimize disk interference and tranquilize the effects of aggressive prefetching. We used an extremely I/O-intensive Radix-k application employed in studies on the scalability of parallel image composition and particle tracing developed at the Argonne National Laboratory with data sets of up to 128GB and implemented our scheme on a 16-node Linux cluster. We observed that the execution time of the application decreased by 68% on average when using our scheme.
AB  - As services become more complex with multiple interactions, and storage servers are shared by multiple services, the different I/O streams arising from these multiple services compete for disk attention. Aggressive Pipelined Prefetching (APP) enabled storage clients are designed to manage the buffer cache and I/O streams to minimize the disk I/O-interference arising from competing streams. Due to the large number of streams serviced by a storage server, most of the disk time is spent seeking, leading to degradation in response times. The goal of APP is to decrease application execution time by increasing the throughput of individual I/O streams and utilizing idle capacity on remote nodes along with idle network times thus effectively avoiding alternating bursts of activity followed by periods of inactivity. APP significantly increases overall I/O throughput and decreases overall messaging overhead between servers. In APP, the intelligence is embedded in the clients and they automatically infer parameters in order to achieve the maximum throughput. APP clients make use of aggressive prefetching and data offloading to remote buffer caches in multi-level buffer cache hierarchies in an effort to minimize disk interference and tranquilize the effects of aggressive prefetching. We used an extremely I/O-intensive Radix-k application employed in studies on the scalability of parallel image composition and particle tracing developed at the Argonne National Laboratory with data sets of up to 128GB and implemented our scheme on a 16-node Linux cluster. We observed that the execution time of the application decreased by 68% on average when using our scheme.
UR  - http://www.scopus.com/inward/record.url?scp=79961154111&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=79961154111&partnerID=8YFLogxK
U2  - 10.1109/CCGrid.2011.47
DO  - 10.1109/CCGrid.2011.47
M3  - Conference contribution
SN  - 9780769543956
T3  - Proceedings - 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid 2011
SP  - 254
EP  - 264
BT  - Proceedings - 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid 2011
ER  -

Patrick CM, Voshell N, Kandemir M. APP: Minimizing interference using aggressive pipelined prefetching in multi-level buffer caches. In Proceedings - 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid 2011. 2011. p. 254-264. 5948616. (Proceedings - 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid 2011). https://doi.org/10.1109/CCGrid.2011.47