Lattice priority scheduling

Low-overhead timing-channel protection for a shared memory controller

Andrew Ferraiuolo, Yao Wang, Danfeng Zhang, Andrew C. Myers, G. Edward Suh

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

9 Citations (Scopus)

Abstract

Computer hardware is increasingly shared by distrusting parties in platforms such as commercial clouds and web servers. Though hardware sharing is critical for performance and efficiency, this sharing creates timing-channel vulnerabilities in hardware components such as memory controllers and shared memory. Past work on timing-channel protection for memory controllers assumes all parties are mutually distrusting and require timing-channel protection. This assumption limits the capability of the memory controller to allocate resources effectively, and causes severe performance penalties. Further, the assumption that all entities are mutually distrusting is often a poor fit for the security needs of real systems. Often, some entities do not require timing-channel protection or trust others with information. We propose lattice priority scheduling (LPS), a secure memory scheduling algorithm that improves performance by more precisely meeting the target system's security requirements, expressed as a lattice policy. We evaluate LPS in a simulated 8-core microprocessor. Compared to prior solutions [34], lattice priority scheduling improves system throughput by over 30% on average and by up to 84% for some workloads.
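
The abstract describes the lattice policy only at a high level. As a rough, hypothetical illustration of the idea (not taken from the paper; the names SecurityLattice, flows_to, and pick_request are invented here, and the fallback is a simplification of real timing-isolation schemes), the sketch below shows how a "may flow to" partial order over security domains can decide when one domain's memory request may safely be prioritized over another's:

    # Hypothetical sketch of a lattice-guided scheduling decision; this is
    # not the authors' LPS algorithm, only an illustration of the concept.

    class SecurityLattice:
        """Partial order over security domains; a pair (a, b) means
        information about domain a may flow to domain b. The set of
        flows is assumed to be transitively closed."""

        def __init__(self, flows):
            self.flows = set(flows)

        def flows_to(self, src, dst):
            return src == dst or (src, dst) in self.flows

    def pick_request(queues, lattice):
        """Pick the next memory request to service.

        Prioritizing domain d delays every other waiting domain, which
        reveals (through timing) that d had traffic. So d may be chosen
        ahead of the others only if d's information is allowed to flow
        to each of them. Otherwise fall back to a fixed service order;
        a real timing-channel-secure fallback (e.g., temporal
        partitioning) would keep the slot even when its queue is empty.
        """
        waiting = [d for d in queues if queues[d]]
        for d in waiting:
            if all(lattice.flows_to(d, other) for other in waiting if other != d):
                return queues[d].pop(0)
        for d in waiting:          # conservative fixed-order fallback
            return queues[d].pop(0)
        return None

    # Example: public domain L and secret domain H, where L's timing may be
    # observed by H (L flows to H) but not the reverse.
    lattice = SecurityLattice({("L", "H")})
    queues = {"L": ["load A"], "H": ["load B"]}
    print(pick_request(queues, lattice))   # -> "load A": L may be prioritized over H

In this toy policy, requests from the public domain can be scheduled aggressively because the secret domain is allowed to observe their timing, while the reverse direction must not create observable interference; this is the kind of asymmetry the abstract argues fully mutual distrust cannot exploit.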

Original language: English (US)
Title of host publication: Proceedings of the 2016 IEEE International Symposium on High-Performance Computer Architecture, HPCA 2016
Publisher: IEEE Computer Society
Pages: 382-393
Number of pages: 12
Volume: 2016-April
ISBN (Electronic): 9781467392112
DOI: 10.1109/HPCA.2016.7446080
State: Published - Apr 1 2016
Event: 22nd IEEE International Symposium on High Performance Computer Architecture, HPCA 2016 - Barcelona, Spain
Duration: Mar 12 2016 - Mar 16 2016

All Science Journal Classification (ASJC) codes

  • Hardware and Architecture

Cite this

Ferraiuolo, A., Wang, Y., Zhang, D., Myers, A. C., & Suh, G. E. (2016). Lattice priority scheduling: Low-overhead timing-channel protection for a shared memory controller. In Proceedings of the 2016 IEEE International Symposium on High-Performance Computer Architecture, HPCA 2016 (Vol. 2016-April, pp. 382-393). [7446080] IEEE Computer Society. https://doi.org/10.1109/HPCA.2016.7446080