When Edge Meets Learning: Adaptive Control for Resource-Constrained Distributed Machine Learning

Shiqiang Wang, Tiffany Tuor, Theodoros Salonidis, Kin K. Leung, Christian Makaya, Ting He, Kevin Chan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

28 Citations (Scopus)

Abstract

Emerging technologies and applications, including the Internet of Things (IoT), social networking, and crowdsourcing, generate large amounts of data at the network edge. Machine learning models are often built from the collected data to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence rate of distributed gradient descent from a theoretical point of view, and based on this analysis we propose a control algorithm that determines the best trade-off between local updates and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimental results show that our proposed approach performs close to the optimum with various machine learning models and different data distributions.
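To make the local-update/aggregation trade-off concrete, the following is a minimal Python sketch of distributed gradient descent in which each edge node runs tau local gradient steps between global parameter aggregations, stopping once a resource budget (counted here in local-update rounds) is exhausted. The synthetic linear-regression task, the fixed tau, and all names (train, local_gradient, budget) are illustrative assumptions; the paper's contribution is a control algorithm that chooses tau adaptively, which this sketch does not implement.

# Minimal sketch (not the paper's algorithm): distributed gradient descent
# where each of several edge nodes takes `tau` local steps between global
# aggregations, under a budget on the total number of local-update rounds.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data, partitioned across edge nodes
# (illustrative assumption; the paper evaluates on real datasets).
num_nodes, n_per_node, dim = 5, 200, 10
w_true = rng.normal(size=dim)
X = [rng.normal(size=(n_per_node, dim)) for _ in range(num_nodes)]
y = [x @ w_true + 0.1 * rng.normal(size=n_per_node) for x in X]

def local_gradient(w, x, t):
    # Gradient of the mean squared loss on one node's local data.
    return x.T @ (x @ w - t) / len(t)

def train(tau, budget, lr=0.05):
    # tau local steps per aggregation; stop when the budget of
    # local-update rounds is spent.
    w_global = np.zeros(dim)
    used = 0
    while used < budget:
        w_local = [w_global.copy() for _ in range(num_nodes)]
        for _ in range(tau):
            for k in range(num_nodes):          # local updates at each node
                w_local[k] -= lr * local_gradient(w_local[k], X[k], y[k])
            used += 1
            if used >= budget:
                break
        w_global = np.mean(w_local, axis=0)     # global aggregation
    return np.mean([np.mean((x @ w_global - t) ** 2) for x, t in zip(X, y)])

for tau in (1, 5, 20):
    print(f"tau={tau:2d}  final loss={train(tau, budget=200):.4f}")

Running the sketch with several values of tau illustrates the trade-off the abstract refers to: tau = 1 aggregates after every step (maximum communication cost), while a large tau saves communication but lets the local models drift apart before each aggregation.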

Original language: English (US)
Title of host publication: INFOCOM 2018 - IEEE Conference on Computer Communications
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 63-71
Number of pages: 9
ISBN (Electronic): 9781538641286
DOI: https://doi.org/10.1109/INFOCOM.2018.8486403
State: Published - Oct 8 2018
Event: 2018 IEEE Conference on Computer Communications, INFOCOM 2018 - Honolulu, United States
Duration: Apr 15 2018 – Apr 19 2018

Publication series

Name: Proceedings - IEEE INFOCOM
Volume: 2018-April
ISSN (Print): 0743-166X

Other

Other: 2018 IEEE Conference on Computer Communications, INFOCOM 2018
Country: United States
City: Honolulu
Period: 4/15/18 – 4/19/18

Fingerprint

Learning systems
Agglomeration
Bandwidth
Experiments

All Science Journal Classification (ASJC) codes

  • Computer Science (all)
  • Electrical and Electronic Engineering

Cite this

Wang, S., Tuor, T., Salonidis, T., Leung, K. K., Makaya, C., He, T., & Chan, K. (2018). When Edge Meets Learning: Adaptive Control for Resource-Constrained Distributed Machine Learning. In INFOCOM 2018 - IEEE Conference on Computer Communications (pp. 63-71). [8486403] (Proceedings - IEEE INFOCOM; Vol. 2018-April). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/INFOCOM.2018.8486403
