TY - JOUR
T1 - Adaptive Federated Learning in Resource Constrained Edge Computing Systems
AU - Wang, Shiqiang
AU - Tuor, Tiffany
AU - Salonidis, Theodoros
AU - Leung, Kin K.
AU - Makaya, Christian
AU - He, Ting
AU - Chan, Kevin
N1 - Funding Information:
Manuscript received July 21, 2018; revised December 13, 2018; accepted February 12, 2019. Date of publication March 11, 2019; date of current version May 15, 2019. This work was supported in part by the U.S. Army Research Laboratory and the U.K. Ministry of Defence under Agreement W911NF-16-3-0001. A preliminary version of this paper, titled “When edge meets learning: adaptive control for resource-constrained distributed machine learning,” was presented at IEEE INFOCOM 2018 [1]. (Corresponding author: Shiqiang Wang.) S. Wang, T. Salonidis, and C. Makaya are with the IBM Thomas J. Watson Research Center, Yorktown Heights, NY 10598 USA (e-mail: wangshiq@us.ibm.com; tsaloni@us.ibm.com; chrismak@ieee.org).
PY - 2019/6
Y1 - 2019/6
N2 - Emerging technologies and applications, including the Internet of Things, social networking, and crowd-sourcing, generate large amounts of data at the network edge. Machine learning models are often built from the collected data to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimental results show that our proposed approach performs close to the optimum with various machine learning models and different data distributions.
AB - Emerging technologies and applications, including the Internet of Things, social networking, and crowd-sourcing, generate large amounts of data at the network edge. Machine learning models are often built from the collected data to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimental results show that our proposed approach performs close to the optimum with various machine learning models and different data distributions.
UR - http://www.scopus.com/inward/record.url?scp=85065907659&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85065907659&partnerID=8YFLogxK
U2 - 10.1109/JSAC.2019.2904348
DO - 10.1109/JSAC.2019.2904348
M3 - Article
AN - SCOPUS:85065907659
VL - 37
SP - 1205
EP - 1221
JO - IEEE Journal on Selected Areas in Communications
JF - IEEE Journal on Selected Areas in Communications
SN - 0733-8716
IS - 6
M1 - 8664630
ER -