TY - JOUR

T1 - Distributed linearized alternating direction method of multipliers for composite convex consensus optimization

AU - Aybat, N. S.

AU - Wang, Z.

AU - Lin, T.

AU - Ma, S.

N1 - Funding Information:
Manuscript received March 4, 2016; revised January 2, 2017; accepted May 7, 2017. Date of publication June 7, 2017; date of current version December 27, 2017. The work of N. S. Aybat was supported in part by NSF under Grant CMMI-1400217 and Grant CMMI-1635106, and ARO grant W911NF-17-1-0298 and the work of S. Ma was supported in part by a startup package from the Department of Mathematics at UC Davis. Recommended by Associate Editor M. K. Camlibel. (Corresponding author: Necdet Serhat Aybat.) N. S. Aybat and Z. Wang are with the Industrial and Manufacturing Engineering Department, The Pennsylvania State University, University Park, PA 16802 USA (e-mail: nsa10@psu.edu; zxw121@psu.edu).
Publisher Copyright:
© 2017 IEEE.

PY - 2018/1

Y1 - 2018/1

N2 - Given an undirected graph G = (N, ℰ) of agents N = {1, ⋯, N} connected with edges in ℰ, we study how to compute an optimal decision on which there is consensus among agents and that minimizes the sum of agent-specific private convex composite functions {Φi}i∈N, where Φi ≜ ξi + fi belongs to agent-i. Assuming only agents connected by an edge can communicate, we propose a distributed proximal gradient algorithm (DPGA) for consensus optimization over both unweighted and weighted static (undirected) communication networks. In one iteration, each agent-i computes the prox map of ξi and the gradient of fi, followed by local communication with neighboring agents. We also study its stochastic gradient variant, SDPGA, in which each agent-i can only access noisy estimates of ∇fi. This computational model abstracts a number of applications in distributed sensing, machine learning, and statistical inference. We show ergodic convergence in both suboptimality error and consensus violation for DPGA and SDPGA with rates O(1/t) and O(1/√t), respectively.

AB - Given an undirected graph G = (N, ℰ) of agents N = {1, ⋯, N} connected with edges in ℰ, we study how to compute an optimal decision on which there is consensus among agents and that minimizes the sum of agent-specific private convex composite functions {Φi}i∈N, where Φi ≜ ξi + fi belongs to agent-i. Assuming only agents connected by an edge can communicate, we propose a distributed proximal gradient algorithm (DPGA) for consensus optimization over both unweighted and weighted static (undirected) communication networks. In one iteration, each agent-i computes the prox map of ξi and the gradient of fi, followed by local communication with neighboring agents. We also study its stochastic gradient variant, SDPGA, in which each agent-i can only access noisy estimates of ∇fi. This computational model abstracts a number of applications in distributed sensing, machine learning, and statistical inference. We show ergodic convergence in both suboptimality error and consensus violation for DPGA and SDPGA with rates O(1/t) and O(1/√t), respectively.

UR - http://www.scopus.com/inward/record.url?scp=85042650566&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85042650566&partnerID=8YFLogxK

U2 - 10.1109/TAC.2017.2713046

DO - 10.1109/TAC.2017.2713046

M3 - Article

AN - SCOPUS:85042650566

VL - 63

SP - 5

EP - 20

JO - IEEE Transactions on Automatic Control

JF - IEEE Transactions on Automatic Control

SN - 0018-9286

IS - 1

ER -