### Abstract

Given an undirected graph G = (𝒩, ℰ) of agents 𝒩 = {1, ⋯, N} connected by the edges in ℰ, we study how to compute an optimal decision on which there is consensus among agents and which minimizes the sum of agent-specific private convex composite functions {Φ_{i}}_{i∈𝒩}, where Φ_{i} ≜ ξ_{i} + f_{i} belongs to agent-i. Assuming only agents connected by an edge can communicate, we propose a distributed proximal gradient algorithm (DPGA) for consensus optimization over both unweighted and weighted static (undirected) communication networks. In each iteration, every agent-i computes the prox map of ξ_{i} and the gradient of f_{i}, followed by local communication with neighboring agents. We also study a stochastic gradient variant, SDPGA, in which each agent-i has access only to noisy estimates of ∇f_{i}. This computational model abstracts a number of applications in distributed sensing, machine learning, and statistical inference. We show ergodic convergence in both suboptimality error and consensus violation for DPGA and SDPGA with rates O(1/t) and O(1/√t), respectively.
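To make the computational model concrete, the following is a minimal sketch of a decentralized proximal-gradient iteration of the kind the abstract describes: each agent mixes its local variable with its neighbors' (local communication over the graph), takes a gradient step on its smooth term f_i, and applies the prox map of its nonsmooth term ξ_i. This is an illustrative sketch, not the paper's exact DPGA update; the mixing matrix, step size, and objectives below are assumptions chosen for a small example, and with a constant step size this simple scheme only reaches a neighborhood of the consensus optimum.

```python
import numpy as np

# Hypothetical setup: 4 agents on a path graph 1-2-3-4, each holding a
# private composite objective Phi_i(x) = xi_i(x) + f_i(x) with
#   f_i(x)  = 0.5 * (x - b_i)**2   (smooth part; grad = x - b_i)
#   xi_i(x) = lam * |x|            (nonsmooth part; prox = soft-thresholding)
# The consensus problem minimizes sum_i Phi_i(x) over a single shared x.

b = np.array([1.0, 2.0, 3.0, 4.0])   # private data of each agent
lam = 0.1                            # regularization weight
step = 0.05                          # constant step size (assumed)

# Symmetric doubly stochastic mixing matrix matching the path graph:
# agent i only combines values from itself and its graph neighbors.
W = np.array([
    [0.50, 0.50, 0.00, 0.00],
    [0.50, 0.25, 0.25, 0.00],
    [0.00, 0.25, 0.25, 0.50],
    [0.00, 0.00, 0.50, 0.50],
])

def soft_threshold(z, t):
    """Prox map of t * |.| applied componentwise."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x = np.zeros(4)  # each agent's local copy of the decision variable
for _ in range(500):
    grad = x - b                         # local gradients of f_i
    y = W @ x - step * grad              # neighbor averaging + gradient step
    x = soft_threshold(y, step * lam)    # local prox of step * xi_i

# Agents end up clustered near the consensus minimizer x* = mean(b) - lam = 2.4
print(np.round(x, 3))
```

With a diminishing step size (or a corrected scheme such as the paper's DPGA), the residual consensus violation visible here would vanish, consistent with the O(1/t) ergodic rate stated in the abstract.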

| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 5-20 |
| Number of pages | 16 |
| Journal | IEEE Transactions on Automatic Control |
| Volume | 63 |
| Issue number | 1 |
| DOIs | https://doi.org/10.1109/TAC.2017.2713046 |
| State | Published - Jan 1 2018 |


### All Science Journal Classification (ASJC) codes

- Control and Systems Engineering
- Computer Science Applications
- Electrical and Electronic Engineering

### Cite this

Aybat, N. S., Wang, Z., Lin, T., & Ma, S. (2018). Distributed linearized alternating direction method of multipliers for composite convex consensus optimization. *IEEE Transactions on Automatic Control*, *63*(1), 5-20. https://doi.org/10.1109/TAC.2017.2713046

Research output: Contribution to journal › Article

TY - JOUR

T1 - Distributed linearized alternating direction method of multipliers for composite convex consensus optimization

AU - Aybat, Necdet S.

AU - Wang, Z.

AU - Lin, T.

AU - Ma, S.

PY - 2018/1/1

Y1 - 2018/1/1

N2 - Given an undirected graph G = (𝒩, ℰ) of agents 𝒩 = {1, ⋯, N} connected by the edges in ℰ, we study how to compute an optimal decision on which there is consensus among agents and which minimizes the sum of agent-specific private convex composite functions {Φi}i∈𝒩, where Φi ≜ ξi + fi belongs to agent-i. Assuming only agents connected by an edge can communicate, we propose a distributed proximal gradient algorithm (DPGA) for consensus optimization over both unweighted and weighted static (undirected) communication networks. In each iteration, every agent-i computes the prox map of ξi and the gradient of fi, followed by local communication with neighboring agents. We also study a stochastic gradient variant, SDPGA, in which each agent-i has access only to noisy estimates of ∇fi. This computational model abstracts a number of applications in distributed sensing, machine learning, and statistical inference. We show ergodic convergence in both suboptimality error and consensus violation for DPGA and SDPGA with rates O(1/t) and O(1/√t), respectively.

UR - http://www.scopus.com/inward/record.url?scp=85042650566&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85042650566&partnerID=8YFLogxK

U2 - 10.1109/TAC.2017.2713046

DO - 10.1109/TAC.2017.2713046

M3 - Article

AN - SCOPUS:85042650566

VL - 63

SP - 5

EP - 20

JO - IEEE Transactions on Automatic Control

JF - IEEE Transactions on Automatic Control

SN - 0018-9286

IS - 1

ER -