An asynchronous distributed proximal gradient method for composite convex optimization

Necdet S. Aybat, Z. Wang, G. Iyengar

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

11 Citations (Scopus)

Abstract

We propose a distributed first-order augmented Lagrangian (DFAL) algorithm to minimize the sum of composite convex functions, where each term in the sum is a private cost function belonging to a node, and only nodes connected by an edge can directly communicate with each other. This optimization model abstracts a number of applications in distributed sensing and machine learning. We show that any limit point of DFAL iterates is optimal; and for any ε > 0, an ε-optimal and ε-feasible solution can be computed within O(log(ε⁻¹)) DFAL iterations, which require O(ψ_max^1.5 / (d_min ε)) proximal gradient computations and communications per node in total, where ψ_max denotes the largest eigenvalue of the graph Laplacian, and d_min is the minimum degree of the graph. We also propose an asynchronous version of DFAL by incorporating randomized block coordinate descent methods; and demonstrate the efficiency of DFAL on large-scale sparse-group LASSO problems.
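
The abstract above describes the method in prose only. As a point of reference for the objective class it targets, below is a minimal centralized proximal gradient sketch for the sparse-group LASSO problem mentioned in the experiments, min_x 0.5‖Ax − b‖² + λ₁‖x‖₁ + λ₂ Σ_g ‖x_g‖₂. This is an illustrative sketch, not the DFAL algorithm itself: the augmented Lagrangian machinery, per-node communication, and asynchronous block coordinate updates from the paper are all omitted, and every function name here (soft_threshold, group_shrink, prox_sparse_group_lasso, proximal_gradient) is hypothetical.

    import numpy as np

    def soft_threshold(v, t):
        # Elementwise soft-thresholding: the prox of t*||.||_1.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def group_shrink(v, groups, t):
        # Blockwise shrinkage: the prox of t * sum_g ||v_g||_2.
        out = v.copy()
        for g in groups:
            norm_g = np.linalg.norm(v[g])
            out[g] = 0.0 if norm_g <= t else (1.0 - t / norm_g) * v[g]
        return out

    def prox_sparse_group_lasso(v, groups, t, lam1, lam2):
        # The prox of lam1*||.||_1 + lam2*sum_g ||._g||_2 decomposes:
        # soft-threshold elementwise first, then shrink each group.
        return group_shrink(soft_threshold(v, t * lam1), groups, t * lam2)

    def proximal_gradient(A, b, groups, lam1, lam2, n_iters=500):
        # Centralized proximal gradient for
        #   min_x 0.5*||Ax - b||^2 + lam1*||x||_1 + lam2*sum_g ||x_g||_2
        x = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = lambda_max(A^T A)
        for _ in range(n_iters):
            grad = A.T @ (A @ x - b)             # gradient of the smooth part
            x = prox_sparse_group_lasso(x - step * grad, groups, step, lam1, lam2)
        return x

    # Toy usage: 100 samples, 20 features in 4 groups of 5.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 20))
    b = rng.standard_normal(100)
    groups = [np.arange(i, i + 5) for i in range(0, 20, 5)]
    x_hat = proximal_gradient(A, b, groups, lam1=0.1, lam2=0.1)

The prox decomposition used above (L1 soft-thresholding followed by group shrinkage) is a known property of the sparse-group LASSO penalty; the paper's distributed and asynchronous contributions would replace this single centralized loop with per-node updates coordinated through neighbor communication.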

Original language: English (US)
Title of host publication: 32nd International Conference on Machine Learning, ICML 2015
Editors: Francis Bach, David Blei
Publisher: International Machine Learning Society (IMLS)
Pages: 2444-2452
Number of pages: 9
Volume: 3
ISBN (Electronic): 9781510810587
State: Published - Jan 1 2015
Event: 32nd International Conference on Machine Learning, ICML 2015 - Lille, France
Duration: Jul 6 2015 - Jul 11 2015

Other

Other: 32nd International Conference on Machine Learning, ICML 2015
Country: France
City: Lille
Period: 7/6/15 - 7/11/15

Fingerprint

Gradient methods
Convex optimization
Cost functions
Learning systems
Communication
Composite materials

All Science Journal Classification (ASJC) codes

  • Human-Computer Interaction
  • Computer Science Applications

Cite this

Aybat, N. S., Wang, Z., & Iyengar, G. (2015). An asynchronous distributed proximal gradient method for composite convex optimization. In F. Bach, & D. Blei (Eds.), 32nd International Conference on Machine Learning, ICML 2015 (Vol. 3, pp. 2444-2452). International Machine Learning Society (IMLS).
Aybat, Necdet S. ; Wang, Z. ; Iyengar, G. / An asynchronous distributed proximal gradient method for composite convex optimization. 32nd International Conference on Machine Learning, ICML 2015. editor / Francis Bach ; David Blei. Vol. 3 International Machine Learning Society (IMLS), 2015. pp. 2444-2452
@inproceedings{f14d41bcd4354e0fbb9a955d0ead8bb0,
title = "An asynchronous distributed proximal gradient method for composite convex optimization",
abstract = "We propose a distributed first-order augmented Lagrangian (DFAL) algorithm to minimize the sum of composite convex functions, where each term in the sum is a private cost function belonging to a node, and only nodes connected by an edge can directly communicate with each other. This optimization model abstracts a number of applications in distributed sensing and machine learning. We show that any limit point of DFAL iterates is optimal; and for any ε > 0, an ε-optimal and ε-feasible solution can be computed within O(log(ε⁻¹)) DFAL iterations, which require O(ψ_max^1.5 / (d_min ε)) proximal gradient computations and communications per node in total, where ψ_max denotes the largest eigenvalue of the graph Laplacian, and d_min is the minimum degree of the graph. We also propose an asynchronous version of DFAL by incorporating randomized block coordinate descent methods; and demonstrate the efficiency of DFAL on large-scale sparse-group LASSO problems.",
author = "Aybat, {Necdet S.} and Z. Wang and G. Iyengar",
year = "2015",
month = "1",
day = "1",
language = "English (US)",
volume = "3",
pages = "2444--2452",
editor = "Francis Bach and David Blei",
booktitle = "32nd International Conference on Machine Learning, ICML 2015",
publisher = "International Machine Learning Society (IMLS)",

}

Aybat, NS, Wang, Z & Iyengar, G 2015, An asynchronous distributed proximal gradient method for composite convex optimization. in F Bach & D Blei (eds), 32nd International Conference on Machine Learning, ICML 2015. vol. 3, International Machine Learning Society (IMLS), pp. 2444-2452, 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 7/6/15.

An asynchronous distributed proximal gradient method for composite convex optimization. / Aybat, Necdet S.; Wang, Z.; Iyengar, G.

32nd International Conference on Machine Learning, ICML 2015. ed. / Francis Bach; David Blei. Vol. 3 International Machine Learning Society (IMLS), 2015. p. 2444-2452.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

TY - GEN

T1 - An asynchronous distributed proximal gradient method for composite convex optimization

AU - Aybat, Necdet S.

AU - Wang, Z.

AU - Iyengar, G.

PY - 2015/1/1

Y1 - 2015/1/1

N2 - We propose a distributed first-order augmented Lagrangian (DFAL) algorithm to minimize the sum of composite convex functions, where each term in the sum is a private cost function belonging to a node, and only nodes connected by an edge can directly communicate with each other. This optimization model abstracts a number of applications in distributed sensing and machine learning. We show that any limit point of DFAL iterates is optimal; and for any ε > 0, an ε-optimal and ε-feasible solution can be computed within O(log(ε⁻¹)) DFAL iterations, which require O(ψ_max^1.5 / (d_min ε)) proximal gradient computations and communications per node in total, where ψ_max denotes the largest eigenvalue of the graph Laplacian, and d_min is the minimum degree of the graph. We also propose an asynchronous version of DFAL by incorporating randomized block coordinate descent methods; and demonstrate the efficiency of DFAL on large-scale sparse-group LASSO problems.

AB - We propose a distributed first-order augmented Lagrangian (DFAL) algorithm to minimize the sum of composite convex functions, where each term in the sum is a private cost function belonging to a node, and only nodes connected by an edge can directly communicate with each other. This optimization model abstracts a number of applications in distributed sensing and machine learning. We show that any limit point of DFAL iterates is optimal; and for any ε > 0, an ε-optimal and ε-feasible solution can be computed within O(log(ε⁻¹)) DFAL iterations, which require O(ψ_max^1.5 / (d_min ε)) proximal gradient computations and communications per node in total, where ψ_max denotes the largest eigenvalue of the graph Laplacian, and d_min is the minimum degree of the graph. We also propose an asynchronous version of DFAL by incorporating randomized block coordinate descent methods; and demonstrate the efficiency of DFAL on large-scale sparse-group LASSO problems.

UR - http://www.scopus.com/inward/record.url?scp=84970028395&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84970028395&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:84970028395

VL - 3

SP - 2444

EP - 2452

BT - 32nd International Conference on Machine Learning, ICML 2015

A2 - Bach, Francis

A2 - Blei, David

PB - International Machine Learning Society (IMLS)

ER -

Aybat NS, Wang Z, Iyengar G. An asynchronous distributed proximal gradient method for composite convex optimization. In Bach F, Blei D, editors, 32nd International Conference on Machine Learning, ICML 2015. Vol. 3. International Machine Learning Society (IMLS). 2015. p. 2444-2452