We study distributed composite optimization over networks: agents minimize a sum of smooth, (strongly) convex functions (the agents' sum-utility) plus a nonsmooth (extended-valued) convex one. We propose a general unified algorithmic framework for this class of problems and provide a convergence analysis leveraging the theory of operator splitting. Distinguishing features of our scheme are: (i) when each agent's function is strongly convex, the algorithm converges at a linear rate whose dependence on the agents' functions and the network topology is decoupled; (ii) when the objective function is convex (but not strongly convex), a similar decoupling as in (i) is established for the coefficient of the proved sublinear rate, which also reveals the role of function heterogeneity in the convergence rate; (iii) the algorithm can adjust the ratio between the number of communications and computations to achieve a rate (in terms of computations) independent of the network connectivity; and (iv) a by-product of our analysis is a tuning recommendation for several existing (non-accelerated) distributed algorithms, yielding provably faster (worst-case) convergence rates for the class of problems under consideration.
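To make the problem class concrete, the following is a minimal sketch (not the paper's algorithm) of a classical decentralized proximal gradient iteration for this composite setting: each agent mixes its local copy of the decision variable with its neighbors' copies through a doubly stochastic matrix `W`, takes a gradient step on its smooth local loss, and applies the proximal operator of the shared nonsmooth term. All problem data, the ring topology, and the step size are illustrative assumptions.

```python
import numpy as np

# Toy instance: 3 agents, each with a smooth local loss
# f_i(x) = 0.5 * ||A_i x - b_i||^2, plus a shared nonsmooth term
# g(x) = lam * ||x||_1.  Data is synthetic and purely illustrative.
rng = np.random.default_rng(0)
n_agents, d, lam, alpha = 3, 5, 0.1, 0.01
A = [rng.standard_normal((8, d)) for _ in range(n_agents)]
b = [rng.standard_normal(8) for _ in range(n_agents)]

# Doubly stochastic mixing matrix for a fully connected 3-agent network.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (applied elementwise)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros((n_agents, d))  # one local copy of the variable per agent
for _ in range(500):
    grads = np.stack([A[i].T @ (A[i] @ x[i] - b[i]) for i in range(n_agents)])
    # Communication (mixing) step, then a local proximal gradient step.
    x = soft_threshold(W @ x - alpha * grads, alpha * lam)

# Consensus error: how far the local copies are from their average.
disagreement = np.max(np.abs(x - x.mean(axis=0)))
```

This plain scheme only reaches a neighborhood of the optimum under a constant step size; the framework described above is precisely aimed at exact convergence with rates whose dependence on the local functions and on the network is decoupled.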