Linear convergence with condition number independent access of full gradients

Lijun Zhang, Mehrdad Mahdavi, Rong Jin

Research output: Contribution to journal › Conference article › peer-review

Abstract

For smooth and strongly convex optimization, the optimal iteration complexity of gradient-based algorithms is O(√κ log 1/ε), where κ is the condition number. When the optimization problem is ill-conditioned, this requires evaluating a large number of full gradients, which can be computationally expensive. In this paper, we propose to remove the dependence on the condition number by allowing the algorithm to also access stochastic gradients of the objective function. To this end, we present a novel algorithm named Epoch Mixed Gradient Descent (EMGD) that is able to utilize both kinds of gradients. A distinctive step in EMGD is the mixed gradient descent, where we use a combination of the full and stochastic gradients to update the intermediate solution. Theoretical analysis shows that EMGD is able to find an ε-optimal solution by computing O(log 1/ε) full gradients and O(κ² log 1/ε) stochastic gradients.
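To make the structure of the abstract concrete, below is a minimal Python sketch of an epoch-based mixed-gradient loop: one expensive full gradient per epoch, many cheap stochastic gradients inside the epoch. The particular variance-reduction-style combination of the two gradients, the step size, the inner averaging, and the toy least-squares usage are all assumptions for illustration, not the exact update, parameter schedule, or analysis from the EMGD paper.

```python
import numpy as np

def emgd_sketch(full_grad, stoch_grad, n_components, w0,
                n_epochs=10, inner_iters=200, eta=0.01, seed=0):
    """Illustrative epoch-based mixed-gradient loop (a sketch, not the paper's exact update).

    full_grad(w)     : full gradient at w (expensive, computed once per epoch).
    stoch_grad(w, i) : stochastic gradient at w for sampled component i (cheap).
    The variance-reduction-style combination below is an assumption about what
    "mixed gradient" means; the paper's precise step sizes and averaging may differ.
    """
    rng = np.random.default_rng(seed)
    w_anchor = np.asarray(w0, dtype=float).copy()
    for _ in range(n_epochs):
        g_full = full_grad(w_anchor)            # one full gradient per epoch
        w = w_anchor.copy()
        iterates = []
        for _ in range(inner_iters):
            i = rng.integers(n_components)
            # Mixed gradient: full gradient at the anchor plus a cheap stochastic correction.
            g_mixed = g_full + stoch_grad(w, i) - stoch_grad(w_anchor, i)
            w = w - eta * g_mixed
            iterates.append(w.copy())
        w_anchor = np.mean(iterates, axis=0)    # start the next epoch from the inner average
    return w_anchor

# Usage on a toy least-squares problem: F(w) = (1/2n) * ||A w - b||^2.
n, d = 500, 20
rng = np.random.default_rng(1)
A, b = rng.standard_normal((n, d)), rng.standard_normal(n)
full_grad = lambda w: A.T @ (A @ w - b) / n
stoch_grad = lambda w, i: A[i] * (A[i] @ w - b[i])
w_hat = emgd_sketch(full_grad, stoch_grad, n_components=n, w0=np.zeros(d))
```

In this reading, the full gradient anchors each epoch while the stochastic corrections keep per-iteration cost independent of the dataset size, matching the abstract's count of O(log 1/ε) full gradients and O(κ² log 1/ε) stochastic gradients.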

Original language: English (US)
Journal: Advances in Neural Information Processing Systems
State: Published - Jan 1 2013
Event: 27th Annual Conference on Neural Information Processing Systems, NIPS 2013 - Lake Tahoe, NV, United States
Duration: Dec 5 2013 - Dec 10 2013

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
