Abstract
For smooth and strongly convex optimization problems, the optimal iteration complexity of gradient-based algorithms is O(√κ log(1/ε)), where κ is the condition number. When the optimization problem is ill-conditioned, a large number of full gradients must be evaluated, which can be computationally expensive. In this paper, we propose to remove the dependence on the condition number by allowing the algorithm to access stochastic gradients of the objective function. To this end, we present a novel algorithm named Epoch Mixed Gradient Descent (EMGD) that is able to utilize both kinds of gradients. A distinctive step in EMGD is the mixed gradient descent, where we use a combination of the full and stochastic gradients to update the intermediate solution. Theoretical analysis shows that EMGD is able to find an ε-optimal solution by computing O(log(1/ε)) full gradients and O(κ² log(1/ε)) stochastic gradients.
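The abstract does not spell out the exact update rule. As a rough illustration of the idea of mixing a per-epoch full gradient with per-iteration stochastic gradients, the sketch below assumes a finite-sum objective f(x) = (1/n) Σᵢ fᵢ(x); the function names, step size, epoch lengths, and the specific anchor-based combination are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def epoch_mixed_gd(grad_full, grad_stoch, n_samples, x0,
                   n_epochs=20, inner_steps=200, eta=0.05, seed=0):
    """Hypothetical sketch of an epoch-wise mixed gradient method.

    grad_full(x)     : full gradient of the objective at x
    grad_stoch(x, i) : gradient of the i-th component function at x
    """
    rng = np.random.default_rng(seed)
    x_bar = np.asarray(x0, dtype=float).copy()
    for _ in range(n_epochs):
        g_full = grad_full(x_bar)              # one full-gradient evaluation per epoch
        x = x_bar.copy()
        running_sum = np.zeros_like(x)
        for _ in range(inner_steps):
            i = rng.integers(n_samples)
            # mixed direction: anchor full gradient plus a stochastic correction at x
            g_mixed = g_full + grad_stoch(x, i) - grad_stoch(x_bar, i)
            x = x - eta * g_mixed
            running_sum += x
        x_bar = running_sum / inner_steps      # average the epoch's iterates for the next anchor
    return x_bar

# Toy usage: regularized least squares, f(x) = (1/n) Σᵢ 0.5 (aᵢᵀx - bᵢ)² + 0.5 λ ||x||²
rng = np.random.default_rng(1)
A, b, lam = rng.normal(size=(500, 20)), rng.normal(size=500), 1e-2
grad_full = lambda x: A.T @ (A @ x - b) / len(b) + lam * x
grad_stoch = lambda x, i: A[i] * (A[i] @ x - b[i]) + lam * x
x_hat = epoch_mixed_gd(grad_full, grad_stoch, n_samples=len(b), x0=np.zeros(20))
```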
| Original language | English (US) |
| --- | --- |
| Journal | Advances in Neural Information Processing Systems |
| State | Published - Jan 1 2013 |
| Event | 27th Annual Conference on Neural Information Processing Systems, NIPS 2013 - Lake Tahoe, NV, United States. Duration: Dec 5 2013 → Dec 10 2013 |
All Science Journal Classification (ASJC) codes
- Computer Networks and Communications
- Information Systems
- Signal Processing