Lower and upper bounds on the generalization of stochastic exponentially concave optimization

Mehrdad Mahdavi, Lijun Zhang, Rong Jin

Research output: Contribution to journal › Conference article

1 Citation (Scopus)

Abstract

In this paper we derive high-probability lower and upper bounds on the excess risk of stochastic optimization of exponentially concave loss functions. Exponentially concave loss functions encompass several fundamental problems in machine learning, such as the squared loss in linear regression, the logistic loss in classification, and the negative logarithm loss in portfolio management. We demonstrate an O(d log T / T) upper bound on the excess risk of the stochastic online Newton step algorithm, and an O(d/T) lower bound on the excess risk of any stochastic optimization method for the squared loss, indicating that the obtained upper bound is optimal up to a logarithmic factor. The analysis of the upper bound is based on recent advances in concentration inequalities for bounding self-normalized martingales, which is interesting in its own right, and the proof technique used to establish the lower bound is probabilistic and relies on an information-theoretic minimax analysis.
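
For context, a loss f is called α-exponentially concave when exp(−αf) is a concave function, a condition satisfied by the squared, logistic, and negative-logarithm losses named above. The sketch below is a minimal, illustrative implementation of the online Newton step update for the squared-loss (linear regression) case, assuming an unconstrained domain; the generalized projection step and the specific step-size choice used in the paper's analysis are omitted, and the function name and parameter values are ours, not the authors'.

import numpy as np

def online_newton_step(stream, d, gamma=0.5, eps=1.0):
    """Illustrative Online Newton Step for a stream of (x_t, y_t) pairs
    with squared loss f_t(w) = (w @ x_t - y_t)**2.

    Sketch only: the generalized projection onto a bounded domain,
    used in the formal excess-risk analysis, is omitted for brevity.
    """
    w = np.zeros(d)                  # first iterate w_1
    A = eps * np.eye(d)              # A_0 = eps * I keeps A_t invertible
    iterates = []
    for x, y in stream:
        g = 2.0 * (w @ x - y) * x    # gradient of the squared loss at w_t
        A += np.outer(g, g)          # rank-one update: A_t = A_{t-1} + g g^T
        w = w - (1.0 / gamma) * np.linalg.solve(A, g)  # Newton-style step
        iterates.append(w.copy())
    # averaging the iterates is one standard online-to-batch conversion
    # for turning the online algorithm into a stochastic-optimization output
    return np.mean(iterates, axis=0)

# tiny usage example on synthetic linear-regression data
rng = np.random.default_rng(0)
d, T = 5, 2000
w_star = rng.normal(size=d)
X = rng.normal(size=(T, d))
y = X @ w_star + 0.1 * rng.normal(size=T)
w_hat = online_newton_step(zip(X, y), d)
print(np.linalg.norm(w_hat - w_star))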

Original language: English (US)
Journal: Journal of Machine Learning Research
Volume: 40
Issue number: 2015
State: Published - Jan 1 2015
Event: 28th Conference on Learning Theory, COLT 2015 - Paris, France
Duration: Jul 2 2015 - Jul 6 2015

Fingerprint

Upper and Lower Bounds
Excess
Optimization
Concave function
Stochastic Optimization
Loss Function
Upper bound
Lower bound
Concentration Inequalities
Portfolio Management
Probabilistic Methods
Stochastic Methods
Linear regression
Minimax
Logarithm
Martingale
Logistics
Optimization Methods
Machine Learning
Logarithmic

All Science Journal Classification (ASJC) codes

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence

Cite this

@article{a49f4ae70e804c35bd823958078c068e,
title = "Lower and upper bounds on the generalization of stochastic exponentially concave optimization",
abstract = "In this paper we derive high-probability lower and upper bounds on the excess risk of stochastic optimization of exponentially concave loss functions. Exponentially concave loss functions encompass several fundamental problems in machine learning, such as the squared loss in linear regression, the logistic loss in classification, and the negative logarithm loss in portfolio management. We demonstrate an O(d log T / T) upper bound on the excess risk of the stochastic online Newton step algorithm, and an O(d/T) lower bound on the excess risk of any stochastic optimization method for the squared loss, indicating that the obtained upper bound is optimal up to a logarithmic factor. The analysis of the upper bound is based on recent advances in concentration inequalities for bounding self-normalized martingales, which is interesting in its own right, and the proof technique used to establish the lower bound is probabilistic and relies on an information-theoretic minimax analysis.",
author = "Mehrdad Mahdavi and Lijun Zhang and Rong Jin",
year = "2015",
month = "1",
day = "1",
language = "English (US)",
volume = "40",
journal = "Journal of Machine Learning Research",
issn = "1532-4435",
publisher = "Microtome Publishing",
number = "2015",

}

Lower and upper bounds on the generalization of stochastic exponentially concave optimization. / Mahdavi, Mehrdad; Zhang, Lijun; Jin, Rong.

In: Journal of Machine Learning Research, Vol. 40, No. 2015, 01.01.2015.

Research output: Contribution to journal › Conference article

TY - JOUR

T1 - Lower and upper bounds on the generalization of stochastic exponentially concave optimization

AU - Mahdavi, Mehrdad

AU - Zhang, Lijun

AU - Jin, Rong

PY - 2015/1/1

Y1 - 2015/1/1

N2 - In this paper we derive high-probability lower and upper bounds on the excess risk of stochastic optimization of exponentially concave loss functions. Exponentially concave loss functions encompass several fundamental problems in machine learning, such as the squared loss in linear regression, the logistic loss in classification, and the negative logarithm loss in portfolio management. We demonstrate an O(d log T / T) upper bound on the excess risk of the stochastic online Newton step algorithm, and an O(d/T) lower bound on the excess risk of any stochastic optimization method for the squared loss, indicating that the obtained upper bound is optimal up to a logarithmic factor. The analysis of the upper bound is based on recent advances in concentration inequalities for bounding self-normalized martingales, which is interesting in its own right, and the proof technique used to establish the lower bound is probabilistic and relies on an information-theoretic minimax analysis.

AB - In this paper we derive high-probability lower and upper bounds on the excess risk of stochastic optimization of exponentially concave loss functions. Exponentially concave loss functions encompass several fundamental problems in machine learning, such as the squared loss in linear regression, the logistic loss in classification, and the negative logarithm loss in portfolio management. We demonstrate an O(d log T / T) upper bound on the excess risk of the stochastic online Newton step algorithm, and an O(d/T) lower bound on the excess risk of any stochastic optimization method for the squared loss, indicating that the obtained upper bound is optimal up to a logarithmic factor. The analysis of the upper bound is based on recent advances in concentration inequalities for bounding self-normalized martingales, which is interesting in its own right, and the proof technique used to establish the lower bound is probabilistic and relies on an information-theoretic minimax analysis.

UR - http://www.scopus.com/inward/record.url?scp=84984698465&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84984698465&partnerID=8YFLogxK

M3 - Conference article

AN - SCOPUS:84984698465

VL - 40

JO - Journal of Machine Learning Research

JF - Journal of Machine Learning Research

SN - 1532-4435

IS - 2015

ER -