On the convergence of joint schemes for online computation and supervised learning

Hao Jiang, Uday V. Shanbhag

Research output: Contribution to journal (Conference article)

Abstract

Traditionally, the field of deterministic optimization has been devoted to the minimization of functions f(x; θ*) whose parameters, denoted by θ*, are known with certainty. Supervised learning theory, on the other hand, considers the question of employing training data to seek a function from a set of possible functions. Instances of learning algorithms include regression schemes and support vector machines, amongst others. We consider a hybrid problem of computation and learning that arises in online settings, where one may be interested in optimizing f(x; θ*) while learning θ* through a set of observations. More generally, we consider the solution of parameterized monotone variational inequality problems, which can capture a range of convex optimization problems and convex Nash games. The unknown parameter θ* is learned through noisy observations of a linear function of θ*, denoted by l(x; θ*). This paper provides convergence statements for joint schemes when observations are corrupted by noise, in regimes where the associated variational inequality problem may be either strongly monotone or merely monotone. The proposed schemes are shown to produce iterates that converge in mean to their true counterparts. Numerical results derived from the application of these techniques to convex optimization problems and nonlinear Nash-Cournot games are shown to be promising.
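The abstract does not spell out the update rule itself. As a rough illustration of what a joint computation-and-learning iteration of this kind can look like (a sketch only, not the paper's scheme; the map F, the projections Π_X and Π_Θ, the observation y_k, and the step sizes γ_k, α_k are illustrative notation introduced here), one may alternate a projected step on the parameterized variational inequality with a least-squares learning step on the noisy observations of l(x; θ*):

% Hypothetical coupled update; the notation below is illustrative, not taken from the paper.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
  % computation step: projected step on the VI map evaluated at the current estimate \theta_k
  x_{k+1}      &= \Pi_{X}\bigl[x_k - \gamma_k\, F(x_k;\theta_k)\bigr], \\
  % learning step: gradient step on the squared residual of the noisy observation y_k of l(x_k;\theta^*)
  \theta_{k+1} &= \Pi_{\Theta}\Bigl[\theta_k - \alpha_k\, \nabla_{\theta}\,
                   \tfrac{1}{2}\bigl\lVert l(x_k;\theta_k) - y_k \bigr\rVert^{2}\Bigr].
\end{align*}
\end{document}

Here F(·; θ) stands for the mapping of the parameterized variational inequality (for instance ∇_x f(·; θ) in the convex optimization case), and γ_k, α_k would typically be diminishing step sizes; convergence in mean of both sequences to their true counterparts is the type of guarantee the abstract refers to.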

Original language: English (US)
Article number: 6425811
Pages (from-to): 4462-4467
Number of pages: 6
Journal: Proceedings of the IEEE Conference on Decision and Control
ISSN: 0191-2216
DOI: 10.1109/CDC.2012.6425811
State: Published - Dec 1, 2012
Event: 51st IEEE Conference on Decision and Control, CDC 2012 - Maui, HI, United States
Duration: Dec 10, 2012 to Dec 13, 2012

Fingerprint

  • Supervised learning
  • Learning theory
  • Learning algorithms
  • Regression
  • Support vector machines
  • Convex optimization
  • Optimization problems
  • Monotone variational inequality problems
  • Games
  • Unknown parameters
  • Linear functions
  • Iterates
  • Convergence
  • Set theory

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Modeling and Simulation
  • Control and Optimization
