### Abstract

Traditionally, the field of deterministic optimization has been devoted to the minimization of functions f(x; θ*) whose parameters, denoted by θ*, are known with certainty. Supervised learning theory, on the other hand, considers the question of employing training data to select a function from a set of candidate functions. Instances of learning algorithms include regression schemes and support vector machines, among others. We consider a hybrid problem of computation and learning that arises in online settings, where one may be interested in optimizing f(x; θ*) while learning θ* through a set of observations. More generally, we consider the solution of parameterized monotone variational inequality problems, which capture a range of convex optimization problems and convex Nash games. The unknown parameter θ* is learned through noisy observations of a linear function of θ*, denoted by l(x; θ*). This paper provides convergence statements for joint schemes when observations are corrupted by noise, in regimes where the associated variational inequality problem may be either strongly monotone or merely monotone. The proposed schemes are shown to produce iterates that converge in mean to their true counterparts. Numerical results from the application of these techniques to convex optimization problems and nonlinear Nash-Cournot games are promising.
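To illustrate the flavor of the joint schemes the abstract describes, the following is a minimal sketch, not the paper's actual algorithm: it pairs a gradient step on the decision variable x with a stochastic learning step on the parameter estimate θ. The concrete choices f(x; θ) = ½‖x − θ‖², observations y_k = θ* + noise (i.e., l(x; θ) = θ), and the diminishing step size are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative instance (an assumption, not from the paper):
# f(x; theta) = 0.5 * ||x - theta||^2 and l(x; theta) = theta,
# so each observation is y_k = theta_star + noise.
theta_star = np.array([1.0, -2.0])

x = np.zeros(2)      # computation iterate
theta = np.zeros(2)  # learning iterate

for k in range(1, 20001):
    gamma = 1.0 / k                                 # diminishing step size
    y = theta_star + 0.1 * rng.standard_normal(2)   # noisy observation of l(x; theta*)
    theta = theta - gamma * (theta - y)             # SGD step on 0.5*(l(x; theta) - y)^2
    x = x - gamma * (x - theta)                     # gradient step on f(x; theta_k)

print(np.round(x, 2), np.round(theta, 2))
```

With these choices both iterates settle near θ* = (1, −2), mirroring the abstract's claim that the coupled sequences converge to their true counterparts; the paper's analysis covers the far more general monotone variational inequality setting.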

| Original language | English (US) |
| --- | --- |
| Article number | 6425811 |
| Pages (from-to) | 4462-4467 |
| Number of pages | 6 |
| Journal | Proceedings of the IEEE Conference on Decision and Control |
| DOIs | 10.1109/CDC.2012.6425811 |
| State | Published - Dec 1 2012 |
| Event | 51st IEEE Conference on Decision and Control, CDC 2012 - Maui, HI, United States. Duration: Dec 10 2012 → Dec 13 2012 |


### All Science Journal Classification (ASJC) codes

- Control and Systems Engineering
- Modeling and Simulation
- Control and Optimization

### Cite this


**On the convergence of joint schemes for online computation and supervised learning.** / Jiang, Hao; Shanbhag, Uday V.

Research output: Contribution to journal › Conference article

TY - JOUR

T1 - On the convergence of joint schemes for online computation and supervised learning

AU - Jiang, Hao

AU - Shanbhag, Uday V.

PY - 2012/12/1

Y1 - 2012/12/1

N2 - Traditionally, the field of deterministic optimization has been devoted to the minimization of functions f(x; θ*) whose parameters, denoted by θ*, are known with certainty. Supervised learning theory, on the other hand, considers the question of employing training data to select a function from a set of candidate functions. Instances of learning algorithms include regression schemes and support vector machines, among others. We consider a hybrid problem of computation and learning that arises in online settings, where one may be interested in optimizing f(x; θ*) while learning θ* through a set of observations. More generally, we consider the solution of parameterized monotone variational inequality problems, which capture a range of convex optimization problems and convex Nash games. The unknown parameter θ* is learned through noisy observations of a linear function of θ*, denoted by l(x; θ*). This paper provides convergence statements for joint schemes when observations are corrupted by noise, in regimes where the associated variational inequality problem may be either strongly monotone or merely monotone. The proposed schemes are shown to produce iterates that converge in mean to their true counterparts. Numerical results from the application of these techniques to convex optimization problems and nonlinear Nash-Cournot games are promising.

AB - Traditionally, the field of deterministic optimization has been devoted to the minimization of functions f(x; θ*) whose parameters, denoted by θ*, are known with certainty. Supervised learning theory, on the other hand, considers the question of employing training data to select a function from a set of candidate functions. Instances of learning algorithms include regression schemes and support vector machines, among others. We consider a hybrid problem of computation and learning that arises in online settings, where one may be interested in optimizing f(x; θ*) while learning θ* through a set of observations. More generally, we consider the solution of parameterized monotone variational inequality problems, which capture a range of convex optimization problems and convex Nash games. The unknown parameter θ* is learned through noisy observations of a linear function of θ*, denoted by l(x; θ*). This paper provides convergence statements for joint schemes when observations are corrupted by noise, in regimes where the associated variational inequality problem may be either strongly monotone or merely monotone. The proposed schemes are shown to produce iterates that converge in mean to their true counterparts. Numerical results from the application of these techniques to convex optimization problems and nonlinear Nash-Cournot games are promising.

UR - http://www.scopus.com/inward/record.url?scp=84874241475&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84874241475&partnerID=8YFLogxK

U2 - 10.1109/CDC.2012.6425811

DO - 10.1109/CDC.2012.6425811

M3 - Conference article

AN - SCOPUS:84874241475

SP - 4462

EP - 4467

JO - Proceedings of the IEEE Conference on Decision and Control

JF - Proceedings of the IEEE Conference on Decision and Control

SN - 0191-2216

M1 - 6425811

ER -