Abstract
Traditionally, the field of deterministic optimization has been devoted to the minimization of functions f(x; θ*) whose parameters, denoted by θ*, are known with certainty. Supervised learning theory, on the other hand, considers the question of employing training data to seek a function from a set of possible functions; instances of learning algorithms include regression schemes and support vector machines, amongst others. We consider a hybrid problem of computation and learning that arises in online settings, where one may be interested in optimizing f(x; θ*) while learning θ* through a set of observations. More generally, we consider the solution of parameterized monotone variational inequality problems, which can capture a range of convex optimization problems and convex Nash games. The unknown parameter θ* is learned through noisy observations of a linear function of θ*, denoted by l(x; θ*). This paper provides convergence statements for joint schemes when observations are corrupted by noise, in regimes where the associated variational inequality problem may be either strongly monotone or merely monotone. The proposed schemes are shown to produce iterates that converge in mean to their true counterparts. Numerical results derived from the application of these techniques to convex optimization problems and nonlinear Nash-Cournot games are promising.
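For intuition, the following is a minimal sketch of such a joint computation-and-learning scheme on a hypothetical instance, not the paper's exact algorithm: a strongly convex quadratic f(x; θ) = ½xᵀQx − θᵀx is minimized over a box while θ* is learned from noisy observations, taking the observed linear function to be l(x; θ*) = θ* for simplicity. The problem data Q, the observation model, and the diminishing step size γ_k = 1/k are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem instance (illustration only, not from the paper):
# minimize f(x; theta*) = 0.5 * x'Qx - theta*'x over the box [-10, 10]^n,
# while learning theta* from noisy observations of l(x; theta*) = theta*.
n = 3
A = rng.standard_normal((n, n))
Q = A @ A.T + n * np.eye(n)          # positive definite => f strongly convex
theta_star = rng.standard_normal(n)  # unknown parameter

def grad_f(x, theta):
    # Gradient of f(x; theta) with respect to x.
    return Q @ x - theta

def project(x, lo=-10.0, hi=10.0):
    # Euclidean projection onto the box constraint set X.
    return np.clip(x, lo, hi)

def observe(noise_std=0.1):
    # Noisy observation of the linear function l(x; theta*) = theta*.
    return theta_star + noise_std * rng.standard_normal(n)

# Joint scheme: a projected-gradient (computation) step coupled with a
# stochastic-approximation (learning) step, both with diminishing steps.
x = np.ones(n)
theta = np.zeros(n)
for k in range(1, 20001):
    gamma = 1.0 / k
    theta = theta - gamma * (theta - observe())   # learning update
    x = project(x - gamma * grad_f(x, theta))     # computation update

print("theta error:", np.linalg.norm(theta - theta_star))
print("x error:    ", np.linalg.norm(x - np.linalg.solve(Q, theta_star)))
```

With γ_k = 1/k the learning update reduces to a running average of the observations, so θ_k converges in mean to θ*, and the projected-gradient iterates x_k track the minimizer of f(·; θ_k); more general step-size and observation models are the subject of the paper's convergence analysis.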
Original language | English (US)
---|---
Article number | 6425811
Pages (from-to) | 4462-4467
Number of pages | 6
Journal | Proceedings of the IEEE Conference on Decision and Control
DOIs |
State | Published - 2012
Event | 51st IEEE Conference on Decision and Control, CDC 2012, Maui, HI, United States (Dec 10-13, 2012)
All Science Journal Classification (ASJC) codes
- Control and Systems Engineering
- Modeling and Simulation
- Control and Optimization