The objective of this research is to consider the solution of convex optimization and variational inequality problems complicated by misspecified parameters. A naive sequential approach would first (i) estimate the problem parameters accurately through learning, and then (ii) solve the associated computational problem. Unfortunately, as learning problems grow in size and complexity, such a framework is hampered by the significant increase in the time required to obtain accurate parameter estimates. Furthermore, any parameter estimation error propagates to the computational problem, possibly with severe impact on solution quality. Accordingly, in contrast to the naive sequential approach, this research aims to develop gradient-based algorithms that simultaneously learn the misspecified parameter and solve the computational problem corresponding to the correct parameter. More generally, these coupled schemes will be shown to be capable of contending with problem intricacies such as uncertainty and nonsmoothness. Importantly, this methodology will cope with situations where the observations used in the learning problem may depend on the computational process. The research will emphasize the development of global convergence statements, iteration complexity results, regret bounds relative to offline algorithms, and trade-offs between exploration and exploitation.
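To make the contrast with the sequential approach concrete, the following is a minimal sketch of a coupled gradient scheme of the kind described above: at each iteration, one gradient step is taken on a learning problem for the misspecified parameter, and one gradient step on the computational problem evaluated at the current parameter estimate. All problem data here (the matrices `A` and `C`, the observation map `b`, the quadratic objectives, and the step size) are hypothetical choices made for illustration only, not part of the project description.

```python
import numpy as np

# Hypothetical instance: minimize f(x; theta*) = 0.5*||A x - b(theta*)||^2,
# where theta* is unknown and must be learned by minimizing the separate
# least-squares criterion g(theta) = 0.5*||C theta - d||^2.
rng = np.random.default_rng(0)
n, m = 5, 8
A = rng.standard_normal((m, n))         # data of the computational problem
C = rng.standard_normal((m, 2))         # data of the learning problem
theta_star = np.array([1.0, -2.0])      # true (misspecified) parameter
d = C @ theta_star                      # noiseless observations for clarity


def b(theta):
    # Right-hand side of the computational problem as a function of theta.
    return np.full(m, theta[0]) + theta[1] * np.arange(m) / m


x = np.zeros(n)
theta = np.zeros(2)
gamma = 0.02                            # small constant step size (a choice)
for _ in range(10_000):
    # Learning step: gradient descent on g(theta).
    theta = theta - gamma * C.T @ (C @ theta - d)
    # Computational step: gradient descent on f(x; theta_k),
    # using the *current* parameter estimate rather than waiting
    # for the learning problem to be solved first.
    x = x - gamma * A.T @ (A @ x - b(theta))

# theta approaches theta_star, and x approaches the minimizer of the
# computational problem under the correct parameter.
```

The point of the sketch is structural: neither loop waits for the other, yet the computational iterates track the solution of the correctly specified problem because the parameter estimates improve along the way.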
If successful, this research will have impact at several levels. At a fundamental level, it is expected to lead to new, truly adaptive algorithms that can learn parameters and optimize the associated systems simultaneously. These algorithms will have impact by addressing a wide range of large-scale application problems complicated by misspecification, including portfolio selection and the distributed optimization of networked systems. Finally, the educational component of this project will comprise undergraduate research projects and high school course modules.
Effective start/end date: 8/1/14 → 7/31/18
- National Science Foundation: $300,000.00