Stochastic convex optimization with multiple objectives

Mehrdad Mahdavi, Tianbao Yang, Rong Jin

Research output: Contribution to journal › Conference article

15 Citations (Scopus)

Abstract

In this paper, we are interested in developing efficient algorithms for convex optimization problems in the simultaneous presence of multiple objectives and stochasticity in the first-order information. We cast the stochastic multiple-objective optimization problem as a constrained optimization problem by choosing one function as the objective and bounding the other objectives by appropriate thresholds. We first examine a two-stage exploration-exploitation algorithm that approximates the stochastic objectives by sampling and then solves a constrained stochastic optimization problem by the projected gradient method. This method attains a suboptimal convergence rate even under strong assumptions on the objectives. Our second approach is an efficient primal-dual stochastic algorithm. It leverages the theory of the Lagrangian method in constrained optimization and attains the optimal convergence rate of O(1/√T) with high probability for general Lipschitz continuous objectives.
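The primal-dual approach described above can be sketched as follows: treat one function as the objective, move the remaining objectives into constraints with thresholds, and alternate a stochastic gradient descent step on the Lagrangian in the primal variable with a projected gradient ascent step on the dual variable. The sketch below is illustrative, not the authors' implementation; the toy problem, noise model, step size, and function names are all assumptions chosen for demonstration.

```python
import numpy as np

def primal_dual_sgd(grad_f0, grad_f1, f1, gamma, x0, T, eta, rng):
    """Hedged sketch: min f0(x) s.t. f1(x) <= gamma, given noisy gradients.

    Runs T primal-dual stochastic gradient steps with step size eta
    (eta ~ 1/sqrt(T) matches the O(1/sqrt(T)) rate in the abstract)
    and returns the averaged primal iterate.
    """
    x, lam = x0.copy(), 0.0
    avg = np.zeros_like(x0)
    for _ in range(T):
        noise = rng.normal(scale=0.1, size=x.shape)      # stochastic first-order info
        gx = grad_f0(x) + lam * grad_f1(x) + noise       # primal gradient of Lagrangian
        x = x - eta * gx                                 # primal descent step
        lam = max(0.0, lam + eta * (f1(x) - gamma))      # dual ascent, projected to lam >= 0
        avg += x
    return avg / T                                       # averaged iterate

# Hypothetical toy problem: min x^2 subject to (x - 1)^2 <= 0.25,
# i.e. x in [0.5, 1.5]; the constrained optimum is x = 0.5.
rng = np.random.default_rng(0)
x_hat = primal_dual_sgd(
    grad_f0=lambda x: 2.0 * x,
    grad_f1=lambda x: 2.0 * (x - 1.0),
    f1=lambda x: float((x - 1.0) ** 2),
    gamma=0.25,
    x0=np.array([2.0]),
    T=20000,
    eta=1.0 / np.sqrt(20000),
    rng=rng,
)
```

The averaged iterate `x_hat` should land near the constrained optimum 0.5; averaging is what yields the high-probability rate for merely Lipschitz objectives, since individual iterates oscillate around the constraint boundary.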

Original language: English (US)
Journal: Advances in Neural Information Processing Systems
State: Published - Jan 1 2013
Event: 27th Annual Conference on Neural Information Processing Systems, NIPS 2013 - Lake Tahoe, NV, United States
Duration: Dec 5 2013 - Dec 10 2013

Fingerprint

  • Convex optimization
  • Constrained optimization
  • Gradient methods
  • Sampling

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

Cite this

@article{65982417aaac4e2eb08481c1e857e6c0,
title = "Stochastic convex optimization with multiple objectives",
author = "Mehrdad Mahdavi and Tianbao Yang and Rong Jin",
year = "2013",
month = "1",
day = "1",
language = "English (US)",
journal = "Advances in Neural Information Processing Systems",
issn = "1049-5258",

}

