A distributed adaptive steplength stochastic approximation method for monotone stochastic Nash Games

Farzad Yousefian, Angelia Nedic, Vinayak V. Shanbhag

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

5 Citations (Scopus)

Abstract

We consider a distributed stochastic approximation (SA) scheme for computing an equilibrium of a stochastic Nash game. Standard SA schemes employ diminishing steplength sequences that are square summable but not summable. Such requirements provide little or no guidance on how to leverage Lipschitzian and monotonicity properties of the problem, and naive choices (such as γ_k = 1/k) generally do not perform uniformly well across a breadth of problems. While a centralized adaptive stepsize SA scheme is proposed in [1] for the optimization framework, such a scheme provides no freedom for the agents in choosing their own stepsizes. Thus, a direct application of centralized stepsize schemes is impractical for solving Nash games. Furthermore, extensions to game-theoretic regimes where players may independently choose steplength sequences are limited to recent work by Koshal et al. [2]. Motivated by these shortcomings, we present a distributed algorithm in which each player updates his steplength based on the previous steplength and some problem parameters. The steplength rules are derived from minimizing an upper bound on the errors associated with players' decisions. It is shown that these rules generate sequences that converge almost surely to an equilibrium of the stochastic Nash game. Importantly, variants of this rule are suggested in which players independently select steplength sequences while abiding by an overall coordination requirement. Preliminary numerical results are promising.
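The abstract only sketches the method, but the kind of scheme it describes can be illustrated with a small, self-contained example. The Python sketch below is an illustration under stated assumptions, not the paper's algorithm: it assumes a toy two-player quadratic game over box constraint sets, hypothetical noisy gradient oracles, and a hypothetical recursive steplength update gamma <- gamma * (1 - eta * gamma), chosen only to mimic a rule in which each player adjusts its own steplength using its previous steplength and a local problem parameter, yielding a diminishing (square-summable but not summable) sequence.

import numpy as np

# Minimal, illustrative sketch of a distributed SA scheme for a two-player
# stochastic Nash game. The steplength recursion below is a hypothetical
# stand-in for the paper's adaptive rule, not the rule derived in the paper.

rng = np.random.default_rng(0)

def project(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto a box constraint set."""
    return np.clip(x, lo, hi)

def noisy_grad_1(x1, x2):
    """Sampled gradient of player 1's cost (toy quadratic game plus noise)."""
    return 2.0 * x1 + 0.5 * x2 - 1.0 + rng.normal(scale=0.1)

def noisy_grad_2(x1, x2):
    """Sampled gradient of player 2's cost (toy quadratic game plus noise)."""
    return 2.0 * x2 + 0.5 * x1 + 0.5 + rng.normal(scale=0.1)

def distributed_adaptive_sa(num_iters=5000, gamma1=0.5, gamma2=0.4,
                            eta1=0.05, eta2=0.04):
    x1, x2 = 0.0, 0.0
    for _ in range(num_iters):
        # Each player takes a projected SA step with its *own* steplength.
        x1_new = project(x1 - gamma1 * noisy_grad_1(x1, x2))
        x2_new = project(x2 - gamma2 * noisy_grad_2(x1, x2))
        x1, x2 = x1_new, x2_new
        # Hypothetical recursive steplength updates: each player uses only its
        # previous steplength and a player-specific parameter, producing a
        # diminishing (square-summable but not summable) sequence.
        gamma1 = gamma1 * (1.0 - eta1 * gamma1)
        gamma2 = gamma2 * (1.0 - eta2 * gamma2)
    return x1, x2

if __name__ == "__main__":
    print(distributed_adaptive_sa())

The point of the sketch is the structure the abstract highlights: each player performs its own projected SA step and tunes its own steplength locally. The paper's actual rules are instead derived by minimizing an upper bound on the errors in the players' decisions, and come with an almost-sure convergence guarantee and an overall coordination requirement across players.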

Original language: English (US)
Title of host publication: 2013 American Control Conference, ACC 2013
ISBN (Print): 9781479901777
Pages: 4765-4770
Number of pages: 6
State: Published - Sep 11 2013
Event: 2013 1st American Control Conference, ACC 2013 - Washington, DC, United States
Duration: Jun 17 2013 - Jun 19 2013

Publication series

Name: Proceedings of the American Control Conference
ISSN (Print): 0743-1619


Fingerprint

Parallel algorithms

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering

Cite this

Yousefian, F., Nedic, A., & Shanbhag, V. V. (2013). A distributed adaptive steplength stochastic approximation method for monotone stochastic Nash Games. In 2013 American Control Conference, ACC 2013 (pp. 4765-4770). [6580575] (Proceedings of the American Control Conference).
Scopus record: http://www.scopus.com/inward/record.url?scp=84883524862&partnerID=8YFLogxK