Distributed learning in large-scale multi-agent games: A modified fictitious play approach

Brian Swenson, Soummya Kar, Joao Xavier

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

The paper concerns the development of distributed equilibrium learning strategies in large-scale multi-agent games with repeated plays. With inter-agent information exchange restricted to a preassigned communication graph, the paper presents a modified version of the fictitious play algorithm that relies only on local neighborhood information exchange for agent policy updates. Under the assumption of identical agent utility functions that are permutation invariant, the proposed distributed algorithm leads to convergence of the network-averaged empirical play histories to a subset of the Nash equilibria, designated as the consensus equilibria. Applications of the proposed distributed framework to strategy design problems encountered in large-scale traffic networks are discussed.
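As a rough illustration of the idea described in the abstract, the sketch below implements a generic fictitious-play update in which each agent best-responds to a local estimate of the empirical play frequencies and then averages that estimate with its neighbors over a communication graph. The ring graph, the crowding-style permutation-invariant utility, and all parameter values are assumptions chosen for illustration only; this is not the specific algorithm analyzed in the paper.

```python
# Minimal sketch (assumed setup, not the authors' exact algorithm):
# fictitious play with a consensus-averaging step over a communication graph.
import numpy as np

n_agents, n_actions, n_rounds = 6, 3, 200

# Assumed ring communication graph: each agent mixes with its two neighbours.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 1.0 / 3.0
    W[i, (i - 1) % n_agents] = 1.0 / 3.0
    W[i, (i + 1) % n_agents] = 1.0 / 3.0

# Identical, permutation-invariant utility (illustrative): payoff depends only on
# how many agents are expected to pick each action; less-crowded actions pay more.
def utility(action, expected_counts):
    return -expected_counts[action]

# Each agent keeps a local estimate of the network-averaged empirical play frequencies.
est = np.full((n_agents, n_actions), 1.0 / n_actions)

for t in range(1, n_rounds + 1):
    # Best-response step: each agent responds to its own local estimate.
    actions = [
        int(np.argmax([utility(a, est[i] * n_agents) for a in range(n_actions)]))
        for i in range(n_agents)
    ]
    played = np.eye(n_actions)[actions]   # one-hot encoding of the joint play
    est = est + (played - est) / t        # fictitious-play empirical-frequency update
    est = W @ est                         # consensus (neighbourhood averaging) step

print(np.round(est, 3))                   # local estimates of the averaged empirical play
```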

Original language: English (US)
Title of host publication: Conference Record of the 46th Asilomar Conference on Signals, Systems and Computers, ASILOMAR 2012
Pages: 1490-1495
Number of pages: 6
DOIs
State: Published - 2012
Event: 46th Asilomar Conference on Signals, Systems and Computers, ASILOMAR 2012 - Pacific Grove, CA, United States
Duration: Nov 4 2012 - Nov 7 2012

Publication series

Name: Conference Record - Asilomar Conference on Signals, Systems and Computers
ISSN (Print): 1058-6393

Other

Other: 46th Asilomar Conference on Signals, Systems and Computers, ASILOMAR 2012
Country/Territory: United States
City: Pacific Grove, CA
Period: 11/4/12 - 11/7/12

All Science Journal Classification (ASJC) codes

  • Signal Processing
  • Computer Networks and Communications
