Markov games of incomplete information for multi-agent reinforcement learning

Liam Mac Dermed, Charles L. Isbell, Lora G. Weiss

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Partially observable stochastic games (POSGs) are an attractive model for many multi-agent domains, but are computationally extremely difficult to solve. We present a new model, Markov games of incomplete information (MGII), which imposes a mild restriction on POSGs while overcoming their primary computational bottleneck. Finally, we show how to convert an MGII into a continuous but bounded fully observable stochastic game. MGIIs represent the most general tractable model for multi-agent reinforcement learning to date.

Original language: English (US)
Title of host publication: Interactive Decision Theory and Game Theory - Papers from the 2011 AAAI Workshop, Technical Report
Pages: 43-51
Number of pages: 9
Publication status: Published - Nov 2 2011
Event: 2011 AAAI Workshop - San Francisco, CA, United States
Duration: Aug 8 2011 - Aug 8 2011

Publication series

Name: AAAI Workshop - Technical Report
Volume: WS-11-13

Conference

Conference: 2011 AAAI Workshop
Country: United States
City: San Francisco, CA
Period: 8/8/11 - 8/8/11

All Science Journal Classification (ASJC) codes

  • Engineering (all)

Cite this

Mac Dermed, L., Isbell, C. L., & Weiss, L. G. (2011). Markov games of incomplete information for multi-agent reinforcement learning. In Interactive Decision Theory and Game Theory - Papers from the 2011 AAAI Workshop, Technical Report (pp. 43-51). (AAAI Workshop - Technical Report; Vol. WS-11-13).