This paper addresses the problem of distributed feedback motion planning for multiple robots. Feedback multi-robot motion planning is formulated as a differential noncooperative game. We leverage existing sampling-based algorithms and value iteration to develop an incremental policy synthesizer. The proposed algorithm uses iterative best response to incrementally improve each robot's estimate of its value function in the multi-robot motion-planning setting. We show that the limiting policies induced by the proposed Feedback iNash-Policy algorithm converge asymptotically for the underlying noncooperative game. Furthermore, we show that value iteration allows the robots' cost-to-go functions to be estimated without requiring the value functions on the sampled graph to converge at any particular iteration.
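The iterative best-response idea sketched in the abstract can be illustrated on a toy example. The sketch below is an illustrative assumption, not the paper's Feedback iNash-Policy algorithm: two robots move on a short line graph toward opposite ends, each robot runs value iteration over the joint state space while holding the other robot's current policy fixed, and the two best responses alternate until neither policy changes. Collisions (both robots occupying the same node) are penalized rather than forbidden, and swap-through conflicts are ignored for simplicity.

```python
import itertools

# Toy setup (illustrative assumptions, not from the paper):
# a line graph with nodes 0..N-1; robot 0 heads right, robot 1 heads left.
N = 5
GOALS = {0: 4, 1: 0}
ACTIONS = (-1, 0, 1)          # move left, wait, move right
STEP_COST = 1.0
COLLISION_COST = 20.0         # penalty for landing on the same node

def clamp(x):
    return max(0, min(N - 1, x))

def step(state, a0, a1):
    p0, p1 = state
    return (clamp(p0 + a0), clamp(p1 + a1))

def best_response(robot, other_policy, iters=60):
    """Value iteration for `robot` over the joint state space,
    holding the other robot's policy fixed (a best response)."""
    states = list(itertools.product(range(N), range(N)))
    V = {s: 0.0 for s in states}
    policy = {s: 0 for s in states}
    for _ in range(iters):
        newV = {}
        for s in states:
            if s[robot] == GOALS[robot]:      # absorbing goal state
                newV[s], policy[s] = 0.0, 0
                continue
            best = None
            for a in ACTIONS:
                acts = [0, 0]
                acts[robot] = a
                acts[1 - robot] = other_policy[s]
                nxt = step(s, acts[0], acts[1])
                cost = STEP_COST + (COLLISION_COST if nxt[0] == nxt[1] else 0.0)
                q = cost + V[nxt]
                if best is None or q < best[0]:
                    best = (q, a)
            newV[s], policy[s] = best
        V = newV
    return policy

# Iterative best response: alternate until both policies stop changing.
policies = [{s: 0 for s in itertools.product(range(N), range(N))}
            for _ in range(2)]
for _ in range(10):
    changed = False
    for i in (0, 1):
        new_pi = best_response(i, policies[1 - i])
        changed |= (new_pi != policies[i])
        policies[i] = new_pi
    if not changed:
        break

# Roll out the resulting joint policy from the start state (0, 4).
state = (0, 4)
traj = [state]
for _ in range(8):
    if state == (GOALS[0], GOALS[1]):
        break
    state = step(state, policies[0][state], policies[1][state])
    traj.append(state)
```

In this toy run the robots coordinate without ever sharing a node: robot 1 waits one step to let robot 0 pass, which mirrors the kind of equilibrium behavior iterative best response is meant to produce.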
Original language: English (US)
Number of pages: 6
State: Published - Oct 1 2015
Event: 5th IFAC Workshop on Distributed Estimation and Control in Networked Systems, NecSys 2015 - Philadelphia, United States
Duration: Sep 10 2015 → Sep 11 2015
All Science Journal Classification (ASJC) codes:
- Control and Systems Engineering