Robot behavioral selection using discrete event language measure

Xi Wang, Jinbo Fu, Peter Lee, Asok Ray

    Research output: Contribution to journal › Article


    Abstract

    This paper proposes a robot behavioral μ-selection method that maximizes a quantitative measure of languages in the discrete-event setting. This approach complements Q-learning (a form of reinforcement learning) that has been widely used in behavioral robotics to learn primitive behaviors. While μ-selection assigns positive and negative weights to the marked states of a deterministic finite-state automaton (DFSA) model of robot operations, Q-learning assigns a reward or penalty to each transition. While the complexity of Q-learning increases exponentially in the number of states and actions, the complexity of μ-selection is polynomial in the number of DFSA states. The paper also presents results of simulation experiments for a robotic scenario to demonstrate the efficacy of the μ-selection method.
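    The polynomial complexity claimed for μ-selection can be illustrated with a minimal sketch of a signed real language measure on a DFSA, following Ray's general formulation (the weight vector `chi`, discounted transition matrix `Pi`, and the toy 3-state automaton below are illustrative assumptions, not taken from the paper):

    ```python
    import numpy as np

    def language_measure(Pi, chi):
        """Signed language measure mu of a DFSA (sketch, assumed formulation).

        chi[i] : signed weight of state i (positive for good marked states,
                 negative for bad ones, zero otherwise).
        Pi[i,j]: event-cost weighted transition matrix with spectral
                 radius < 1, so (I - Pi) is invertible.

        Solves mu = chi + Pi @ mu, i.e. (I - Pi) mu = chi, which is a
        single linear solve -- polynomial (O(n^3)) in the number of states.
        """
        n = len(chi)
        return np.linalg.solve(np.eye(n) - Pi, chi)

    # Toy 3-state automaton: state 2 is a "good" marked state, state 1 "bad".
    # The 0.9 factor keeps the spectral radius below 1.
    Pi = 0.9 * np.array([[0.5, 0.3, 0.2],
                         [0.2, 0.6, 0.2],
                         [0.1, 0.1, 0.8]])
    chi = np.array([0.0, -1.0, 1.0])
    mu = language_measure(Pi, chi)
    ```

    A behavioral selector would then prefer, among candidate supervisors, the one whose controlled automaton yields the largest measure at the initial state.
    
    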

    Original language: English (US)
    Pages (from-to): 5126-5131
    Number of pages: 6
    Journal: Proceedings of the American Control Conference
    Volume: 6
    State: Published - 2004

    All Science Journal Classification (ASJC) codes

    • Control and Systems Engineering
