Robot behavioral selection using discrete event language measure

Xi Wang, Jinbo Fu, Peter Lee, Asok Ray

    Research output: Contribution to journal › Article

    2 Citations (Scopus)

    Abstract

    This paper proposes a robot behavioral μ-selection method that maximizes a quantitative measure of languages in the discrete-event setting. This approach complements Q-learning (also called reinforcement learning), which has been widely used in behavioral robotics to learn primitive behaviors. While μ-selection assigns positive and negative weights to the marked states of a deterministic finite-state automaton (DFSA) model of robot operations, Q-learning assigns a reward/penalty to each transition. Whereas the complexity of Q-learning increases exponentially in the number of states and actions, the complexity of μ-selection is polynomial in the number of DFSA states. The paper also presents results of simulation experiments for a robotic scenario to demonstrate the efficacy of the μ-selection method.
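    The abstract summarizes the method without reproducing the measure itself. In the related discrete-event literature, the signed real measure of the language generated by a DFSA is typically obtained by solving the linear system (I - Pi) mu = chi, where Pi is a substochastic event-cost matrix over the automaton states (each row summing to strictly less than one) and chi is the characteristic vector of positive and negative weights on the marked states. The Python sketch below illustrates that computation on a small, made-up three-state automaton; the matrix Pi, the vector chi, and the closing selection comment are illustrative assumptions, not values or code from the paper.

        # Minimal sketch of a signed language measure, mu = (I - Pi)^(-1) chi, for a DFSA.
        # Pi and chi below are illustrative only; they are not taken from the paper.
        import numpy as np

        # Event-cost matrix Pi: entry (i, j) is the cost assigned to transitions
        # from state i to state j; every row must sum to strictly less than 1 so
        # that (I - Pi) is invertible.
        Pi = np.array([
            [0.30, 0.40, 0.20],
            [0.10, 0.50, 0.30],
            [0.20, 0.20, 0.40],
        ])

        # Characteristic vector chi: positive weight on a "good" marked state,
        # negative weight on a "bad" marked state, zero on unmarked states.
        chi = np.array([0.0, 1.0, -0.5])

        # Language measure of each state, solved as a linear system.
        mu = np.linalg.solve(np.eye(3) - Pi, chi)
        print(mu)  # one signed measure value per DFSA state

        # Behavior selection in this framework would compare candidate behaviors
        # (each inducing its own Pi) and pick the one that maximizes the measure
        # of the initial state; Q-learning instead attaches reward/penalty to
        # individual transitions and learns values by sampling.

    Solving an n-by-n linear system costs on the order of n^3 operations, which is consistent with the abstract's claim that the complexity of μ-selection is polynomial in the number of DFSA states.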

    Original language: English (US)
    Pages (from-to): 5126-5131
    Number of pages: 6
    Journal: Proceedings of the American Control Conference
    Volume: 6
    State: Published - 2004

    Fingerprint

    Finite automata
    Robotics
    Robots
    Reinforcement learning
    Polynomials
    Experiments

    All Science Journal Classification (ASJC) codes

    • Control and Systems Engineering

    Cite this

    @article{28c8b13457e44b3e97e4b492d2baec96,
    title = "Robot behavioral selection using discrete event language measure",
    abstract = "This paper proposes a robot behavioral μ-selection method that maximizes a quantitative measure of languages in the discrete-event setting. This approach complements Q-learning (also called reinforcement learning), which has been widely used in behavioral robotics to learn primitive behaviors. While μ-selection assigns positive and negative weights to the marked states of a deterministic finite-state automaton (DFSA) model of robot operations, Q-learning assigns a reward/penalty to each transition. Whereas the complexity of Q-learning increases exponentially in the number of states and actions, the complexity of μ-selection is polynomial in the number of DFSA states. The paper also presents results of simulation experiments for a robotic scenario to demonstrate the efficacy of the μ-selection method.",
    author = "Xi Wang and Jinbo Fu and Peter Lee and Asok Ray",
    year = "2004",
    language = "English (US)",
    volume = "6",
    pages = "5126--5131",
    journal = "Proceedings of the American Control Conference",
    issn = "0743-1619",
    publisher = "Institute of Electrical and Electronics Engineers Inc.",

    }

    Robot behavioral selection using discrete event language measure. / Wang, Xi; Fu, Jinbo; Lee, Peter; Ray, Asok.

    In: Proceedings of the American Control Conference, Vol. 6, 2004, p. 5126-5131.

    Research output: Contribution to journal › Article

    TY - JOUR

    T1 - Robot behavioral selection using discrete event language measure

    AU - Wang, Xi

    AU - Fu, Jinbo

    AU - Lee, Peter

    AU - Ray, Asok

    PY - 2004

    Y1 - 2004

    N2 - This paper proposes a robot behavioral μ-selection method that maximizes a quantitative measure of languages in the discrete-event setting. This approach complements Q-learning (also called reinforcement learning), which has been widely used in behavioral robotics to learn primitive behaviors. While μ-selection assigns positive and negative weights to the marked states of a deterministic finite-state automaton (DFSA) model of robot operations, Q-learning assigns a reward/penalty to each transition. Whereas the complexity of Q-learning increases exponentially in the number of states and actions, the complexity of μ-selection is polynomial in the number of DFSA states. The paper also presents results of simulation experiments for a robotic scenario to demonstrate the efficacy of the μ-selection method.

    AB - This paper proposes a robot behavioral μ-selection method that maximizes a quantitative measure of languages in the discrete-event setting. This approach complements Q-learning (also called reinforcement learning), which has been widely used in behavioral robotics to learn primitive behaviors. While μ-selection assigns positive and negative weights to the marked states of a deterministic finite-state automaton (DFSA) model of robot operations, Q-learning assigns a reward/penalty to each transition. Whereas the complexity of Q-learning increases exponentially in the number of states and actions, the complexity of μ-selection is polynomial in the number of DFSA states. The paper also presents results of simulation experiments for a robotic scenario to demonstrate the efficacy of the μ-selection method.

    UR - http://www.scopus.com/inward/record.url?scp=8744243178&partnerID=8YFLogxK

    UR - http://www.scopus.com/inward/citedby.url?scp=8744243178&partnerID=8YFLogxK

    M3 - Article

    AN - SCOPUS:8744243178

    VL - 6

    SP - 5126

    EP - 5131

    JO - Proceedings of the American Control Conference

    JF - Proceedings of the American Control Conference

    SN - 0743-1619

    ER -