Training UUV navigation and contact avoidance with reinforcement learning

Eric Homan, Steven Davis, Kenneth Hall, Sarah McClure, John Sustersic, Vijaykrishnan Narayanan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In the last decade, reinforcement learning algorithms have achieved exceptional performance on tasks involving planning, scene interpretation, and exploration. This progress has made the challenging problems of UUV navigation and contact avoidance accessible to a reinforcement learning approach. UUV navigation tasks assume very limited information is available to an agent about its environment. We designed a simulation that realistically restricts the information available to a UUV about its environment, and trained the agent to navigate to a target while avoiding contacts. We found that a Deep Q-Learning with Experience Replay agent was able to learn contact avoidance in an information-limited environment while successfully navigating to an objective.
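The abstract names Deep Q-Learning with Experience Replay as the agent. The core mechanic of experience replay, storing transitions in a buffer and updating from randomly sampled minibatches rather than only the most recent step, can be sketched in a few lines. The grid world, reward values, and tabular Q-function stand-in below are illustrative assumptions for the sketch; they are not the paper's simulation, network architecture, or reward design.

```python
import random
from collections import deque, defaultdict

# Toy 5x5 grid: start at (0, 0), objective at (4, 4), one "contact" cell
# at (2, 2) that must be avoided. All values here are illustrative choices.
GOAL, CONTACT = (4, 4), (2, 2)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(state, a):
    nxt = (min(max(state[0] + ACTIONS[a][0], 0), 4),
           min(max(state[1] + ACTIONS[a][1], 0), 4))
    if nxt == GOAL:
        return nxt, 10.0, True    # reached the objective
    if nxt == CONTACT:
        return nxt, -10.0, True   # collided with a contact
    return nxt, -0.1, False       # small per-step penalty

def train(episodes=400, buffer_size=1000, batch=32,
          alpha=0.2, gamma=0.95, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)               # tabular stand-in for the Q-network
    buffer = deque(maxlen=buffer_size)   # experience replay buffer
    for _ in range(episodes):
        s, done = (0, 0), False
        for _ in range(50):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda i: Q[(s, i)])
            s2, r, done = step(s, a)
            buffer.append((s, a, r, s2, done))
            # replay: update from a random minibatch of stored transitions,
            # not just the transition that was observed last
            for bs, ba, br, bs2, bd in rng.sample(buffer, min(batch, len(buffer))):
                target = br if bd else br + gamma * max(Q[(bs2, i)] for i in range(4))
                Q[(bs, ba)] += alpha * (target - Q[(bs, ba)])
            s = s2
            if done:
                break
    return Q

def greedy_rollout(Q):
    """Follow the learned policy greedily and return the visited path."""
    s, done = (0, 0), False
    path = [s]
    while not done and len(path) < 50:
        a = max(range(4), key=lambda i: Q[(s, i)])
        s, _, done = step(s, a)
        path.append(s)
    return path
```

Sampling uniformly from the buffer breaks the temporal correlation between consecutive transitions, which is the stabilizing effect replay contributes to Q-learning; the tabular table here simply replaces the deep network for compactness.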

Original language: English (US)
Title of host publication: OCEANS 2019 MTS/IEEE Seattle, OCEANS 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9780578576183
State: Published - Oct 2019
Event: 2019 OCEANS MTS/IEEE Seattle, OCEANS 2019 - Seattle, United States
Duration: Oct 27 2019 - Oct 31 2019

Publication series

Name: OCEANS 2019 MTS/IEEE Seattle, OCEANS 2019

Conference

Conference: 2019 OCEANS MTS/IEEE Seattle, OCEANS 2019
Country/Territory: United States
City: Seattle
Period: 10/27/19 - 10/31/19

All Science Journal Classification (ASJC) codes

  • Automotive Engineering
  • Ocean Engineering
  • Acoustics and Ultrasonics
  • Fluid Flow and Transfer Processes
  • Oceanography
