In the last decade, reinforcement learning algorithms have achieved exceptional performance on tasks involving planning, scene interpretation, and exploration. This progress has brought the challenging problems of unmanned underwater vehicle (UUV) navigation and contact avoidance within reach of a reinforcement learning approach. UUV navigation tasks assume that an agent has very limited information about its environment. We designed a simulation that realistically restricts the information available to a UUV about its environment, and trained an agent to navigate to a target while avoiding contacts. We found that an agent using Deep Q-Learning with Experience Replay was able to learn contact avoidance in this information-limited environment while successfully navigating to an objective.
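The core mechanism named above can be sketched in a few lines. The following is a hypothetical illustration, not the authors' implementation: it shows an experience replay buffer and a Q-learning update over sampled minibatches, using a tabular Q function as a stand-in for the deep network an actual DQN agent would use. All names and parameter values here are illustrative assumptions.

```python
import random
from collections import deque, defaultdict

N_ACTIONS = 4  # illustrative action count, e.g. turn left/right, speed up/down

class ReplayBuffer:
    """Fixed-capacity store of (state, action, reward, next_state, done) tuples."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # old transitions are evicted automatically

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniform random sampling breaks the temporal correlation between
        # consecutive transitions, which stabilizes Q-learning updates.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

def q_update(q, batch, alpha=0.1, gamma=0.99):
    """One Q-learning update over a sampled minibatch (tabular stand-in for the network)."""
    for s, a, r, s2, done in batch:
        target = r if done else r + gamma * max(q[(s2, b)] for b in range(N_ACTIONS))
        q[(s, a)] += alpha * (target - q[(s, a)])

# Tiny usage example on synthetic transitions.
random.seed(0)
buf = ReplayBuffer(capacity=100)
q = defaultdict(float)
for t in range(50):
    buf.push((t % 5, random.randrange(N_ACTIONS), 1.0, (t + 1) % 5, False))
q_update(q, buf.sample(8))
```

In a full DQN agent, `q_update` would instead perform a gradient step on a neural network toward the same bootstrapped target.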