In recent work, we developed a strategy called pose-corrected sparsity (PCS) for classifying sonar targets that capably handles noise and occlusion. PCS combines a novel interpretation of a sparsity-inducing spike-and-slab probability distribution, used as a Bayesian prior for class-specific discrimination, with a dictionary learning scheme for localized patch extraction. While PCS shows much promise for sparsity-based sonar ATR, it cannot handle multi-view and multi-channel ATR scenarios, because the observation (test) vector is a single sonar image and sparsity is enforced on the corresponding coefficient vector. A key extension in our future work will therefore be the development of customized multi-task sparsity models that capture matrix and tensor sparsity for sonar ATR; such models can be invaluable for multi-view and multi-sensor/channel ATR.

Synthetic aperture sonar (SAS) images are inherently limited in spatial coverage by the path of the imaging vehicle. SAS imagery typically comes from a stripmap setting in which the vehicle travels in a straight path, imaging one or both perpendicular sides. For sonar ATR, this presents a challenge, as objects can look different depending on their orientation relative to the vehicle. We propose a neural network for sonar ATR that incorporates multiple artificial looks at the seafloor. Through wavenumber-domain manipulation, we claim that one can reverse-engineer different squint-angle images of the seafloor from SAS images. A neural network can then use these sub-aperture SAS images to garner a three-dimensional understanding of targets. Preliminary investigations show promise in limited training scenarios and with actual SAS data.

Finally, our investigations will aim to develop novel prior-information-guided deep learning frameworks for sonar image quality/resolution enhancement.
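The sub-aperture idea can be sketched as follows. This is an illustrative simplification, not the project's actual processing chain: it assumes rectangular, non-overlapping wavenumber bands along the track axis, whereas a practical SAS processor would use tapered windows and account for the true imaging geometry.

```python
import numpy as np

def subaperture_looks(img, n_looks=3):
    """Split a complex SAS image into sub-aperture 'looks' by windowing
    bands of the along-track wavenumber spectrum (axis 0). Each band
    corresponds roughly to a different squint angle."""
    # Forward 2-D FFT; shift so along-track wavenumber zero is centered.
    spec = np.fft.fftshift(np.fft.fft2(img), axes=0)
    # Partition the along-track wavenumber axis into contiguous bands.
    bands = np.array_split(np.arange(img.shape[0]), n_looks)
    looks = []
    for idx in bands:
        sub = np.zeros_like(spec)
        sub[idx, :] = spec[idx, :]  # retain one wavenumber band only
        looks.append(np.fft.ifft2(np.fft.ifftshift(sub, axes=0)))
    return looks

# With rectangular, non-overlapping bands the looks sum back to the
# original image, since the FFT is linear and the bands partition it.
img = np.random.randn(64, 64) + 1j * np.random.randn(64, 64)
looks = subaperture_looks(img, n_looks=3)
```

A stack of such looks could then be fed as channels to a classification network, which is one plausible way to realize the "multiple artificial looks" input described above.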
In a departure from black-box learning approaches, we propose to use the expected structure of sonar images to guide deep learning frameworks toward producing realistic, usable images. Our ultimate goal is the joint design (training) of deep networks that accomplish sonar image enhancement and ATR, so that quality enhancement is strategically shaped to help the subsequent classification task.
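One common way to couple the two tasks during training is a weighted sum of an enhancement fidelity term and a classification term. The sketch below is a hypothetical stand-in (the function name, MSE/cross-entropy choice, and weighting `lam` are illustrative assumptions, not the authors' formulation):

```python
import numpy as np

def joint_loss(enhanced, clean, logits, label, lam=0.5):
    """Joint objective: enhancement fidelity (MSE against a clean
    reference) plus a classification term (cross-entropy on class
    logits), traded off by the weight lam."""
    mse = np.mean((enhanced - clean) ** 2)
    # Numerically stable softmax followed by negative log-likelihood.
    p = np.exp(logits - logits.max())
    p /= p.sum()
    xent = -np.log(p[label] + 1e-12)
    return mse + lam * xent
```

Minimizing such an objective end-to-end is what allows the enhancement stage to be shaped by the downstream classifier rather than optimized for visual quality alone.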
Effective start/end date: 5/1/19 → 5/1/19
- Office of Naval Research: $325,000.00