Symbolic target detection in SAR imagery via rotationally invariant-weighted feature extraction

Bradley W. Harris, Michael W. Milo, Michael J. Roan

Research output: Contribution to journal › Article › peer-review



This article introduces the application of a physics-based symbolic image partitioning method to detect targets in synthetic aperture radar (SAR) imagery. 'Targets' in this case refer to vehicular objects which produce a distinct radar return pattern and have spatial characteristics that are known a priori. The proposed Rotationally Invariant Symbolic Histogram (RISH) detection method co-analyzes both target and speckle statistics, and significantly reduces computational requirements by partitioning the data into a discrete number of state representations. RISH requires only one pass for robust detection, unlike other SAR detection methods which rely on difference metrics calculated from multiple passes. To improve performance in high-resolution data, RISH uses a weighted feature extraction algorithm to avoid the common requirement of processing each pixel of the image equally. The weighted structure extracts geometrically undefined and rotationally invariant target features. This article details the analysis of 24 experimentally obtained very high-frequency (VHF)-band SAR magnitude images using this novel approach to SAR target detection. In localizing small (~8.4 m²) foliage-concealed targets, without the aid of pre-processing, this method achieves high detection performance (90% true positive) with a low false-alarm (Type-I error) rate of 6.4 false alarms per 1 × 10⁶ m². With the addition of change detection, RISH lowers the error rate by 85%.
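The core idea described above — quantizing pixel magnitudes into a discrete number of symbolic states and summarizing them with a histogram that does not depend on orientation — can be illustrated with a minimal sketch. This is not the authors' RISH algorithm (which also weights features and co-analyzes target/speckle statistics); the bin count and speckle model below are hypothetical choices for illustration only:

```python
import numpy as np

def symbolic_histogram(image, n_states=8):
    """Quantize pixel magnitudes into discrete symbolic states and
    return the normalized histogram of state occupancy.

    A rotation of the image permutes pixel positions but not their
    magnitudes, so this occupancy histogram is rotationally invariant.
    """
    # Partition the magnitude range into n_states equal-width bins.
    edges = np.linspace(image.min(), image.max(), n_states + 1)
    # Map each pixel to a state index in 0..n_states-1.
    states = np.digitize(image, edges[1:-1])
    counts = np.bincount(states.ravel(), minlength=n_states)
    return counts / counts.sum()

# Rayleigh-distributed magnitudes as a crude stand-in for SAR speckle.
rng = np.random.default_rng(0)
img = rng.rayleigh(scale=1.0, size=(64, 64))

h = symbolic_histogram(img, n_states=8)
h_rot = symbolic_histogram(np.rot90(img), n_states=8)
assert np.allclose(h, h_rot)  # histogram unchanged under rotation
```

Because detection then compares low-dimensional state histograms rather than raw pixel neighborhoods, the per-region cost is small, which is consistent with the computational savings the abstract attributes to discrete state representations.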

Original language: English (US)
Pages (from-to): 8724-8740
Number of pages: 17
Journal: International Journal of Remote Sensing
Issue number: 24
State: Published - Dec 2013

All Science Journal Classification (ASJC) codes

  • Earth and Planetary Sciences (all)

