TY - JOUR
T1 - A Novel Framework for Evaluating Performance-Estimation Models
AU - Williams, David P.
N1 - Funding Information:
Dr. Williams was a recipient of a James B. Duke Graduate Fellowship and a National Defense Science and Engineering Graduate Fellowship, and twice received the Excellence in Review award from the IEEE JOURNAL OF OCEANIC ENGINEERING.
Funding Information:
Manuscript received June 14, 2018; revised November 5, 2018 and January 3, 2019; accepted January 30, 2019. Date of publication March 6, 2019; date of current version July 22, 2019. This work was supported by the United States Office of Naval Research and the NATO Allied Command Transformation.
Publisher Copyright:
© 2019 IEEE.
PY - 2019
Y1 - 2019
N2 - A general framework for quantifying the worth of a performance-estimation model is proposed. The purpose of the model is to predict the performance of an automatic target recognition algorithm on a given set of test data, while the purpose of the framework is to quantify how well the model fulfills its task. To this end, a quantity referred to as the utility, which is based on the Kullback-Leibler divergence, is introduced. A key aspect of the framework is the inclusion of a significance function that specifies the relative importance of each point in the performance space, here assumed to be defined in terms of false alarm rate and probability of detection. Example significance functions are suggested and discussed. The functionality of the proposed framework is demonstrated on an underwater target detection application involving measured synthetic aperture sonar data. In this context, an image complexity metric is exploited to enable the development of models corresponding to different seafloor conditions and mine-hunting difficulty. The appeal of the framework is its ability to quantitatively assess the utility of competing performance-estimation models and to fairly compare the utility of a model on different test data sets.
AB - A general framework for quantifying the worth of a performance-estimation model is proposed. The purpose of the model is to predict the performance of an automatic target recognition algorithm on a given set of test data, while the purpose of the framework is to quantify how well the model fulfills its task. To this end, a quantity referred to as the utility, which is based on the Kullback-Leibler divergence, is introduced. A key aspect of the framework is the inclusion of a significance function that specifies the relative importance of each point in the performance space, here assumed to be defined in terms of false alarm rate and probability of detection. Example significance functions are suggested and discussed. The functionality of the proposed framework is demonstrated on an underwater target detection application involving measured synthetic aperture sonar data. In this context, an image complexity metric is exploited to enable the development of models corresponding to different seafloor conditions and mine-hunting difficulty. The appeal of the framework is its ability to quantitatively assess the utility of competing performance-estimation models and to fairly compare the utility of a model on different test data sets.
UR - http://www.scopus.com/inward/record.url?scp=85069794291&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85069794291&partnerID=8YFLogxK
U2 - 10.1109/TGRS.2019.2898352
DO - 10.1109/TGRS.2019.2898352
M3 - Article
AN - SCOPUS:85069794291
SN - 0196-2892
VL - 57
SP - 5285
EP - 5302
JO - IEEE Transactions on Geoscience and Remote Sensing
JF - IEEE Transactions on Geoscience and Remote Sensing
IS - 8
M1 - 8661781
ER -