Multimodal task-driven dictionary learning for image classification

Soheil Bahrampour, Nasser M. Nasrabadi, Asok Ray, William Kenneth Jenkins

Research output: Contribution to journal › Article

72 Citations (Scopus)

Abstract

Dictionary learning algorithms have been successfully used for both reconstructive and discriminative tasks, where an input signal is represented with a sparse linear combination of dictionary atoms. While these methods are mostly developed for single-modality scenarios, recent studies have demonstrated the advantages of feature-level fusion based on the joint sparse representation of the multimodal inputs. In this paper, we propose a multimodal task-driven dictionary learning algorithm under the joint sparsity constraint (prior) to enforce collaborations among multiple homogeneous/heterogeneous sources of information. In this task-driven formulation, the multimodal dictionaries are learned simultaneously with their corresponding classifiers. The resulting multimodal dictionaries can generate discriminative latent features (sparse codes) from the data that are optimized for a given task such as binary or multiclass classification. Moreover, we present an extension of the proposed formulation using a mixed joint and independent sparsity prior, which facilitates more flexible fusion of the modalities at feature level. The efficacy of the proposed algorithms for multimodal classification is illustrated on four different applications - multimodal face recognition, multi-view face recognition, multi-view action recognition, and multimodal biometric recognition. It is also shown that, compared with the counterpart reconstructive-based dictionary learning algorithms, the task-driven formulations are more computationally efficient in the sense that they can be equipped with more compact dictionaries and still achieve superior performance.
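
In standard notation, the joint sparse coding step described in the abstract corresponds to a group-sparse model: given S modalities with dictionaries D_s and inputs x_s, one solves (a reconstruction of the standard formulation, not necessarily the paper's exact equations)

    \min_{A} \; \frac{1}{2} \sum_{s=1}^{S} \| x_s - D_s a_s \|_2^2 + \lambda \| A \|_{1,2},

where A = [a_1, ..., a_S] collects the per-modality codes as columns and \|A\|_{1,2} is the sum of the l2 norms of the rows of A. Penalizing whole rows drives all modalities to activate the same dictionary atoms, which is what "joint sparsity" means here; the mixed joint-and-independent prior mentioned in the abstract can be realized, as in the sparse group lasso, by adding an extra \lambda_2 \|A\|_1 term so each modality may also keep a few private atoms. In the task-driven setting (in the spirit of Mairal et al.'s task-driven dictionary learning), the dictionaries and their classifiers are then optimized jointly by stochastic gradient descent on a classification loss evaluated at the minimizing codes.

The following minimal Python sketch implements only the joint sparse coding step via proximal gradient descent (ISTA); the function names, parameters, and toy data are illustrative assumptions, not the authors' code.

    import numpy as np

    def prox_l12(A, t):
        # Row-wise group soft-thresholding: the proximal operator of
        # t * sum_j ||A[j, :]||_2, the l1/l2 joint-sparsity penalty.
        norms = np.linalg.norm(A, axis=1, keepdims=True)
        return np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12)) * A

    def joint_sparse_code(Ds, xs, lam=0.1, n_iter=300):
        # Minimize 0.5 * sum_s ||x_s - D_s A[:, s]||^2 + lam * ||A||_{1,2}
        # by ISTA. Ds: list of (d_s, k) dictionaries; xs: list of (d_s,) inputs.
        k = Ds[0].shape[1]
        A = np.zeros((k, len(Ds)))
        step = 1.0 / max(np.linalg.norm(D, 2) ** 2 for D in Ds)  # 1 / Lipschitz
        for _ in range(n_iter):
            # Gradient of the smooth data-fidelity term, one column per modality
            G = np.column_stack([D.T @ (D @ a - x)
                                 for D, x, a in zip(Ds, xs, A.T)])
            A = prox_l12(A - step * G, step * lam)
        return A

    # Toy usage: two modalities sharing a 40-atom joint support
    rng = np.random.default_rng(0)
    Ds = [rng.standard_normal((25, 40)) for _ in range(2)]
    Ds = [D / np.linalg.norm(D, axis=0) for D in Ds]  # unit-norm atoms
    xs = [rng.standard_normal(25) for _ in range(2)]
    A = joint_sparse_code(Ds, xs, lam=0.2)
    print("atoms active across modalities:",
          np.flatnonzero(np.linalg.norm(A, axis=1) > 1e-8))

The row-sparse code matrix A (or its row norms) serves as the fused feature representation passed to the classifier; the task-driven variant would additionally backpropagate the classification loss through this minimizer to update each D_s.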

Original language: English (US)
Article number: 7312977
Pages (from-to): 24-38
Number of pages: 15
Journal: IEEE Transactions on Image Processing
Volume: 25
Issue number: 1
DOIs: 10.1109/TIP.2015.2496275
State: Published - Jan 2016

Fingerprint

Image classification
Dictionary learning
Learning algorithms
Face recognition
Feature-level fusion
Biometrics
Classifiers
Dictionary atoms

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Graphics and Computer-Aided Design

Cite this

Bahrampour, Soheil; Nasrabadi, Nasser M.; Ray, Asok; Jenkins, William Kenneth. Multimodal task-driven dictionary learning for image classification. IEEE Transactions on Image Processing. 2016;25(1):24-38.
@article{af2bcec51ce4471686c432c57a3a3e98,
title = "Multimodal task-driven dictionary learning for image classification",
author = "Soheil Bahrampour and Nasrabadi, {Nasser M.} and Asok Ray and Jenkins, {William Kenneth}",
year = "2016",
month = jan,
doi = "10.1109/TIP.2015.2496275",
language = "English (US)",
volume = "25",
pages = "24--38",
journal = "IEEE Transactions on Image Processing",
issn = "1057-7149",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "1",
}

TY  - JOUR
T1  - Multimodal task-driven dictionary learning for image classification
AU  - Bahrampour, Soheil
AU  - Nasrabadi, Nasser M.
AU  - Ray, Asok
AU  - Jenkins, William Kenneth
PY  - 2016/1
Y1  - 2016/1
UR  - http://www.scopus.com/inward/record.url?scp=85004059517&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=85004059517&partnerID=8YFLogxK
U2  - 10.1109/TIP.2015.2496275
DO  - 10.1109/TIP.2015.2496275
M3  - Article
C2  - 26540686
AN  - SCOPUS:85004059517
VL  - 25
SP  - 24
EP  - 38
JO  - IEEE Transactions on Image Processing
JF  - IEEE Transactions on Image Processing
SN  - 1057-7149
IS  - 1
M1  - 7312977
ER  -