A first-order augmented Lagrangian method for compressed sensing

N. S. Aybat, G. Iyengar

Research output: Contribution to journal › Article

16 Citations (Scopus)

Abstract

We propose a first-order augmented Lagrangian (FAL) algorithm for solving the basis pursuit problem. FAL computes a solution to this problem by inexactly solving a sequence of ℓ1-regularized least squares subproblems. These subproblems are solved using an infinite memory proximal gradient algorithm wherein each update reduces to "shrinkage" or constrained "shrinkage." We show that FAL converges to an optimal solution of the basis pursuit problem whenever the solution is unique, which is the case with very high probability for compressed sensing problems. We construct a parameter sequence such that the corresponding FAL iterates are ε-feasible and ε-optimal for all ε > 0 within O(log(ε⁻¹)) FAL iterations. Moreover, FAL requires at most O(ε⁻¹) matrix-vector multiplications of the form Ax or Aᵀy to compute an ε-feasible, ε-optimal solution. We show that FAL can be easily extended to solve the basis pursuit denoising problem when there is a nontrivial level of noise on the measurements. We report the results of numerical experiments comparing FAL with state-of-the-art solvers for both noisy and noiseless compressed sensing problems. A striking property we observed in experiments with randomly generated noise-free instances is that, for moderately small error tolerances, FAL always correctly identifies the support of the target signal without any thresholding or postprocessing.
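The abstract notes that each subproblem update reduces to "shrinkage," i.e. the soft-thresholding operator. As an illustrative sketch only (not the authors' FAL implementation; the function names `shrink` and `l1_ls_prox_grad` are hypothetical), a plain proximal gradient loop for an ℓ1-regularized least squares subproblem of the form min λ‖x‖₁ + ½‖Ax − b‖² looks like:

```python
import numpy as np

def shrink(v, t):
    # soft-thresholding ("shrinkage"): proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_ls_prox_grad(A, b, lam, iters=500):
    # proximal gradient (ISTA) for: min lam*||x||_1 + 0.5*||Ax - b||^2
    L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)        # one multiplication by A and one by A^T
        x = shrink(x - g / L, lam / L)
    return x
```

Each iteration costs one multiplication by A and one by Aᵀ, which is why the paper counts complexity in matrix-vector products; FAL itself wraps such solves in an augmented Lagrangian outer loop with an updated multiplier and penalty sequence.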

Original language: English (US)
Pages (from-to): 429-459
Number of pages: 31
Journal: SIAM Journal on Optimization
Volume: 22
Issue number: 2
DOI: 10.1137/100786721
State: Published - Sep 7 2012


All Science Journal Classification (ASJC) codes

  • Software
  • Theoretical Computer Science

Cite this

@article{0cda81f50c554451b325b8edb13d90a1,
title = "A first-order augmented Lagrangian method for compressed sensing",
abstract = "We propose a first-order augmented Lagrangian (FAL) algorithm for solving the basis pursuit problem. FAL computes a solution to this problem by inexactly solving a sequence of $\ell_1$-regularized least squares subproblems. These subproblems are solved using an infinite memory proximal gradient algorithm wherein each update reduces to {"}shrinkage{"} or constrained {"}shrinkage.{"} We show that FAL converges to an optimal solution of the basis pursuit problem whenever the solution is unique, which is the case with very high probability for compressed sensing problems. We construct a parameter sequence such that the corresponding FAL iterates are $\epsilon$-feasible and $\epsilon$-optimal for all $\epsilon > 0$ within $O(\log(\epsilon^{-1}))$ FAL iterations. Moreover, FAL requires at most $O(\epsilon^{-1})$ matrix-vector multiplications of the form $Ax$ or $A^T y$ to compute an $\epsilon$-feasible, $\epsilon$-optimal solution. We show that FAL can be easily extended to solve the basis pursuit denoising problem when there is a nontrivial level of noise on the measurements. We report the results of numerical experiments comparing FAL with state-of-the-art solvers for both noisy and noiseless compressed sensing problems. A striking property we observed in experiments with randomly generated noise-free instances is that, for moderately small error tolerances, FAL always correctly identifies the support of the target signal without any thresholding or postprocessing.",
author = "Aybat, {N. S.} and G. Iyengar",
year = "2012",
month = "9",
day = "7",
doi = "10.1137/100786721",
language = "English (US)",
volume = "22",
pages = "429--459",
journal = "SIAM Journal on Optimization",
issn = "1052-6234",
publisher = "Society for Industrial and Applied Mathematics Publications",
number = "2",

}

A first-order augmented Lagrangian method for compressed sensing. / Aybat, N. S.; Iyengar, G.

In: SIAM Journal on Optimization, Vol. 22, No. 2, 07.09.2012, p. 429-459.

Research output: Contribution to journal › Article

TY - JOUR

T1 - A first-order augmented Lagrangian method for compressed sensing

AU - Aybat, N. S.

AU - Iyengar, G.

PY - 2012/9/7

Y1 - 2012/9/7

N2 - We propose a first-order augmented Lagrangian (FAL) algorithm for solving the basis pursuit problem. FAL computes a solution to this problem by inexactly solving a sequence of ℓ1-regularized least squares subproblems. These subproblems are solved using an infinite memory proximal gradient algorithm wherein each update reduces to "shrinkage" or constrained "shrinkage." We show that FAL converges to an optimal solution of the basis pursuit problem whenever the solution is unique, which is the case with very high probability for compressed sensing problems. We construct a parameter sequence such that the corresponding FAL iterates are ε-feasible and ε-optimal for all ε > 0 within O(log(ε⁻¹)) FAL iterations. Moreover, FAL requires at most O(ε⁻¹) matrix-vector multiplications of the form Ax or Aᵀy to compute an ε-feasible, ε-optimal solution. We show that FAL can be easily extended to solve the basis pursuit denoising problem when there is a nontrivial level of noise on the measurements. We report the results of numerical experiments comparing FAL with state-of-the-art solvers for both noisy and noiseless compressed sensing problems. A striking property we observed in experiments with randomly generated noise-free instances is that, for moderately small error tolerances, FAL always correctly identifies the support of the target signal without any thresholding or postprocessing.

AB - We propose a first-order augmented Lagrangian (FAL) algorithm for solving the basis pursuit problem. FAL computes a solution to this problem by inexactly solving a sequence of ℓ1-regularized least squares subproblems. These subproblems are solved using an infinite memory proximal gradient algorithm wherein each update reduces to "shrinkage" or constrained "shrinkage." We show that FAL converges to an optimal solution of the basis pursuit problem whenever the solution is unique, which is the case with very high probability for compressed sensing problems. We construct a parameter sequence such that the corresponding FAL iterates are ε-feasible and ε-optimal for all ε > 0 within O(log(ε⁻¹)) FAL iterations. Moreover, FAL requires at most O(ε⁻¹) matrix-vector multiplications of the form Ax or Aᵀy to compute an ε-feasible, ε-optimal solution. We show that FAL can be easily extended to solve the basis pursuit denoising problem when there is a nontrivial level of noise on the measurements. We report the results of numerical experiments comparing FAL with state-of-the-art solvers for both noisy and noiseless compressed sensing problems. A striking property we observed in experiments with randomly generated noise-free instances is that, for moderately small error tolerances, FAL always correctly identifies the support of the target signal without any thresholding or postprocessing.

UR - http://www.scopus.com/inward/record.url?scp=84865691267&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84865691267&partnerID=8YFLogxK

U2 - 10.1137/100786721

DO - 10.1137/100786721

M3 - Article

AN - SCOPUS:84865691267

VL - 22

SP - 429

EP - 459

JO - SIAM Journal on Optimization

JF - SIAM Journal on Optimization

SN - 1052-6234

IS - 2

ER -