### Abstract

We propose a first-order augmented Lagrangian (FAL) algorithm for solving the basis pursuit problem. FAL computes a solution to this problem by inexactly solving a sequence of ℓ₁-regularized least squares subproblems. These subproblems are solved using an infinite memory proximal gradient algorithm wherein each update reduces to "shrinkage" or constrained "shrinkage." We show that FAL converges to an optimal solution of the basis pursuit problem whenever the solution is unique, which is the case with very high probability for compressed sensing problems. We construct a parameter sequence such that the corresponding FAL iterates are ε-feasible and ε-optimal for all ε > 0 within O(log(ε⁻¹)) FAL iterations. Moreover, FAL requires at most O(ε⁻¹) matrix-vector multiplications of the form Ax or Aᵀy to compute an ε-feasible, ε-optimal solution. We show that FAL extends easily to the basis pursuit denoising problem when there is a nontrivial level of noise on the measurements. We report the results of numerical experiments comparing FAL with state-of-the-art solvers on both noisy and noiseless compressed sensing problems. A striking property observed in the experiments with randomly generated noiseless instances is that, for moderately small error tolerances, FAL always correctly identified the support of the target signal without any thresholding or postprocessing.
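The "shrinkage" update mentioned in the abstract is the soft-thresholding operator, i.e., the proximal map of the ℓ₁ norm, and each FAL subproblem is an ℓ₁-regularized least squares problem. A minimal sketch of both, using plain proximal gradient (ISTA) rather than the paper's accelerated infinite-memory variant; function names, the step-size choice, and the iteration count are illustrative, not the authors' implementation:

```python
import numpy as np

def shrinkage(x, t):
    # Soft-thresholding: the proximal map of t * ||.||_1,
    # applied componentwise.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_l1_ls(A, b, lam, n_iter=500):
    # Plain proximal-gradient (ISTA) sketch of an l1-regularized
    # least-squares subproblem:  min_x  lam*||x||_1 + 0.5*||Ax - b||^2.
    # Step size 1/L with L = ||A||_2^2, the Lipschitz constant of the
    # gradient of the smooth term; each iteration is one shrinkage step.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = shrinkage(x - grad / L, lam / L)
    return x
```

Each iteration costs two matrix-vector products (Ax and Aᵀy), which is why the paper's complexity bound is naturally stated in terms of such multiplications.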

| Original language | English (US) |
|---|---|
| Pages (from-to) | 429-459 |
| Number of pages | 31 |
| Journal | SIAM Journal on Optimization |
| Volume | 22 |
| Issue number | 2 |
| DOIs | https://doi.org/10.1137/100786721 |
| ISSN | 1052-6234 |
| State | Published - Sep 7 2012 |


### All Science Journal Classification (ASJC) codes

- Software
- Theoretical Computer Science

### Cite this

Aybat, N. S., & Iyengar, G. (2012). A first-order augmented Lagrangian method for compressed sensing. *SIAM Journal on Optimization*, *22*(2), 429-459. https://doi.org/10.1137/100786721

Research output: Contribution to journal › Article
