Results 1–10 of 92
Templates for Convex Cone Problems with Applications to Sparse Signal Recovery
, 2010
"... This paper develops a general framework for solving a variety of convex cone problems that frequently arise in signal processing, machine learning, statistics, and other fields. The approach works as follows: first, determine a conic formulation of the problem; second, determine its dual; third, app ..."
Abstract

Cited by 122 (6 self)
 Add to MetaCart
This paper develops a general framework for solving a variety of convex cone problems that frequently arise in signal processing, machine learning, statistics, and other fields. The approach works as follows: first, determine a conic formulation of the problem; second, determine its dual; third, apply smoothing; and fourth, solve using an optimal first-order method. A merit of this approach is its flexibility: for example, all compressed sensing problems can be solved via this approach. These include models with objective functionals such as the total-variation norm, ‖Wx‖₁ where W is arbitrary, or a combination thereof. In addition, the paper introduces a number of technical contributions such as a novel continuation scheme, a novel approach for controlling the step size, and some new results showing that the smoothed and unsmoothed problems are sometimes formally equivalent. Combined with our framework, these lead to novel, stable and computationally efficient algorithms. For instance, our general implementation is competitive with state-of-the-art methods for solving intensively studied problems such as the LASSO. Further, numerical experiments show that one can solve the Dantzig selector problem, for which no efficient large-scale solvers exist, in a few hundred iterations. Finally, the paper is accompanied by a software release. This software is not a single, monolithic solver; rather, it is a suite of programs and routines designed to serve as building blocks for constructing complete algorithms. Keywords: optimal first-order methods, Nesterov's accelerated descent algorithms, proximal algorithms, conic duality, smoothing by conjugation, the Dantzig selector, the LASSO, nuclear-norm minimization.
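The last of the four steps above is an optimal first-order method. As a hedged illustration of that building block only (not the paper's conic solver; `fista_lasso` and `soft_threshold` are illustrative names, assuming NumPy), here is a minimal accelerated proximal-gradient (FISTA) sketch for the LASSO:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_lasso(A, b, lam, n_iter=500):
    """Accelerated proximal gradient (FISTA) for min 0.5*||Ax-b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)           # gradient of 0.5*||Ay - b||^2
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x
```

The same prox/gradient split is what the smoothed-dual framework described in the abstract ultimately hands to an optimal first-order solver.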
A parallel inertial proximal optimization method
 Pac. J. Optim
, 2012
"... The DouglasRachford algorithm is a popular iterative method for finding a zero of a sum of two maximally monotone operators defined on a Hilbert space. In this paper, we propose an extension of this algorithm including inertia parameters and develop parallel versions to deal with the case of a sum ..."
Abstract

Cited by 37 (14 self)
 Add to MetaCart
(Show Context)
The Douglas-Rachford algorithm is a popular iterative method for finding a zero of the sum of two maximally monotone operators defined on a Hilbert space. In this paper, we propose an extension of this algorithm that includes inertia parameters and develop parallel versions to deal with the case of a sum of an arbitrary number of maximally monotone operators. Based on this algorithm, parallel proximal algorithms are proposed to minimize, over a linear subspace of a Hilbert space, the sum of a finite number of proper, lower semicontinuous convex functions composed with linear operators. It is shown that particular cases of these methods are the simultaneous direction method of multipliers proposed by Setzer et al., the parallel proximal algorithm developed by Combettes and Pesquet, and a parallelized version of an algorithm proposed by Attouch and Soueycatt.
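The plain (non-inertial) Douglas-Rachford iteration that the abstract refers to fits in a few lines. This toy sketch, assuming NumPy, applies it to a two-set feasibility problem, where the proximal operators of the indicator functions reduce to projections:

```python
import numpy as np

def douglas_rachford(prox_f, prox_g, z0, n_iter=100):
    """Plain (non-inertial) Douglas-Rachford iteration for minimizing f + g:
    z <- z + prox_g(2*prox_f(z) - z) - prox_f(z); prox_f(z) converges to a solution."""
    z = np.asarray(z0, dtype=float)
    for _ in range(n_iter):
        x = prox_f(z)                       # proximal half-step on f
        z = z + prox_g(2.0 * x - z) - x     # reflected step on g
        # (the inertial extension adds momentum terms to this update)
    return prox_f(z)
```

For two affine sets the projections are explicit, so the iteration finds a point in their intersection.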
Distributed basis pursuit
 IEEE Trans. Sig. Proc
, 2012
"... Abstract—We propose a distributed algorithm for solving the optimization problem Basis Pursuit (BP). BP finds the leastnorm solution of the underdetermined linear system and is used, for example, in compressed sensing for reconstruction. Our algorithm solves BP on a distributed platform such as a s ..."
Abstract

Cited by 28 (6 self)
 Add to MetaCart
(Show Context)
Abstract—We propose a distributed algorithm for solving the optimization problem Basis Pursuit (BP). BP finds the least-norm solution of an underdetermined linear system and is used, for example, for reconstruction in compressed sensing. Our algorithm solves BP on a distributed platform such as a sensor network and is designed to minimize the communication between nodes. The algorithm only requires the network to be connected, has no notion of a central processing node, and no node has access to the entire matrix at any time. We consider two scenarios in which either the columns or the rows of the matrix are distributed among the compute nodes. Our algorithm, named D-ADMM, is a decentralized implementation of the alternating direction method of multipliers. We show through numerical simulation that our algorithm requires considerably less communication between the nodes than state-of-the-art algorithms. Index Terms—Augmented Lagrangian, basis pursuit (BP), distributed optimization, sensor networks.
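For context, the centralized ADMM that such a distributed scheme decentralizes can be sketched directly. This is a hedged toy version (assuming NumPy; `admm_basis_pursuit` is an illustrative name, not the paper's D-ADMM), using the splitting x = z where x absorbs the equality constraint and z the ℓ1 term:

```python
import numpy as np

def admm_basis_pursuit(A, b, rho=1.0, n_iter=1000):
    """Centralized ADMM for Basis Pursuit: min ||x||_1 s.t. Ax = b,
    with the splitting x = z (x handles the constraint, z the l1 term)."""
    m, n = A.shape
    pinv = A.T @ np.linalg.inv(A @ A.T)     # used to project onto {x : Ax = b}
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(n_iter):
        v = z - u
        x = v - pinv @ (A @ v - b)          # x-update: projection onto the constraint
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - 1.0 / rho, 0.0)  # z-update: soft-threshold
        u = u + x - z                       # dual (scaled multiplier) update
    return z
```

The distributed variant described in the abstract replaces the global x-update with per-node computations plus neighbor communication.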
BM3D frames and variational image deblurring
, 1106
"... Abstract—A family of the Block Matching 3D (BM3D) algorithms for various imaging problems has been recently proposed within the framework of nonlocal patchwise image modeling [1], [2]. In this paper we construct analysis and synthesis frames, formalizing the BM3D image modeling and use these frame ..."
Abstract

Cited by 28 (8 self)
 Add to MetaCart
(Show Context)
Abstract—A family of Block Matching 3D (BM3D) algorithms for various imaging problems has recently been proposed within the framework of nonlocal patch-wise image modeling [1], [2]. In this paper we construct analysis and synthesis frames, formalizing the BM3D image modeling, and use these frames to develop novel iterative deblurring algorithms. We consider two different formulations of the deblurring problem: one given by minimization of a single objective function and another based on the Nash equilibrium balance of two objective functions. The latter results in an algorithm where the denoising and deblurring operations are decoupled. The convergence of the developed algorithms is proved. Simulation experiments show that the decoupled algorithm derived from the Nash equilibrium formulation demonstrates the best numerical and visual results and shows superiority with respect to the state of the art in the field, confirming the valuable potential of BM3D-frames as an advanced image modeling tool.
A splitting-based iterative algorithm for accelerated statistical X-ray CT reconstruction
 IEEE Transactions on Medical Imaging
, 2012
"... Abstract—Statistical image reconstruction using penalized weighted leastsquares (PWLS) criteria can improve imagequality in Xray CT. However, the huge dynamic range of the statistical weights leads to a highly shiftvariant inverse problem making it difficult to precondition and accelerate existi ..."
Abstract

Cited by 27 (8 self)
 Add to MetaCart
(Show Context)
Abstract—Statistical image reconstruction using penalized weighted least-squares (PWLS) criteria can improve image quality in X-ray CT. However, the huge dynamic range of the statistical weights leads to a highly shift-variant inverse problem, making it difficult to precondition and accelerate existing iterative algorithms that attack the statistical model directly. We propose to alleviate the problem by using a variable-splitting scheme that separates the shift-variant and (“nearly”) invariant components of the statistical data model and also decouples the regularization term. This leads to an equivalent constrained problem that we tackle using the classical method-of-multipliers framework with alternating minimization. The specific form of our splitting yields an alternating direction method of multipliers (ADMM) algorithm with an inner step involving a “nearly” shift-invariant linear system that is suitable for FFT-based preconditioning using cone-type filters. The proposed method can efficiently handle a variety of convex regularization criteria, including smooth edge-preserving regularizers and nonsmooth sparsity-promoting ones based on the ℓ1-norm and total variation. Numerical experiments with synthetic and real in vivo human data illustrate that cone-filter preconditioners accelerate the proposed ADMM, resulting in fast convergence compared to conventional (nonlinear conjugate gradient, ordered subsets) and state-of-the-art (MFISTA, split-Bregman) algorithms that are applicable for CT.
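What makes FFT-based preconditioning of the inner system possible is the classical fact that the DFT diagonalizes shift-invariant (circulant) operators. A minimal NumPy sketch of that fact, on a toy circulant system rather than the CT model described above:

```python
import numpy as np

def solve_circulant(first_col, rhs):
    """Solve C x = rhs for a circulant matrix C (given by its first column):
    the DFT diagonalizes C, so the solve is two FFTs and a pointwise division."""
    eig = np.fft.fft(first_col)             # eigenvalues of C
    return np.real(np.fft.ifft(np.fft.fft(rhs) / eig))
```

A "nearly" shift-invariant system is then well approximated by such an operator, which is exactly what makes it an effective preconditioner.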
Structured sparsity via alternating directions methods
 JMLR
"... We consider a class of sparse learning problems in high dimensional feature space regularized by a structured sparsityinducing norm that incorporates prior knowledge of the group structure of the features. Such problems often pose a considerable challenge to optimization algorithms due to the nons ..."
Abstract

Cited by 23 (1 self)
 Add to MetaCart
We consider a class of sparse learning problems in high-dimensional feature space regularized by a structured sparsity-inducing norm that incorporates prior knowledge of the group structure of the features. Such problems often pose a considerable challenge to optimization algorithms due to the nonsmoothness and nonseparability of the regularization term. In this paper, we focus on two commonly adopted sparsity-inducing regularization terms, the overlapping Group Lasso penalty ℓ1/ℓ2-norm and the ℓ1/ℓ∞-norm. We propose a unified framework based on the augmented Lagrangian method, under which problems with both types of regularization and their variants can be efficiently solved. As one of the core building blocks of this framework, we develop new algorithms using a partial-linearization/splitting technique and prove that the accelerated versions of these algorithms require O(1/√ε) iterations to obtain an ε-optimal solution. We compare the performance of these algorithms against that of the alternating direction augmented Lagrangian and FISTA methods on a collection of data sets and apply them to two real-world problems to compare the relative merits of the two norms.
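For the non-overlapping special case of the group ℓ1/ℓ2 penalty, the proximal operator is explicit (block soft-thresholding); it is the overlapping case that requires the splitting machinery the abstract describes. A minimal sketch of that explicit building block, assuming NumPy (`prox_group_l2` is an illustrative name):

```python
import numpy as np

def prox_group_l2(v, groups, t):
    """Proximal operator of t * sum_g ||v_g||_2 for NON-overlapping index
    groups: block soft-thresholding of each group."""
    out = np.asarray(v, dtype=float).copy()
    for g in groups:
        norm_g = np.linalg.norm(out[g])
        # shrink the whole group toward zero; kill it if its norm is below t
        out[g] = 0.0 if norm_g <= t else (1.0 - t / norm_g) * out[g]
    return out
```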
Accelerated dynamic MRI exploiting sparsity and low-rank structure: k-t SLR
"... We introduce a novel algorithm to reconstruct dynamic MRI data from undersampled kt space data. In contrast to classical model based cine MRI schemes that rely on the sparsity or banded structure in Fourier space, we use the compact representation of the data in the Karhunen Louve transform (KLT) ..."
Abstract

Cited by 22 (3 self)
 Add to MetaCart
(Show Context)
We introduce a novel algorithm to reconstruct dynamic MRI data from undersampled k-t space data. In contrast to classical model-based cine MRI schemes that rely on the sparsity or banded structure in Fourier space, we use the compact representation of the data in the Karhunen-Loève transform (KLT) domain to exploit the correlations in the dataset. The use of the data-dependent KL transform makes our approach ideally suited to a range of dynamic imaging problems, even when the motion is not periodic. In comparison to current KLT-based methods that rely on a two-step approach to first estimate the basis functions and then use them for reconstruction, we pose the problem as a spectrally regularized matrix recovery problem. By simultaneously determining the temporal basis functions and their spatial weights from the entire measured data, the proposed scheme is capable of providing high-quality reconstructions at a range of accelerations. In addition to using the compact representation in the KLT domain, we also exploit the sparsity of the data to further improve the recovery rate. Validations using numerical phantoms and in vivo cardiac perfusion MRI data demonstrate the significant improvement in performance offered by the proposed scheme over existing methods.
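Spectrally regularized matrix recovery penalizes the singular values of the reconstruction; the basic proximal step for such penalties is singular value thresholding. A minimal NumPy sketch of that step (illustrative only, not the k-t SLR algorithm itself):

```python
import numpy as np

def svt(M, t):
    """Singular value thresholding: proximal operator of t*||.||_*
    (the nuclear norm) -- soft-threshold the singular values of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt
```

Iterating such a step inside a data-consistency loop is the standard route to low-rank matrix recovery.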
Total Variation Spatial Regularization for Sparse Hyperspectral Unmixing
 IEEE Transactions on Geoscience and Remote Sensing
, 2012
"... Spectral unmixing aims at estimating the fractional abundances of pure spectral signatures (also called endmembers) in each mixed pixel collected by a remote sensing hyperspectral imaging instrument. In recent work, the linear spectral unmixing problem has been approached in semisupervised fashion a ..."
Abstract

Cited by 19 (5 self)
 Add to MetaCart
Spectral unmixing aims at estimating the fractional abundances of pure spectral signatures (also called endmembers) in each mixed pixel collected by a remote sensing hyperspectral imaging instrument. In recent work, the linear spectral unmixing problem has been approached in semi-supervised fashion as a sparse regression problem, under the assumption that the observed image signatures can be expressed as linear combinations of pure spectra, known a priori and available in a library. Sparse unmixing, however, analyzes the hyperspectral data without incorporating spatial information. In this paper, we add total variation (TV) regularization to the classical sparse regression formulation, thus exploiting the spatial-contextual information present in the hyperspectral images, and develop a new algorithm called sparse unmixing via variable splitting augmented Lagrangian and TV. Our experimental results, conducted with both simulated and real hyperspectral data sets, indicate the potential of including spatial information (through the TV term) in sparse unmixing formulations for improved characterization of mixed pixels in hyperspectral imagery.
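The TV term measures the spatial roughness of each abundance map; in its anisotropic form it is simply the ℓ1-norm of the horizontal and vertical first differences. A minimal sketch of that quantity, assuming NumPy (`tv_aniso` is an illustrative name):

```python
import numpy as np

def tv_aniso(X):
    """Anisotropic total variation of a 2-D array: the l1-norm of the
    horizontal and vertical first differences."""
    return np.abs(np.diff(X, axis=0)).sum() + np.abs(np.diff(X, axis=1)).sum()
```

Adding this term to the sparse-regression objective is what couples neighboring pixels in the unmixing formulation.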
Hyperspectral unmixing based on mixtures of Dirichlet components
 IEEE Transactions on Geoscience and Remote Sensing
"... Abstract—This paper introduces a new unsupervised hyperspectral unmixing method conceived to linear but highly mixed hyperspectral data sets, in which the simplex of minimum volume, usually estimated by the purely geometrically based algorithms, is far way from the true simplex associated with the e ..."
Abstract

Cited by 19 (5 self)
 Add to MetaCart
(Show Context)
Abstract—This paper introduces a new unsupervised hyperspectral unmixing method conceived for linear but highly mixed hyperspectral data sets, in which the simplex of minimum volume, usually estimated by purely geometrically based algorithms, is far away from the true simplex associated with the endmembers. The proposed method, an extension of our previous studies, resorts to a statistical framework. The abundance fraction prior is a mixture of Dirichlet densities, thus automatically enforcing the constraints on the abundance fractions imposed by the acquisition process, namely non-negativity and sum-to-one. A cyclic minimization algorithm is developed in which: 1) the number of Dirichlet modes is inferred based on the minimum description length principle; 2) a generalized expectation maximization algorithm is derived to infer the model parameters; and 3) a sequence of augmented Lagrangian-based optimizations is used to compute the signatures of the endmembers. Experiments on simulated and real data are presented to show the effectiveness of the proposed algorithm in unmixing problems beyond the reach of the geometrically based state-of-the-art competitors. Index Terms—Augmented Lagrangian method of multipliers, blind hyperspectral unmixing, dependent components, generalized expectation maximization (GEM), minimum description length (MDL), mixtures of Dirichlet densities.
Proximal algorithms for multicomponent image recovery problems
 Journal of Mathematical Imaging and Vision
, 2010
"... Abstract In recent years, proximal splitting algorithms have been applied to various monocomponent signal and image recovery problems. In this paper, we address the case of multicomponent problems. We first provide closed form expressions for several important multicomponent proximity operators and ..."
Abstract

Cited by 16 (10 self)
 Add to MetaCart
(Show Context)
Abstract—In recent years, proximal splitting algorithms have been applied to various monocomponent signal and image recovery problems. In this paper, we address the case of multicomponent problems. We first provide closed-form expressions for several important multicomponent proximity operators and then derive extensions of existing proximal algorithms to the multicomponent setting. These results are applied to stereoscopic image recovery, multispectral image denoising, and image decomposition into texture and geometry components.