Results 1-10 of 100
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
, 2010
Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization
, 2007
Abstract
Cited by 553 (23 self)
The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard, because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum rank solution can be recovered by solving a convex optimization problem, namely the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently large. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this pre-existing concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to solving the norm minimization relaxations, and illustrate our results with numerical examples.
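In the diagonal case the "dictionary" between cardinality minimization and rank minimization mentioned in the abstract can be checked directly: the singular values of diag(d) are the |d_i|, so the nuclear norm reduces to the ℓ1 norm and the rank to the vector cardinality, and singular-value thresholding (the prox of the nuclear norm) reduces to entrywise soft-thresholding. A minimal pure-Python sketch; all helper names are ours, not the paper's:

```python
# Cardinality -> rank "dictionary", checked for diagonal matrices.

def soft(t, tau):
    """Soft-thresholding: prox of tau*|.| (the l1 shrinkage operator)."""
    if t > tau:
        return t - tau
    if t < -tau:
        return t + tau
    return 0.0

def nuclear_norm_diag(d):
    """Nuclear norm of diag(d): sum of singular values = sum of |d_i|."""
    return sum(abs(x) for x in d)

def rank_diag(d):
    """Rank of diag(d) = number of nonzero entries (vector cardinality)."""
    return sum(1 for x in d if x != 0)

d = [3.0, -0.4, 0.0, 1.2]
assert nuclear_norm_diag(d) == sum(abs(x) for x in d)   # l1 analogue
assert rank_diag(d) == 3                                # cardinality analogue

# Singular-value thresholding (prox of the nuclear norm) acts on a
# diagonal matrix exactly like entrywise soft-thresholding:
tau = 0.5
shrunk = [soft(x, tau) for x in d]
print(shrunk)  # [2.5, 0.0, 0.0, 0.7]
```

For non-diagonal matrices the same operation is applied to the singular values of an SVD; the diagonal case is just where the parallel to the vector setting is visible without any decomposition.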
Enhancing Sparsity by Reweighted ℓ1 Minimization
, 2007
Abstract
Cited by 141 (5 self)
It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) that this can be done by constrained ℓ1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms ℓ1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted ℓ1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations—not by reweighting the ℓ1 norm of the coefficient sequence as is common, but by reweighting the ℓ1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as compressed sensing.
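The reweighting loop described above can be sketched in the one setting where each weighted ℓ1 step has a closed form, namely identity measurements (denoising), where the weighted step is entrywise soft-thresholding. The weights w_i = 1/(|x_i| + eps) follow the abstract; the values of lam and eps here are illustrative choices, not the paper's:

```python
# Reweighted l1 in the simplest setting: y = x + noise (identity sensing),
# so each weighted subproblem
#   min_x 0.5*(x_i - y_i)^2 + lam*w_i*|x_i|
# is solved exactly by x_i = soft(y_i, lam*w_i).

def soft(t, tau):
    return max(abs(t) - tau, 0.0) * (1 if t >= 0 else -1)

def reweighted_l1(y, lam=0.2, eps=0.1, iters=4):
    w = [1.0] * len(y)                  # first pass = plain (unweighted) l1
    x = list(y)
    for _ in range(iters):
        x = [soft(yi, lam * wi) for yi, wi in zip(y, w)]
        # small coefficients get large weights (pushed to zero),
        # large coefficients get small weights (less bias):
        w = [1.0 / (abs(xi) + eps) for xi in x]
    return x

y = [1.0, 0.05, -0.9, 0.08, 0.0]        # two significant entries + small noise
x = reweighted_l1(y)
print(x)                                # small entries are exactly zeroed
```

After the first pass the small entries carry weight 1/eps = 10 while the large ones carry weight near 1, so the next ℓ1 step suppresses the former and barely penalizes the latter; this is the sparsity-enhancing effect the abstract describes.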
Compressed sensing with quantized measurements
, 2010
Abstract
Cited by 59 (0 self)
We consider the problem of estimating a sparse signal from a set of quantized, Gaussian noise corrupted measurements, where each measurement corresponds to an interval of values. We give two methods for (approximately) solving this problem, each based on minimizing a differentiable convex function plus an ℓ1 regularization term. Using a first-order method developed by Hale et al., we demonstrate the performance of the methods through numerical simulation. We find that, using these methods, compressed sensing can be carried out even when the quantization is very coarse, e.g., 1 or 2 bits per measurement.
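A minimal sketch of the kind of formulation described: a smooth convex penalty that vanishes when each measurement a_i·x lies inside its quantization interval, plus an ℓ1 term, minimized here by plain proximal gradient as a simple stand-in for the Hale et al. first-order method. The problem data, step size and lambda are illustrative, not from the paper:

```python
# Quantized measurements as intervals: each row a_i of A comes with
# [lo_i, hi_i]; the loss 0.5*dist(a_i.x, [lo_i, hi_i])^2 is convex and
# differentiable, and the l1 term promotes sparsity.

def soft(t, tau):
    return max(abs(t) - tau, 0.0) * (1 if t >= 0 else -1)

def interval_residual(z, lo, hi):
    """Signed distance of z to [lo, hi]; zero inside the interval."""
    if z < lo:
        return z - lo
    if z > hi:
        return z - hi
    return 0.0

def ista_quantized(A, lo, hi, lam=0.05, step=0.1, iters=500):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # gradient of 0.5 * sum_i dist(a_i.x, [lo_i, hi_i])^2
        r = [interval_residual(sum(a[j] * x[j] for j in range(n)), l, h)
             for a, l, h in zip(A, lo, hi)]
        g = [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(n)]
        # proximal gradient step (prox of step*lam*||.||_1 is soft-thresholding)
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
    return x

# Toy instance: only measurement 1 is quantized to "large positive".
A  = [[1.0, 0.2, -0.1], [0.3, 1.0, 0.2], [-0.2, 0.1, 1.0]]
lo = [0.5, -0.5, -0.5]
hi = [1.5,  0.5,  0.5]
x = ista_quantized(A, lo, hi)
print(x)   # mass concentrates on the first coordinate
```

At the fixed point the ℓ1 penalty keeps the estimate just inside (or slightly short of) the active interval boundary, which is the usual shrinkage bias of such relaxations.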
Lange K: Genome-wide Association Analysis by Lasso Penalized Logistic Regression
 Bioinformatics
Abstract
Cited by 56 (3 self)
Motivation: In ordinary regression, imposition of a lasso penalty makes continuous model selection straightforward. Lasso penalized regression is particularly advantageous when the number of predictors far exceeds the number of observations. Method: The present paper evaluates the performance of lasso penalized logistic regression in case-control disease gene mapping with a large number of SNP (single-nucleotide polymorphism) predictors. The strength of the lasso penalty can be tuned to select a predetermined number of the most relevant SNPs and other predictors. For a given value of the tuning constant, the penalized likelihood is quickly maximized by cyclic coordinate ascent. Once the most potent marginal predictors are identified, their two-way and higher-order interactions can also be examined by lasso penalized logistic regression. Results: This strategy is tested on both simulated and real data. Our findings on coeliac disease replicate the previous single SNP results and shed light on possible interactions among the SNPs. Availability: The software discussed is available in Mendel 9.0.
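The cyclic coordinate-wise strategy can be sketched on a toy case-control design. Below it is written as coordinate descent on the negative penalized log-likelihood, using the standard curvature bound for the logistic loss; the data and the tuning constant are illustrative, not the paper's:

```python
import math

# Lasso-penalized logistic regression by cyclic coordinate updates.
# Each coordinate takes a prox step using the curvature bound
# h_j = 0.25 * sum_i x_ij^2 (logistic second derivative <= 1/4).

def soft(t, tau):
    return max(abs(t) - tau, 0.0) * (1 if t >= 0 else -1)

def lasso_logistic(X, y, lam, sweeps=200):
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(sweeps):
        for j in range(p):
            prob = [1.0 / (1.0 + math.exp(-sum(X[i][k] * beta[k]
                                               for k in range(p))))
                    for i in range(n)]
            g = sum((prob[i] - y[i]) * X[i][j] for i in range(n))  # gradient
            h = 0.25 * sum(X[i][j] ** 2 for i in range(n))         # curvature bound
            beta[j] = soft(beta[j] - g / h, lam / h)               # prox update
    return beta

# Tiny toy set: column 0 is the "causal SNP", column 1 is noise.
X = [[2.0, 0.1], [1.5, -0.2], [-2.0, 0.3], [-1.0, 0.0]]
y = [1, 1, 0, 0]
beta = lasso_logistic(X, y, lam=0.5)
print(beta)   # the noise predictor is driven exactly to zero
```

The soft-threshold step is what makes the selection "continuous": raising lam shrinks weak predictors to exactly zero one by one, so the penalty strength can be tuned to retain a predetermined number of SNPs, as the abstract notes.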
Nonparametric seismic data recovery with curvelet frames
 Geophysical Journal International
, 2008
Abstract
Cited by 50 (15 self)
Seismic data recovery from data with missing traces on otherwise regular acquisition grids forms a crucial step in the seismic processing flow. For instance, unsuccessful recovery leads to imaging artifacts and to erroneous predictions for the multiples, adversely affecting the performance of multiple elimination. A nonparametric transform-based recovery method is presented that exploits the compression of seismic data volumes by multidimensional expansions with respect to recently developed curvelet frames. The frame elements of these transforms locally resemble wavefronts present in the data and this leads to a compressible signal representation. This compression enables us to formulate a new seismic data recovery algorithm through sparsity-promoting inversion. The concept of sparsity-promoting inversion is in itself not new to the geosciences. However, the recent insights from the field of ‘compressed sensing’ are new since they identify the conditions that determine successful recovery. These conditions are carefully examined by means of examples geared towards the seismic recovery problem for data with large percentages (> 70%) of traces missing. We show that as long as there is sufficient ‘randomness’ in the acquisition pattern, recovery to within an acceptable error is possible. We also show that our approach compares favorably ...
Sparsity- and continuity-promoting seismic image recovery with curvelet frames
, 2008
On effective methods for implicit piecewise smooth surface recovery
 SIAM J. Scient. Comput
Abstract
Cited by 36 (26 self)
This paper considers the problem of reconstructing a piecewise smooth model function from given, measured data. The data are compared to a field which is given as a possibly nonlinear function of the model. A regularization functional is added which incorporates the a priori knowledge that the model function is piecewise smooth and may contain jump discontinuities. Regularization operators related to total variation (TV) are therefore employed. Two popular methods are modified TV and Huber’s function. Both contain a parameter which must be selected. The Huber variant provides a more natural approach for selecting its parameter, and we use this to propose a scheme for both methods. Our selected parameter depends both on the resolution and on the model average roughness; thus, it is determined adaptively. Its variation from one iteration to the next yields additional information about the progress of the regularization process. The modified TV operator has a smoother generating function; nonetheless we obtain a Huber variant with comparable, and occasionally better, performance. For large problems (e.g., high resolution) the resulting reconstruction algorithms can be tediously slow. We propose two mechanisms to improve efficiency. The first is a multilevel continuation approach aimed mainly at obtaining a cheap yet good estimate for the regularization parameter and the solution. The second is a special multigrid preconditioner for the conjugate gradient algorithm used to solve the linearized systems encountered in the procedures for recovering the model function.
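The two TV-like generating functions compared in the abstract can be written down and checked side by side. The gamma scaling below is one common convention, not necessarily the paper's exact parameterization, and the test values are illustrative:

```python
import math

# Two smooth surrogates for |t|, each with smoothing parameter gamma:
#   modified TV : phi(t) = sqrt(t^2 + gamma^2)
#   Huber       : phi(t) = t^2/(2*gamma) if |t| <= gamma, else |t| - gamma/2

def phi_modtv(t, gamma):
    return math.sqrt(t * t + gamma * gamma)

def phi_huber(t, gamma):
    return t * t / (2 * gamma) if abs(t) <= gamma else abs(t) - gamma / 2

gamma = 0.1
# Both behave like |t| for jumps much larger than gamma ...
for t in (1.0, 5.0):
    assert abs(phi_huber(t, gamma) - abs(t)) <= gamma
    assert abs(phi_modtv(t, gamma) - abs(t)) <= gamma
# ... and, unlike |t| itself, are smooth (quadratic-like) near t = 0:
assert phi_huber(0.01, gamma) < 0.01
# Huber is continuous at the switch point |t| = gamma:
assert abs(phi_huber(gamma, gamma) - (gamma - gamma / 2)) < 1e-12
```

Both surrogates preserve jumps (linear growth for large gradients) while avoiding the non-differentiability of TV at zero; the choice of gamma, which the paper ties to resolution and average roughness, controls where the quadratic regime ends.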
Enhancing sparsity by reweighted ℓ1 minimization
 Journal of Fourier Analysis and Applications
, 2008
Abstract
Cited by 34 (1 self)
(Journal version of the 2007 entry listed above; abstract identical.)
On the robust estimation of power spectra, coherences, and transfer functions
 J. Geophys. Res
, 1987
Abstract
Cited by 26 (4 self)
Robust estimation of power spectra, coherences, and transfer functions is investigated in the context of geophysical data processing. The methods described are frequency-domain extensions of current techniques from the statistical literature and are applicable in cases where section-averaging methods would be used with data that are contaminated by local nonstationarity or isolated outliers. The paper begins with a review of robust estimation theory, emphasizing statistical principles and the maximum likelihood or M-estimators. These are combined with section-averaging spectral techniques to obtain robust estimates of power spectra, coherences, and transfer functions in an automatic, data-adaptive fashion. Because robust methods implicitly identify abnormal data, methods for monitoring the statistical behavior of the estimation process using quantile-quantile plots are also discussed. The results are illustrated using a variety of examples from electromagnetic geophysics.
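The M-estimation idea at the core of the paper can be sketched in its simplest form: a robust location estimate computed by iteratively reweighted averaging with Huber-type weights. In the paper the same machinery is applied per frequency to section averages of spectral estimates; here it is applied to a plain list of numbers, and the threshold k and MAD scale estimate are standard illustrative choices:

```python
# Huber-type robust mean via iteratively reweighted averaging.

def median(v):
    s = sorted(v)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def huber_mean(data, k=1.5, iters=25):
    mu = median(data)                                   # robust starting point
    scale = median([abs(x - mu) for x in data]) / 0.6745  # MAD scale estimate
    for _ in range(iters):
        # Huber weights: 1 inside k*scale, downweighted outside.
        w = [1.0 if abs(x - mu) <= k * scale else k * scale / abs(x - mu)
             for x in data]
        mu = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
    return mu

data = [9.8, 10.1, 10.0, 9.9, 10.2, 55.0]   # one gross outlier
print(sum(data) / len(data))   # ordinary mean, dragged toward the outlier
print(huber_mean(data))        # robust estimate stays near 10
```

The weights implicitly flag the abnormal datum (it receives a weight far below 1), which is exactly the behavior the paper monitors with quantile-quantile plots when the same scheme is applied section by section in the frequency domain.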