Results 1-10 of 82
Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization
, 2007
Abstract

Cited by 218 (15 self)
The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard, because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum rank solution can be recovered by solving a convex optimization problem, namely the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently large. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this pre-existing concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to solving the norm minimization relaxations, and illustrate our results with numerical examples.
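The abstract's closing mention of algorithmic approaches can be made concrete. One standard method for nuclear-norm relaxations (not necessarily the one the authors use) is proximal gradient descent, whose proximal operator is singular value soft-thresholding. A minimal NumPy sketch for the matrix completion special case of affine constraints; the regularization weight tau, step size, and iteration count are illustrative choices:

```python
import numpy as np

def svt(X, tau):
    """Singular value soft-thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete(M_obs, mask, tau=0.1, n_iter=500):
    """Proximal gradient for min 0.5*||mask*(X - M)||_F^2 + tau*||X||_*."""
    X = np.zeros_like(M_obs)
    for _ in range(n_iter):
        grad = mask * (X - M_obs)   # gradient of the data-fit term
        X = svt(X - grad, tau)      # step size 1 (the masking operator has Lipschitz constant 1)
    return X

rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(20), rng.standard_normal(20))  # rank-1 target
mask = rng.random(M.shape) < 0.7                                # observe ~70% of entries
X_hat = complete(M * mask, mask)
```

The same proximal step reappears inside more elaborate solvers; only the gradient of the data-fit term changes with the affine operator.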
Sparsest solutions of underdetermined linear systems via ℓq-minimization
Abstract

Cited by 77 (8 self)
We present a condition on the matrix of an underdetermined linear system which guarantees that the solution of the system with minimal ℓq-quasinorm is also the sparsest one. This generalizes, and slightly improves, a similar result for the ℓ1-norm. We then introduce a simple numerical scheme to compute solutions with minimal ℓq-quasinorm, and we study its convergence. Finally, we display the results of some experiments which indicate that the ℓq-method performs better than other available methods.
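The "simple numerical scheme" is not identified in this snippet; a common scheme for minimal-ℓq problems with 0 < q ≤ 1 is iteratively reweighted least squares (IRLS), sketched below under that assumption. The smoothing constant eps and the iteration count are illustrative:

```python
import numpy as np

def irls_lq(A, b, q=0.5, n_iter=100, eps=1e-6):
    """IRLS sketch for min ||x||_q^q subject to Ax = b.

    Each iteration solves a weighted minimum-norm problem
    x = W A^T (A W A^T)^{-1} b with W = diag((x_i^2 + eps)^{1 - q/2}),
    so small entries receive ever-smaller weights and are driven to zero.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # minimum-norm least-squares start
    for _ in range(n_iter):
        w = (x**2 + eps) ** (1.0 - q / 2.0)   # diagonal of W
        AW = A * w                            # A @ diag(w) via broadcasting
        x = w * (A.T @ np.linalg.solve(AW @ A.T, b))
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 10))
x0 = np.zeros(10); x0[4] = 2.0                # 1-sparse ground truth
x_hat = irls_lq(A, A @ x0)
```

Every iterate satisfies Ax = b exactly by construction; in practice eps is often decreased gradually rather than held fixed.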
Sparse Representation For Computer Vision and Pattern Recognition
, 2009
Abstract

Cited by 44 (1 self)
Techniques from sparse signal representation are beginning to see significant impact in computer vision, often on non-traditional applications where the goal is not just to obtain a compact high-fidelity representation of the observed signal, but also to extract semantic information. The choice of dictionary plays a key role in bridging this gap: unconventional dictionaries consisting of, or learned from, the training samples themselves provide the key to obtaining state-of-the-art results and to attaching semantic meaning to sparse signal representations. Understanding the good performance of such unconventional dictionaries in turn demands new algorithmic and analytical techniques. This review paper highlights a few representative examples of how the interaction between sparse signal representation and computer vision can enrich both fields, and raises a number of open questions for further study.
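As one concrete instance of a dictionary "consisting of the training samples themselves", a sparse-representation classifier can code a test sample over the training columns and assign the class whose columns best reconstruct it. The sketch below is illustrative only, using greedy orthogonal matching pursuit as a simple stand-in for an ℓ1 solver; all names and sizes are hypothetical:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedy k-sparse coding over dictionary D."""
    idx, r = [], y.copy()
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ r))))        # most correlated column
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef                           # update residual
    return idx, coef

def classify(D, labels, y, k=3):
    """Assign y to the class whose selected training columns fit it best."""
    idx, _ = omp(D, y, k)
    best, best_res = None, np.inf
    for c in set(labels):
        cols = [i for i in idx if labels[i] == c]
        if not cols:
            continue
        co, *_ = np.linalg.lstsq(D[:, cols], y, rcond=None)
        res = np.linalg.norm(y - D[:, cols] @ co)
        if res < best_res:
            best, best_res = c, res
    return best
```

The class-wise residual comparison is what attaches semantic meaning to the sparse code: the coefficients concentrate on the correct class's columns.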
Analysis of multi-stage convex relaxation for sparse regularization
 Journal of Machine Learning Research
Abstract

Cited by 21 (4 self)
We consider learning formulations with nonconvex objective functions that often occur in practical applications. There are two approaches to this problem:
• Heuristic methods such as gradient descent that only find a local minimum. A drawback of this approach is the lack of a theoretical guarantee showing that the local minimum gives a good solution.
• Convex relaxation such as L1-regularization that solves the problem under some conditions. However, it often leads to a suboptimal solution in reality.
This paper tries to remedy the above gap between theory and practice. In particular, we present a multi-stage convex relaxation scheme for solving problems with nonconvex objective functions. For learning formulations with sparse regularization, we analyze the behavior of a specific multi-stage relaxation scheme. Under appropriate conditions, we show that the local solution obtained by this procedure is superior to the global solution of the standard L1 convex relaxation for learning sparse targets.
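The specific multi-stage scheme analyzed in the paper is not reproduced in this snippet; a closely related and widely used instance of the idea is iteratively reweighted L1, where each stage solves a weighted convex problem and the weights for the next stage are derived from the previous solution. A NumPy sketch with an illustrative coordinate-descent inner solver (all parameter values are for illustration):

```python
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def weighted_lasso_cd(X, y, lam, w, n_sweeps=200):
    """Coordinate descent for 0.5*||y - X b||^2 + lam * sum_j w_j |b_j|."""
    p = X.shape[1]
    b = np.zeros(p)
    col_sq = (X**2).sum(axis=0)
    r = y.copy()                          # residual y - X b
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]           # remove coordinate j's contribution
            b[j] = soft(X[:, j] @ r, lam * w[j]) / col_sq[j]
            r -= X[:, j] * b[j]
    return b

def multistage_l1(X, y, lam, stages=4, eps=1e-3):
    """Multi-stage relaxation, reweighted-L1 flavor (illustrative)."""
    w = np.ones(X.shape[1])
    for _ in range(stages):
        b = weighted_lasso_cd(X, y, lam, w)
        w = 1.0 / (np.abs(b) + eps)       # later stages penalize large coefficients less
    return b

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
b0 = np.zeros(10); b0[0], b0[3] = 3.0, -2.0
b_hat = multistage_l1(X, X @ b0, lam=0.1)
```

The reweighting step is what reduces the bias of the plain L1 stage on large coefficients, which is the practical point of the multi-stage idea.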
Group sparse coding with a Laplacian scale mixture prior
 Zemel, R., and Culotta, A., editors, Advances in Neural Information Processing Systems
, 2010
Abstract

Cited by 14 (0 self)
We propose a class of sparse coding models that utilizes a Laplacian Scale Mixture (LSM) prior to model dependencies among coefficients. Each coefficient is modeled as a Laplacian distribution with a variable scale parameter, with a Gamma distribution prior over the scale parameter. We show that, due to the conjugacy of the Gamma prior, it is possible to derive efficient inference procedures for both the coefficients and the scale parameter. When the scale parameters of a group of coefficients are combined into a single variable, it is possible to describe the dependencies that occur due to common amplitude fluctuations among coefficients, which have been shown to constitute a large fraction of the redundancy in natural images [1]. We show that, as a consequence of this group sparse coding, the resulting inference of the coefficients follows a divisive normalization rule, and that this may be efficiently implemented in a network architecture similar to that which has been proposed to occur in primary visual cortex. We also demonstrate improvements in image coding and compressive sensing recovery using the LSM model.
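The Gamma-Laplacian conjugacy the abstract relies on is easy to state: with likelihood ∏_i (λ/2) e^{-λ|x_i|} for a group sharing one scale λ, and a Gamma(α, rate β) prior on λ, the posterior is Gamma(α + n, β + Σ_i |x_i|). A small sketch of this shared-scale update (parameter names are illustrative):

```python
import numpy as np

def posterior_scale(x_group, alpha, beta):
    """Posterior over a shared Laplacian scale lam for a group of coefficients.

    Likelihood: prod_i (lam/2) * exp(-lam * |x_i|); prior: Gamma(alpha, rate=beta).
    Conjugacy gives the posterior Gamma(alpha + n, beta + sum_i |x_i|).
    Returns the posterior shape, rate, and mean of lam.
    """
    n = len(x_group)
    a_post = alpha + n
    b_post = beta + np.abs(x_group).sum()
    return a_post, b_post, a_post / b_post
```

Larger group amplitudes yield a smaller posterior mean for λ, hence weaker shrinkage on every member of the group; dividing the coefficient update by this amplitude-dependent quantity is where the divisive-normalization behavior described above arises.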
Reweighted nuclear norm minimization with application to system identification
 Proc. American Control Conference
, 2010
Abstract

Cited by 13 (4 self)
The matrix rank minimization problem consists of finding a matrix of minimum rank that satisfies given convex constraints. It is NP-hard in general and has applications in control, system identification, and machine learning. Reweighted trace minimization has been considered as an iterative heuristic for this problem. In this paper, we analyze the convergence of this iterative heuristic, showing that the difference between successive iterates tends to zero. Then, after reformulating the heuristic as reweighted nuclear norm minimization, we propose an efficient gradient-based implementation that takes advantage of the new formulation and opens the way to solving large-scale problems. We apply this algorithm to the problem of low-order system identification from input-output data. Numerical examples demonstrate that the reweighted nuclear norm minimization makes model order selection easier and results in lower order models compared to nuclear norm minimization without weights.
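The full reweighted heuristic needs a nuclear-norm solver at every iteration, but the weighting principle can be sketched in isolation on the denoising special case: each stage shrinks singular values by weights inversely proportional to the previous estimates, so large singular values are penalized less. This is an illustrative sketch, not the paper's algorithm; tau, delta, and the stage count are arbitrary:

```python
import numpy as np

def reweighted_svt(M, tau=1.0, delta=1e-2, n_stages=5):
    """Illustrative reweighted singular-value shrinkage (denoising case).

    Each stage shrinks singular values by tau * w_i with
    w_i = 1 / (sigma_i_prev + delta), mimicking the reweighted
    trace/nuclear-norm idea: strong directions are penalized lightly,
    weak ones are driven to zero.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_est = s.copy()
    for _ in range(n_stages):
        w = 1.0 / (s_est + delta)
        s_est = np.maximum(s - tau * w, 0.0)
    return U @ np.diag(s_est) @ Vt, s_est
```

The adaptive weights are what make order selection crisper than unweighted shrinkage, which biases all singular values down by the same amount.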
Sparse LMS for system identification
 in Proc. IEEE ICASSP
, 2009
Abstract

Cited by 13 (2 self)
We propose a new approach to adaptive system identification when the system model is sparse. The approach applies the ℓ1 relaxation, common in compressive sensing, to improve the performance of LMS-type adaptive methods. This results in two new algorithms, the Zero-Attracting LMS (ZA-LMS) and the Reweighted Zero-Attracting LMS (RZA-LMS). The ZA-LMS is derived by incorporating an ℓ1-norm penalty on the coefficients into the quadratic LMS cost function, which generates a zero attractor in the LMS iteration. The zero attractor promotes sparsity in taps during the filtering process, and therefore accelerates convergence when identifying sparse systems. We prove that the ZA-LMS can achieve lower mean square error than the standard LMS. To further improve the filtering performance, the RZA-LMS is developed using a reweighted zero attractor. The performance of the RZA-LMS is numerically superior to that of the ZA-LMS. Experiments demonstrate the advantages of the proposed filters in both convergence rate and steady-state behavior under sparsity assumptions on the true coefficient vector. The RZA-LMS is also shown to be robust when the number of nonzero taps increases. Index Terms — LMS, compressive sensing, sparse models, zero-attracting, ℓ1-norm relaxation
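The ZA-LMS update described above is compact enough to state directly: the usual LMS correction plus a sign-based attractor term that pulls small taps toward zero. A sketch in which the step size mu, attractor strength rho, filter length, and data are all illustrative:

```python
import numpy as np

def za_lms(x, d, n_taps, mu=0.05, rho=5e-4):
    """Zero-attracting LMS sketch: w <- w + mu*e*u - rho*sign(w).

    The sign term is the zero attractor derived from an l1 penalty on
    the taps; it biases small taps toward exactly zero.
    """
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # regressor: newest sample first
        e = d[n] - w @ u                     # a priori error
        w = w + mu * e * u - rho * np.sign(w)
    return w

rng = np.random.default_rng(1)
h = np.zeros(16); h[3] = 1.0                 # sparse unknown system (1 active tap)
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)]               # noiseless desired signal
w_hat = za_lms(x, d, 16)
```

The reweighted variant (RZA-LMS) replaces the fixed attractor with one scaled by 1/(|w| + ε), so large taps are attracted less; that change is one line in the update.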
SparseNet: Coordinate Descent with Nonconvex Penalties
, 2009
Abstract

Cited by 12 (0 self)
We address the problem of sparse selection in linear models. A number of nonconvex penalties have been proposed for this purpose, along with a variety of convex-relaxation algorithms for finding good solutions. In this paper we pursue the coordinate-descent approach for optimization, and study its convergence properties. We characterize the properties of penalties suitable for this approach, study their corresponding threshold functions, and describe a df-standardizing reparametrization that assists our pathwise algorithm. The MC+ penalty (Zhang 2010) is ideally suited to this task, and we use it to demonstrate the performance of our algorithm.
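The threshold function of the MC+ penalty has a simple closed form for γ > 1: firm thresholding, which interpolates between soft thresholding (γ → ∞, the lasso end of the family) and hard thresholding (γ → 1+). A sketch of the univariate coordinate update it induces:

```python
import numpy as np

def mcp_threshold(z, lam, gamma):
    """Minimizer of 0.5*(z - b)^2 + MC+ penalty, for gamma > 1.

    Firm thresholding: soft-threshold and rescale by gamma/(gamma-1)
    for |z| <= gamma*lam; leave z unshrunk above gamma*lam.
    """
    z = np.asarray(z, dtype=float)
    soft = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)
    inside = np.abs(z) <= gamma * lam
    return np.where(inside, soft * gamma / (gamma - 1.0), z)
```

Inside a coordinate-descent sweep with standardized columns, this operator simply replaces the lasso's soft threshold, which is what makes the nonconvex family tractable with the same algorithmic machinery.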
Nonnegative mixed-norm preconditioning for microscopy image segmentation
 Proc. Int. Conf. Information Processing in Med. Imaging
, 2009
Abstract

Cited by 10 (7 self)
Image segmentation in microscopy, especially in interference-based optical microscopy modalities, is notoriously challenging due to inherent optical artifacts. We propose a general algebraic framework for preconditioning microscopy images. It transforms an image that is unsuitable for direct analysis into an image that can be effortlessly segmented using global thresholding. We formulate preconditioning as the minimization of nonnegative-constrained convex objective functions with smoothness- and sparseness-promoting regularization. We propose efficient numerical algorithms for optimizing the objective functions. The algorithms were extensively validated on simulated differential interference contrast (DIC) microscopy images and challenging real DIC images of cell populations. With preconditioning, we achieved unprecedented segmentation accuracy of 97.9% for CNS stem cells, and 93.4% for human red blood cells in challenging images.
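The family of objectives described (nonnegativity constraint, smoothness and sparseness regularizers) can be sketched on a 1-D signal with projected gradient descent; since f ≥ 0, the ℓ1 sparseness term reduces to a linear term α·sum(f). All operators and parameter values below are illustrative, not the paper's:

```python
import numpy as np

def precondition(H, g, alpha=0.1, beta=1.0, n_iter=2000):
    """Projected gradient sketch for
       min_{f >= 0} 0.5*||H f - g||^2 + alpha*sum(f) + beta*||D f||^2
    where D is the first-difference operator (smoothness) and, because
    f >= 0, sum(f) equals the l1 sparseness term.
    """
    n = H.shape[1]
    D = (np.eye(n) - np.eye(n, k=1))[:-1]     # first differences
    Q = H.T @ H + 2.0 * beta * (D.T @ D)      # Hessian of the smooth part
    step = 1.0 / np.linalg.norm(Q, 2)         # 1/L step size
    f = np.zeros(n)
    for _ in range(n_iter):
        grad = Q @ f - H.T @ g + alpha
        f = np.maximum(f - step * grad, 0.0)  # project onto f >= 0
    return f
```

On a noisy blob-on-background signal this drives the background to exactly zero while smoothing the foreground, which is what makes the result trivially segmentable by global thresholding.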