Results 1–10 of 245
Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Review, 2010
Abstract

Cited by 562 (20 self)
Abstract The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard, because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum rank solution can be recovered by solving a convex optimization problem, namely the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is Ω(r(m + n) log mn), where m, n are the dimensions of the matrix, and r is its rank. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this pre-existing concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to solving the norm minimization relaxations, and illustrate our results with numerical examples.
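As a toy illustration of the relaxation this abstract describes, the numpy sketch below recovers a low-rank matrix from a subset of its entries by alternating constraint projection with singular-value soft-thresholding (the proximal operator of the nuclear norm). The dimensions, threshold `tau`, and iteration count are illustrative choices, not the paper's algorithm.

```python
import numpy as np

# Hypothetical toy problem: entrywise sampling of a rank-2 matrix.
rng = np.random.default_rng(0)
m, n, r = 20, 20, 2
X_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.6            # observed entries

X = np.zeros((m, n))
tau = 0.1                                  # singular-value shrinkage level
for _ in range(300):
    X[mask] = X_true[mask]                 # re-impose the affine constraints
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U * np.maximum(s - tau, 0.0)) @ Vt  # soft-threshold singular values

rel_err = np.linalg.norm(X - X_true) / np.linalg.norm(X_true)
```

On this toy instance the relative error drops well below one, illustrating that nuclear-norm shrinkage promotes low rank the way ℓ1 shrinkage promotes sparsity.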
Sparsest solutions of underdetermined linear systems via ℓq-minimization
Abstract

Cited by 192 (11 self)
We present a condition on the matrix of an underdetermined linear system which guarantees that the solution of the system with minimal ℓq quasinorm is also the sparsest one. This generalizes, and slightly improves, a similar result for the ℓ1 norm. We then introduce a simple numerical scheme to compute solutions with minimal ℓq quasinorm, and we study its convergence. Finally, we display the results of some experiments which indicate that the ℓq method performs better than other available methods.
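The abstract does not spell out its numerical scheme; a common scheme for ℓq minimization is iteratively reweighted least squares (IRLS), sketched below. The smoothing parameter `eps`, its decay schedule, and the problem sizes are purely illustrative assumptions.

```python
import numpy as np

# IRLS sketch for min ||x||_q^q subject to Ax = b, 0 < q <= 1:
# repeatedly solve a weighted least-squares problem whose weights
# come from the current iterate, with a smoothing term eps.
rng = np.random.default_rng(1)
n, m, k, q = 40, 16, 3, 0.5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
b = A @ x_true

x, eps = np.ones(n), 1.0
for _ in range(100):
    # inverse weights (x_i^2 + eps)^(1 - q/2) give the closed-form
    # minimizer of the weighted quadratic under Ax = b
    D = np.diag((x**2 + eps) ** (1 - q / 2))
    x = D @ A.T @ np.linalg.solve(A @ D @ A.T, b)
    eps = max(eps * 0.9, 1e-9)             # anneal the smoothing
```

With k = 3 nonzeros and m = 16 measurements, the iterate typically homes in on the sparsest solution.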
Sparse Representation For Computer Vision and Pattern Recognition
, 2009
Abstract

Cited by 146 (9 self)
Techniques from sparse signal representation are beginning to see significant impact in computer vision, often on non-traditional applications where the goal is not just to obtain a compact high-fidelity representation of the observed signal, but also to extract semantic information. The choice of dictionary plays a key role in bridging this gap: unconventional dictionaries consisting of, or learned from, the training samples themselves provide the key to obtaining state-of-the-art results and to attaching semantic meaning to sparse signal representations. Understanding the good performance of such unconventional dictionaries in turn demands new algorithmic and analytical techniques. This review paper highlights a few representative examples of how the interaction between sparse signal representation and computer vision can enrich both fields, and raises a number of open questions for further study.
Phase Retrieval via Matrix Completion
, 2011
Abstract

Cited by 75 (11 self)
This paper develops a novel framework for phase retrieval, a problem which arises in X-ray crystallography, diffraction imaging, astronomical imaging and many other applications. Our approach, called PhaseLift, combines multiple structured illuminations together with ideas from convex programming to recover the phase from intensity measurements, typically from the modulus of the diffracted wave. We demonstrate empirically that any complex-valued object can be recovered from the knowledge of the magnitude of just a few diffraction patterns by solving a simple convex optimization problem inspired by the recent literature on matrix completion. More importantly, we also demonstrate that our noise-aware algorithms are stable in the sense that the reconstruction degrades gracefully as the signal-to-noise ratio decreases. Finally, we introduce some theory showing that one can design very simple structured illumination patterns such that three diffraction patterns uniquely determine the phase of the object we wish to recover.
SparseNet: Coordinate Descent with Non-Convex Penalties
, 2009
Abstract

Cited by 71 (0 self)
We address the problem of sparse selection in linear models. A number of non-convex penalties have been proposed for this purpose, along with a variety of convex-relaxation algorithms for finding good solutions. In this paper we pursue the coordinate-descent approach for optimization, and study its convergence properties. We characterize the properties of penalties suitable for this approach, study their corresponding threshold functions, and describe a df-standardizing reparametrization that assists our pathwise algorithm. The MC+ penalty (Zhang 2010) is ideally suited to this task, and we use it to demonstrate the performance of our algorithm.
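For concreteness, the threshold function of the MC+ penalty for a standardized coordinate has a simple closed form, interpolating between soft thresholding (gamma → ∞) and hard thresholding (gamma → 1+); the sketch below writes it down, with names chosen for illustration.

```python
import numpy as np

def mcplus_threshold(z, lam, gamma):
    """Piecewise MC+ threshold for a standardized coordinate, gamma > 1:
    zero below lam, a scaled soft-threshold up to lam*gamma, identity above."""
    z = np.asarray(z, dtype=float)
    return np.where(
        np.abs(z) <= lam,
        0.0,
        np.where(
            np.abs(z) <= lam * gamma,
            np.sign(z) * (np.abs(z) - lam) / (1 - 1 / gamma),
            z,
        ),
    )
```

For example, with lam = 1 and gamma = 3: an input of 0.5 maps to 0, an input of 2 maps to 1.5, and an input of 5 passes through unchanged, matching the identity regime beyond lam*gamma.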
Analysis of multistage convex relaxation for sparse regularization
 Journal of Machine Learning Research
Abstract

Cited by 62 (7 self)
We consider learning formulations with non-convex objective functions that often occur in practical applications. There are two approaches to this problem:
• Heuristic methods such as gradient descent that only find a local minimum. A drawback of this approach is the lack of a theoretical guarantee showing that the local minimum gives a good solution.
• Convex relaxation such as L1-regularization that solves the problem under some conditions. However, it often leads to a suboptimal solution in reality.
This paper tries to remedy the above gap between theory and practice. In particular, we present a multistage convex relaxation scheme for solving problems with non-convex objective functions. For learning formulations with sparse regularization, we analyze the behavior of a specific multistage relaxation scheme. Under appropriate conditions, we show that the local solution obtained by this procedure is superior to the global solution of the standard L1 convex relaxation for learning sparse targets.
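A minimal numpy sketch of the multistage idea, assuming a simple two-stage scheme: solve the L1 (Lasso) problem with ISTA, then relax the penalty on coordinates found active and re-solve. The step size, penalty level, and support threshold below are illustrative, not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 50, 25
x_true = np.zeros(n)
x_true[:3] = [3.0, -2.0, 1.5]                  # sparse target
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true

def ista(A, b, lam_vec, iters=2000):
    """Proximal gradient (ISTA) for 0.5||Ax-b||^2 + sum_i lam_i |x_i|."""
    x = np.zeros(A.shape[1])
    t = 1.0 / np.linalg.norm(A, 2) ** 2        # step = 1 / Lipschitz const
    for _ in range(iters):
        g = x - t * A.T @ (A @ x - b)
        x = np.sign(g) * np.maximum(np.abs(g) - t * lam_vec, 0.0)
    return x

lam = 0.05 * np.ones(n)
x1 = ista(A, b, lam)                           # stage 1: plain L1 relaxation
lam2 = np.where(np.abs(x1) > 0.1, 0.0, lam[0]) # stage 2: unpenalize the support
x2 = ista(A, b, lam2)

err1 = np.linalg.norm(x1 - x_true)
err2 = np.linalg.norm(x2 - x_true)
```

The second stage removes the L1 shrinkage bias on the estimated support, so `err2` is smaller than `err1`, mirroring the claim that the multistage solution improves on the one-shot L1 solution.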
Sparse LMS for system identification
 in Proc. IEEE ICASSP
, 2009
Abstract

Cited by 49 (6 self)
We propose a new approach to adaptive system identification when the system model is sparse. The approach applies the ℓ1 relaxation, common in compressive sensing, to improve the performance of LMS-type adaptive methods. This results in two new algorithms, the Zero-Attracting LMS (ZA-LMS) and the Reweighted Zero-Attracting LMS (RZA-LMS). The ZA-LMS is derived by incorporating an ℓ1-norm penalty on the coefficients into the quadratic LMS cost function, which generates a zero attractor in the LMS iteration. The zero attractor promotes sparsity in the taps during the filtering process, and therefore accelerates convergence when identifying sparse systems. We prove that the ZA-LMS can achieve lower mean square error than the standard LMS. To further improve the filtering performance, the RZA-LMS is developed using a reweighted zero attractor. The performance of the RZA-LMS is numerically superior to that of the ZA-LMS. Experiments demonstrate the advantages of the proposed filters in both convergence rate and steady-state behavior under sparsity assumptions on the true coefficient vector. The RZA-LMS is also shown to be robust when the number of non-zero taps increases. Index Terms — LMS, compressive sensing, sparse models, zero-attracting, ℓ1 norm relaxation
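The ZA-LMS update described above is just the LMS recursion plus a sign-based shrinkage term coming from the ℓ1 penalty. The sketch below compares it with plain LMS on a toy sparse system; the step size `mu`, attractor strength `rho`, and signal model are illustrative tuning assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, T = 16, 5000
w_true = np.zeros(n)
w_true[[2, 9]] = [1.0, -0.5]                   # sparse unknown system

w_lms = np.zeros(n)
w_za = np.zeros(n)
mu, rho = 0.01, 1e-4
for _ in range(T):
    x = rng.standard_normal(n)                 # input regressor
    d = float(w_true @ x) + 0.01 * rng.standard_normal()  # noisy output
    # standard LMS
    e = d - w_lms @ x
    w_lms += mu * e * x
    # zero-attracting LMS: same gradient step plus an l1 shrinkage term
    e = d - w_za @ x
    w_za += mu * e * x - rho * np.sign(w_za)   # the "zero attractor"
```

Both filters converge on this easy instance; the extra `-rho * sign(w)` term nudges inactive taps toward zero, which is what speeds convergence on sparse systems.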
Various thresholds for ℓ1-optimization in compressed sensing
, 2009
Abstract

Cited by 33 (17 self)
Recently, [14, 28] theoretically analyzed the success of a polynomial-time ℓ1-optimization algorithm in solving an underdetermined system of linear equations. In a large dimensional and statistical context, [14, 28] proved that if the number of equations (measurements in the compressed sensing terminology) in the system is proportional to the length of the unknown vector then there is a sparsity (number of non-zero elements of the unknown vector) also proportional to the length of the unknown vector such that ℓ1-optimization succeeds in solving the system. In this paper, we provide an alternative performance analysis of ℓ1-optimization and obtain the proportionality constants that in certain cases match or improve on the best currently known ones from [28, 29].
On Security Indices for State Estimators in Power Networks
Abstract

Cited by 30 (7 self)
In this paper, we study stealthy false-data attacks against state estimators in power networks. The focus is on applications in SCADA (Supervisory Control and Data Acquisition) systems where measurement data is corrupted by a malicious attacker. We introduce two security indices for the state estimators. The indices quantify the least effort needed to achieve attack goals while avoiding bad-data alarms in the power network control center (stealthy attacks). The indices depend on the physical topology of the power network and the available measurements, and can help the system operator to identify sparse data manipulation patterns. This information can be used to strengthen security, for example by allocating encryption devices. The analysis is also complemented with a convex optimization framework that can be used to evaluate more complex attacks taking model deviations and multiple attack goals into account. The security indices are finally computed in an example. It is seen that a large measurement redundancy forces the attacker to use large magnitudes in the data manipulation pattern, but that the pattern can still be relatively sparse.