Results 11 – 20 of 36
Rank minimization and applications in system theory
 In American Control Conference
, 2004
Abstract

Cited by 28 (0 self)
In this tutorial paper, we consider the problem of minimizing the rank of a matrix over a convex set. The Rank Minimization Problem (RMP) arises in diverse areas such as control, system identification, statistics, and signal processing, and is known to be computationally NP-hard. We give an overview of the problem, its interpretations, applications, and solution methods. In particular, we focus on how convex optimization can be used to develop heuristic methods for this problem.
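The convex heuristic this abstract surveys replaces rank(X), which counts nonzero singular values, with the nuclear norm, which sums them. A minimal NumPy sketch of that surrogate (an illustration, not the paper's code):

```python
import numpy as np

def nuclear_norm(X):
    """Nuclear norm: the sum of singular values, a convex surrogate for rank."""
    return np.linalg.svd(X, compute_uv=False).sum()

# rank counts the nonzero singular values; the nuclear norm sums their magnitudes
X = np.diag([3.0, 2.0, 0.0])
print(np.linalg.matrix_rank(X))  # 2
print(nuclear_norm(X))           # 5.0
```

Because the nuclear norm is convex, it can replace the non-convex rank objective inside standard convex solvers.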
Null Space Conditions and Thresholds for Rank Minimization
, 2009
Abstract

Cited by 19 (1 self)
Minimizing the rank of a matrix subject to constraints is a challenging problem that arises in many applications in machine learning, control theory, and discrete geometry. This class of optimization problems, known as rank minimization, is NP-hard, and for most practical problems there are no efficient algorithms that yield exact solutions. A popular heuristic replaces the rank function with the nuclear norm (equal to the sum of the singular values) of the decision variable, and has been shown to provide the optimal low-rank solution in a variety of scenarios. In this paper, we assess the practical performance of this heuristic for finding the minimum-rank matrix subject to linear constraints. Our starting point is the characterization of a necessary and sufficient condition that determines when this heuristic finds the minimum-rank solution. We then obtain conditions, as a function of the matrix dimensions, the rank, and the number of constraints, such that our conditions for success are satisfied for almost all linear constraint sets as the matrix dimensions tend to infinity. Finally, we provide empirical evidence that these probabilistic bounds provide accurate predictions of the heuristic's performance in non-asymptotic scenarios.
Necessary and Sufficient Conditions for Success of the Nuclear Norm Heuristic for Rank Minimization
, 2008
Abstract

Cited by 16 (2 self)
Minimizing the rank of a matrix subject to constraints is a challenging problem that arises in many applications in control theory, machine learning, and discrete geometry. This class of optimization problems, known as rank minimization, is NP-hard, and for most practical problems there are no efficient algorithms that yield exact solutions. A popular heuristic algorithm replaces the rank function with the nuclear norm (equal to the sum of the singular values) of the decision variable. In this paper, we provide a necessary and sufficient condition that quantifies when this heuristic successfully finds the minimum-rank solution of a linear constraint set. We additionally provide a probability distribution over instances of the affine rank minimization problem such that instances sampled from this distribution satisfy our conditions for success with overwhelming probability, provided the number of constraints is appropriately large. Finally, we give empirical evidence that these probabilistic bounds provide accurate predictions of the heuristic's performance in non-asymptotic scenarios.
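In iterative solvers built around this heuristic, the workhorse primitive is the proximal operator of the nuclear norm, singular value thresholding, which shrinks every singular value by a threshold τ. A hedged NumPy sketch (illustrative only, not the authors' code):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the prox of tau * nuclear norm.
    Shrinks every singular value by tau and drops those that fall below it."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Thresholding a diagonal matrix shrinks its diagonal toward zero,
# zeroing out the smallest singular value entirely.
X = np.diag([3.0, 2.0, 0.5])
print(np.round(svt(X, 1.0), 6))  # diag(2, 1, 0)
```

Each thresholding step lowers the rank of the iterate whenever small singular values are present, which is exactly the behavior the nuclear norm heuristic exploits.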
Sparse and Low-Rank Matrix Decompositions
Abstract

Cited by 16 (1 self)
Suppose we are given a matrix that is formed by adding an unknown sparse matrix to an unknown low-rank matrix. Our goal is to decompose the given matrix into its sparse and low-rank components. Such a problem arises in a number of applications in model and system identification, but obtaining an exact solution is NP-hard in general. In this paper we consider a convex optimization formulation for splitting the specified matrix into its components; in fact, our approach reduces to solving a semidefinite program. We provide sufficient conditions that guarantee exact recovery of the components by solving the semidefinite program. We also show that when the sparse and low-rank matrices are drawn from certain natural random ensembles, these sufficient conditions are satisfied with high probability. We conclude with simulation results on synthetic matrix decomposition problems.
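Convex formulations of this decomposition are commonly written as minimizing ||L||_* + λ||S||_1 subject to L + S = M (principal component pursuit). A small ADMM-style sketch on toy data, with an assumed fixed penalty μ and an assumed λ = 1/√n (a sketch of the general technique, not the authors' solver):

```python
import numpy as np

def soft(X, tau):
    """Entrywise soft thresholding: the prox of tau * l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: the prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def pcp(M, lam, mu=1.0, iters=500):
    """ADMM for: minimize ||L||_* + lam * ||S||_1  subject to  L + S = M."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)  # scaled dual variable for the constraint L + S = M
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = soft(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
    return L, S

# Toy instance: a rank-1 matrix plus two large sparse corruptions.
L0 = np.outer(np.ones(5), [1.0, 2.0, 3.0, 4.0, 5.0])
S0 = np.zeros((5, 5)); S0[0, 0] = 10.0; S0[3, 2] = -8.0
M = L0 + S0
L, S = pcp(M, lam=1.0 / np.sqrt(5))
print(np.linalg.norm(M - L - S))  # constraint residual shrinks toward zero
```

The dual updates drive the split L + S back toward M, while the two prox steps push L toward low rank and S toward sparsity.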
Uniqueness of low-rank matrix completion by rigidity theory
, 2009
Abstract

Cited by 7 (1 self)
The problem of completing a low-rank matrix from a subset of its entries is often encountered in the analysis of incomplete data sets exhibiting an underlying factor model, with applications in collaborative filtering, computer vision, and control. Most recent work has focused on constructing efficient algorithms for exact or approximate recovery of the missing matrix entries, and on proving lower bounds for the number of known entries that guarantee a successful recovery with high probability. A related problem, from both the mathematical and the algorithmic point of view, is the distance geometry problem of realizing points in a Euclidean space from a given subset of their pairwise distances. Rigidity theory answers basic questions regarding the uniqueness of the realization satisfying a given partial set of distances. We observe that basic ideas and tools of rigidity theory can be adapted to determine the uniqueness of low-rank matrix completion, where inner products play the role that distances play in rigidity theory. This observation leads to an efficient randomized algorithm for testing both local and global unique completion. Crucial to our analysis is a new matrix, which we call the completion matrix, that serves as the analogue of the rigidity matrix. Key words: low-rank matrices, missing values, rigidity theory, rigid graphs, iterative methods.
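As a toy illustration of the uniqueness question (not the paper's rigidity-based algorithm): for a rank-1 matrix with no zero entries, observing one full row and one full column already determines every missing entry, because all 2×2 minors of a rank-1 matrix vanish:

```python
import numpy as np

# A rank-1 matrix M0 = u v^T; observe only row 0 and column 0.
M0 = np.outer([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])

M = np.full_like(M0, np.nan)
M[0, :] = M0[0, :]   # observed row
M[:, 0] = M0[:, 0]   # observed column

# Rank 1 forces M[i,j] * M[0,0] == M[i,0] * M[0,j], so each
# missing entry is determined by the observed row and column.
for i in range(1, M.shape[0]):
    for j in range(1, M.shape[1]):
        M[i, j] = M[i, 0] * M[0, j] / M[0, 0]

print(np.allclose(M, M0))  # True
```

Whether a given observation pattern determines the completion uniquely is exactly the kind of combinatorial question the paper's rigidity-theoretic machinery addresses for general rank.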
On the Semi-Definite Programming Solution of the Least Order Dynamic Output Feedback Synthesis and Related Problems
 In Proc. American Control Conference
, 1998
Abstract

Cited by 6 (0 self)
We show that a semi-definite programming approach can be adopted to determine the least-order dynamic output feedback which stabilizes a given linear time-invariant plant. The problem addressed includes, as a special case, the famous static output feedback problem. Keywords: Least Order Dynamic Output Feedback; Static Output Feedback; Semi-Definite Programming; Polynomial-time Algorithms.

1 Introduction. Consider the linear time-invariant (LTI) plant Σ,

    Σ : ẋ = Ax + Bu,   (1.1)
        y = Cx,        (1.2)

where A ∈ R^(n×n), B ∈ R^(n×m), and C ∈ R^(p×n). For k ≤ n, represent the class of k-th order stabilizing¹ linear controllers for Σ by Σ_c^k, which have the general form

    Σ_c^k : ż = A_K z + B_K y,   (1.3)

¹ All references to stability are in the sense of Lyapunov: the origin is the stable equilibrium point of the dynamical system ẋ = Ax if and o...
Array imaging using intensity-only measurements
, 2010
Abstract

Cited by 5 (0 self)
We introduce a new approach for narrow-band array imaging of localized scatterers from intensity-only measurements, by considering the possibility of reconstructing the positions and reflectivities of the scatterers exactly from only partial knowledge of the array data, since we assume that phase information is not available. We reformulate this intensity-only imaging problem as a non-convex optimization problem and show that exact recovery can be obtained by minimizing the rank of a positive semidefinite matrix associated with the unknown reflectivities. Since this optimization problem is NP-hard and computationally intractable, we replace the rank of the matrix by its nuclear norm, the sum of its singular values, which yields a convex programming problem that can be solved in polynomial time. We show that under certain conditions on the array imaging setup and on the scatterer configuration, the minimum nuclear norm solution coincides with the minimum rank solution. Numerical experiments explore the robustness of this approach, which recovers sparse reflectivity vectors exactly as solutions of a convex optimization problem.
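The reformulation this abstract describes lifts the unknown reflectivity vector x to the rank-1 positive semidefinite matrix X = x x*, so that each phaseless intensity measurement |⟨a, x⟩|² becomes the linear function a* X a of X. A quick NumPy check of that identity (variable names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3) + 1j * rng.normal(size=3)   # unknown complex reflectivities
a = rng.normal(size=3) + 1j * rng.normal(size=3)   # one array sensing vector

X = np.outer(x, x.conj())                # rank-1 PSD lift of x
intensity = abs(np.vdot(a, x)) ** 2      # phaseless measurement |<a, x>|^2
lifted = (a.conj() @ X @ a).real         # the same quantity, now linear in X

print(np.isclose(intensity, lifted))  # True
```

The lifting turns a quadratic, phase-free measurement of x into a linear constraint on X, at the price of the rank-1 condition that the rank minimization (and its nuclear norm relaxation) must then enforce.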
Channel-Optimized Quantum Error Correction
, 2010
Abstract

Cited by 3 (0 self)
We develop a theory for finding quantum error correction (QEC) procedures which are optimized for given noise channels. Our theory accounts for uncertainties in the noise channel, against which our QEC procedures are robust. We demonstrate, via numerical examples, that our optimized QEC procedures always achieve a higher channel fidelity than the standard error correction method, which is agnostic about the specifics of the channel. This demonstrates the importance of channel characterization before QEC procedures are applied. Our main novel finding is that in the setting of a known noise channel the recovery ancillas are redundant for optimized quantum error correction. We show this using a general rank minimization heuristic and supporting numerical calculations. Therefore, one can further improve the fidelity by utilizing all the available ancillas in the encoding block.
S-GOODNESS FOR LOW-RANK MATRIX RECOVERY, TRANSLATED FROM SPARSE SIGNAL RECOVERY
Abstract

Cited by 2 (0 self)
Low-rank matrix recovery (LMR) is a rank minimization problem subject to linear equality constraints, and it arises in many fields such as signal and image processing, statistics, computer vision, system identification, and control. This class of optimization problems is generally NP-hard. A popular approach replaces the rank function with the nuclear norm of the matrix variable. In this paper, we extend and characterize the concept of s-goodness for a sensing matrix in sparse signal recovery (proposed by Juditsky and Nemirovski [Math Program, 2011]) to linear transformations in LMR. Utilizing the two characteristic s-goodness constants, γ_s and γ̂_s, of a linear transformation, we derive necessary and sufficient conditions for a linear transformation to be s-good. Moreover, we establish the equivalence of s-goodness and the null space properties. Therefore, s-goodness is a necessary and sufficient condition for exact s-rank matrix recovery via nuclear norm minimization.
A Rank-Corrected Procedure for Matrix Completion with Fixed Basis Coefficients
, 2012
Abstract

Cited by 1 (0 self)
In this paper, we address low-rank matrix completion problems with fixed basis coefficients, which include low-rank correlation matrix completion in various fields such as the financial market, and low-rank density matrix completion from quantum state tomography. For this class of problems, the efficiency of the common nuclear norm penalized estimator for recovery may be challenged. Here, with a reasonable initial estimator, we propose a rank-corrected procedure to generate an estimator of high accuracy and low rank. For this new estimator, we establish a non-asymptotic recovery error bound and analyze the impact of adding the rank-correction term on improving the recoverability. We also provide necessary and sufficient conditions for rank consistency in the sense of Bach [3], in which the concept of constraint nondegeneracy in matrix optimization plays an important role. As a byproduct, our results provide a theoretical foundation for the majorized penalty method of Gao and Sun [25] and Gao [24] for structured low-rank matrix optimization problems.