Results 1–10 of 295
Matrix completion from noisy entries
 Journal of Machine Learning Research
Cited by 124 (8 self)
Abstract: Given a matrix M of low rank, we consider the problem of reconstructing it from noisy observations of a small, random subset of its entries. The problem arises in a variety of applications, from collaborative filtering (the 'Netflix problem') to structure-from-motion and positioning. We study a low-complexity algorithm introduced in [1], based on a combination of spectral techniques and manifold optimization, that we call here OptSpace. We prove performance guarantees that are order-optimal in a number of circumstances.
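The spectral step at the heart of an OptSpace-style method can be sketched as follows. This is an illustrative simplification only: the trimming step and the manifold-optimization refinement described in the abstract are omitted, and all names are ours.

```python
import numpy as np

def spectral_init(M_obs, mask, r):
    """Rank-r spectral estimate from a zero-filled matrix of observed entries.

    M_obs : matrix with unobserved entries set to 0
    mask  : boolean array, True where an entry was observed
    r     : target rank
    """
    p = mask.mean()                                # fraction of observed entries
    U, s, Vt = np.linalg.svd(M_obs / p, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

# Toy check: a rank-1 matrix observed on roughly 70% of its entries.
rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(100), rng.standard_normal(80))
mask = rng.random(M.shape) < 0.7
M_hat = spectral_init(M * mask, mask, r=1)
rel_err = np.linalg.norm(M_hat - M) / np.linalg.norm(M)
```

Rescaling by 1/p makes the zero-filled matrix an unbiased estimate of M, so its leading singular vectors already approximate those of M; the manifold stage then refines this estimate.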
A tensor-based algorithm for high-order graph matching
 In CVPR
, 2009
Cited by 84 (3 self)
Abstract: This paper addresses the problem of establishing correspondences between two sets of visual features using higher-order constraints instead of the unary or pairwise ones used in classical methods. Concretely, the corresponding hypergraph matching problem is formulated as the maximization of a multilinear objective function over all permutations of the features. This function is defined by a tensor representing the affinity between feature tuples. It is maximized using a generalization of spectral techniques where a relaxed problem is first solved by a multidimensional power method, and the solution is then projected onto the closest assignment matrix. The proposed approach has been implemented, and it is compared to state-of-the-art algorithms on both synthetic and real data.
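A minimal sketch of the multidimensional power method the abstract describes, on a toy third-order affinity tensor. The tensor construction and the greedy projection onto an assignment are our own simplifications, not the paper's implementation (which projects onto the closest assignment matrix).

```python
import numpy as np

def tensor_power_match(H, n, iters=50):
    """Higher-order power iteration on an (n*n, n*n, n*n) affinity tensor H
    over candidate correspondences, followed by a greedy projection onto a
    permutation matrix."""
    v = np.full(n * n, 1.0 / n)                    # uniform soft assignment
    for _ in range(iters):
        v = np.einsum('ijk,j,k->i', H, v, v)       # contract the tensor twice
        v /= np.linalg.norm(v)
    X = v.reshape(n, n).copy()
    P = np.zeros((n, n))
    for _ in range(n):                             # greedy assignment
        i, j = np.unravel_index(np.argmax(X), X.shape)
        P[i, j] = 1.0
        X[i, :], X[:, j] = -np.inf, -np.inf
    return P

# Toy check: an affinity tensor biased toward the identity matching.
n = 3
t = np.eye(n).ravel()
H = np.einsum('i,j,k->ijk', t, t, t) + 0.01
P = tensor_power_match(H, n)
```

The power iteration amplifies the dominant "tensor eigenvector" of the nonnegative affinities, so the soft assignment concentrates on the rewarded matching before projection.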
Statistical Computations on Grassmann and Stiefel manifolds for Image and Video-Based Recognition
, 2010
Cited by 41 (4 self)
In this paper, we examine image- and video-based recognition applications where the underlying models have a special structure: the linear subspace structure. We discuss how commonly used parametric models for videos and image sets can be described using the unified framework of Grassmann and Stiefel manifolds. We first show that the parameters of linear dynamic models are finite-dimensional linear subspaces of appropriate dimensions. Unordered image sets, as samples from a finite-dimensional linear subspace, naturally fall under this framework. We show that the study of inference over subspaces can be naturally cast as an inference problem on the Grassmann manifold. To perform recognition using subspace-based models, we need tools from the Riemannian geometry of the Grassmann manifold. This involves a study of the geometric properties of the space, appropriate definitions of Riemannian metrics, and definition of geodesics. Further, we derive statistical models of inter- and intra-class variations that respect the geometry of the space. We apply techniques such as intrinsic and extrinsic statistics to enable maximum-likelihood classification. We also provide algorithms for unsupervised clustering derived from the geometry of the manifold. Finally, we demonstrate the improved performance of these methods in a wide variety of vision applications such as activity recognition. A preliminary version of this paper appeared in [1].
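As a concrete taste of the Grassmann geometry the abstract relies on, the geodesic distance between two subspaces can be computed from their principal angles. This is a standard construction offered as our own illustration, not code from the paper.

```python
import numpy as np

def grassmann_distance(A, B):
    """Geodesic distance between span(A) and span(B) on the Grassmann
    manifold, where A and B are orthonormal bases stored column-wise:
    the 2-norm of the vector of principal angles."""
    s = np.linalg.svd(A.T @ B, compute_uv=False)   # cosines of principal angles
    theta = np.arccos(np.clip(s, -1.0, 1.0))
    return np.linalg.norm(theta)

# Identical lines are at distance 0; orthogonal lines in R^2 are at
# distance pi/2.
e1 = np.array([[1.0], [0.0]])
e2 = np.array([[0.0], [1.0]])
d_same = grassmann_distance(e1, e1)
d_orth = grassmann_distance(e1, e2)
```

Because the distance depends only on the subspaces, not the particular bases, it is well defined on the Grassmann manifold and can feed directly into the nearest-neighbor or maximum-likelihood classifiers the abstract mentions.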
CONSENSUS OPTIMIZATION ON MANIFOLDS
 Vol. 48, No. 1, pp. 56–76, © 2009 Society for Industrial and Applied Mathematics
, 2009
Cited by 41 (8 self)
The present paper considers distributed consensus algorithms that involve N agents evolving on a connected compact homogeneous manifold. The agents track no external reference and communicate their relative state according to a communication graph. The consensus problem is formulated in terms of the extrema of a cost function. This leads to efficient gradient algorithms to synchronize the agents (i.e., maximize the consensus) or balance them (i.e., minimize the consensus); a convenient adaptation of the gradient algorithms is used when the communication graph is directed and time-varying. The cost function is linked to a specific centroid definition on manifolds, introduced here as the induced arithmetic mean, that is easily computable in closed form and may be of independent interest for a number of manifolds. The special orthogonal group SO(n) and the Grassmann manifold Grass(p, n) are treated as original examples. A link is also drawn with the many existing results on the circle.
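On the simplest case the abstract connects to, the circle, the synchronization gradient algorithm reduces to a Kuramoto-like update. The sketch below assumes a complete, undirected communication graph and a fixed step size; these choices are ours for brevity and are not the paper's general setting.

```python
import numpy as np

def synchronize(theta, step=0.05, iters=1000):
    """Gradient ascent of the consensus cost sum_{i,j} cos(theta_j - theta_i)
    for agents with headings theta_i on the circle (complete graph)."""
    theta = theta.copy()
    for _ in range(iters):
        diff = theta[None, :] - theta[:, None]     # theta_j - theta_i
        theta += step * np.sin(diff).sum(axis=1)
    return theta

rng = np.random.default_rng(1)
final = synchronize(rng.uniform(0, 2 * np.pi, size=8))
# The order parameter |mean(exp(i*theta))| equals 1 at exact synchronization
# and 0 at a balanced configuration.
order = abs(np.exp(1j * final).mean())
```

Flipping the sign of the update would instead descend the cost toward a balanced configuration, mirroring the synchronize/balance duality in the abstract.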
A feasible method for optimization with orthogonality constraints
 In Rice Univ. Technical Report ’10
Cited by 40 (5 self)
Abstract: Minimization with orthogonality constraints (e.g., X⊤X = I) and/or spherical constraints (e.g., ‖x‖₂ = 1) has wide applications in polynomial optimization, combinatorial optimization, eigenvalue problems, sparse PCA, p-harmonic flows, 1-bit compressive sensing, matrix rank minimization, etc. These problems are difficult because the constraints are not only non-convex but also numerically expensive to preserve during iterations. To deal with these difficulties, we propose a Crank-Nicolson-like update scheme that preserves the constraints and, based on it, develop curvilinear search algorithms with lower per-iteration cost than those based on projections and geodesics. The efficiency of the proposed algorithms is demonstrated on a variety of test problems. In particular, for the max-cut problem, it exactly solves a decomposition formulation for the SDP relaxation. For polynomial optimization, nearest correlation matrix estimation, and extreme eigenvalue problems, the proposed algorithms run very fast and return solutions no worse than those from state-of-the-art algorithms. For the quadratic assignment problem, a gap of 0.842% to the best known solution on the largest problem, “tai256c”, in QAPLIB can be reached in 5 minutes on a typical laptop.
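The constraint-preserving update can be sketched directly: with the skew-symmetric A = GX⊤ − XG⊤ built from the gradient G, the Cayley-style step Y(τ) = (I + τ/2 A)⁻¹(I − τ/2 A)X satisfies Y⊤Y = I whenever X⊤X = I, for every step size τ. The step-size search and stopping rules of the paper are omitted in this sketch, and the test data below are our own.

```python
import numpy as np

def cayley_step(X, G, tau):
    """One feasibility-preserving curvilinear step: with the skew-symmetric
    A = G X^T - X G^T, return Y(tau) = (I + tau/2 A)^{-1} (I - tau/2 A) X,
    which keeps Y^T Y = I whenever X^T X = I."""
    n = X.shape[0]
    A = G @ X.T - X @ G.T
    I = np.eye(n)
    return np.linalg.solve(I + (tau / 2) * A, (I - (tau / 2) * A) @ X)

rng = np.random.default_rng(2)
n, p = 6, 3
X, _ = np.linalg.qr(rng.standard_normal((n, p)))   # feasible starting point
G = rng.standard_normal((n, p))                     # stand-in gradient
Y = cayley_step(X, G, tau=0.5)
feasibility_error = np.linalg.norm(Y.T @ Y - np.eye(p))
```

Feasibility holds exactly (up to round-off) because the Cayley transform of a skew-symmetric matrix is orthogonal, which is why no projection or geodesic computation is needed per iteration.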
Low-rank matrix completion by Riemannian optimization
 ANCHPMATHICSE, Mathematics Section, École Polytechnique Fédérale de
Cited by 40 (4 self)
The matrix completion problem consists of finding or approximating a low-rank matrix based on a few samples of this matrix. We propose a novel algorithm for matrix completion that minimizes the least-squares distance on the sampling set over the Riemannian manifold of fixed-rank matrices. The algorithm is an adaptation of classical nonlinear conjugate gradients, developed within the framework of retraction-based optimization on manifolds. We describe all the objects from differential geometry necessary to perform optimization over this low-rank matrix manifold, seen as a submanifold embedded in the space of matrices. In particular, we describe how metric projection can be used as a retraction and how vector transport lets us obtain the conjugate search directions. Additionally, we derive second-order models that can be used in Newton's method, based on approximating the exponential map on this manifold to second order. Finally, we prove convergence of a regularized version of our algorithm under the assumption that the restricted isometry property holds for incoherent matrices throughout the iterations. The numerical experiments indicate that our approach scales very well for large-scale problems and compares favorably with the state of the art, while outperforming most existing solvers.
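The retraction by metric projection mentioned in the abstract amounts to a truncated SVD. The sketch below pairs it with plain gradient steps rather than the paper's Riemannian conjugate gradients, and the problem sizes and names are our own.

```python
import numpy as np

def retract_rank_k(X, k):
    """Metric projection onto the set of rank-k matrices (truncated SVD)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

def complete(M_obs, mask, k, step=1.0, iters=300):
    """Gradient steps on the sampled least-squares loss, retracted to rank k."""
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        grad = mask * (X - M_obs)                  # gradient on observed entries
        X = retract_rank_k(X - step * grad, k)
    return X

rng = np.random.default_rng(3)
M = np.outer(rng.standard_normal(30), rng.standard_normal(20))  # rank-1 target
mask = rng.random(M.shape) < 0.5
X = complete(M * mask, mask, k=1)
rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

The conjugate-gradient machinery of the paper accelerates exactly this kind of iteration by transporting previous search directions along the manifold instead of taking raw gradient steps.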
LOW-RANK OPTIMIZATION ON THE CONE OF POSITIVE SEMIDEFINITE MATRICES
Cited by 31 (6 self)
Abstract: We propose an algorithm for solving optimization problems defined on a subset of the cone of symmetric positive semidefinite matrices. This algorithm relies on the factorization X = YY⊤, where the number of columns of Y fixes an upper bound on the rank of the positive semidefinite matrix X. It is thus very effective for solving problems that have a low-rank solution. The factorization X = YY⊤ leads to a reformulation of the original problem as an optimization on a particular quotient manifold. The present paper discusses the geometry of that manifold and derives a second-order optimization method with guaranteed quadratic convergence. It furthermore provides some conditions on the rank of the factorization to ensure equivalence with the original problem. In contrast to existing methods, the proposed algorithm converges monotonically to the sought solution. Its numerical efficiency is evaluated on two applications: the maximal cut of a graph and the problem of sparse principal component analysis. Key words: low-rank constraints, cone of symmetric positive semidefinite matrices, Riemannian quotient manifold, sparse principal component analysis, maximum-cut algorithms, large-scale algorithms
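The X = YY⊤ idea can be illustrated on the max-cut application the abstract mentions: maximizing trace(LX)/4 over X ⪰ 0 with diag(X) = 1 becomes, under the factorization, an optimization over matrices Y with unit-norm rows. The sketch below uses simple projected gradient ascent in place of the paper's second-order quotient-manifold method, and the graph and parameters are our own toy choices.

```python
import numpy as np

def maxcut_lowrank(L, k, step=0.1, iters=500, seed=4):
    """Maximize trace(L Y Y^T)/4 over Y with unit-norm rows, i.e. the max-cut
    SDP relaxation under X = Y Y^T, by projected gradient ascent
    (row normalization is the projection onto the constraint set)."""
    rng = np.random.default_rng(seed)
    Y = rng.standard_normal((L.shape[0], k))
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)
    for _ in range(iters):
        Y = Y + step * (L @ Y)                     # ascend trace(L Y Y^T)
        Y /= np.linalg.norm(Y, axis=1, keepdims=True)
    return Y

# 4-cycle graph: the relaxation optimum equals the max cut, which is 4.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
Y = maxcut_lowrank(L, k=3)
sdp_value = np.trace(L @ Y @ Y.T) / 4
```

The number of columns k caps the rank of X = YY⊤, which is exactly the mechanism the abstract exploits for problems with low-rank solutions.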