Results 1-10 of 324
Dynamic Texture Recognition
2001
Cited by 87 (6 self)

Abstract:
Dynamic textures are sequences of images that exhibit some form of temporal stationarity, such as waves, steam, and foliage. We pose the problem of recognizing and classifying dynamic textures in the space of dynamical systems where each dynamic texture is uniquely represented. Since the space is nonlinear, a distance between models must be defined. We examine three different distances in the space of autoregressive models and assess their power.
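
For a concrete sense of what a distance between autoregressive models can look like, here is a minimal sketch (not necessarily one of the paper's three distances) of a Martin-style distance computed from principal angles between extended observability subspaces of two state-space models (A, C). The helper names and the horizon are illustrative.

```python
# Hypothetical sketch: compare two linear dynamic-texture models (A, C) via
# principal angles between their extended observability subspaces, a standard
# Martin-type distance on autoregressive/state-space models.
import numpy as np
from scipy.linalg import subspace_angles

def observability(A, C, horizon=10):
    """Stack [C; CA; CA^2; ...] up to the given horizon."""
    blocks, M = [], C
    for _ in range(horizon):
        blocks.append(M)
        M = M @ A
    return np.vstack(blocks)

def model_distance(A1, C1, A2, C2, horizon=10):
    """Martin-style distance from angles between observability subspaces."""
    th = subspace_angles(observability(A1, C1, horizon),
                         observability(A2, C2, horizon))
    return np.sqrt(-np.log(np.prod(np.cos(th) ** 2)))

rng = np.random.default_rng(0)
A = 0.9 * np.eye(3); C = rng.standard_normal((5, 3))
print(model_distance(A, C, A, C))   # ~0 for identical models
print(model_distance(A, C, 0.5 * np.eye(3), rng.standard_normal((5, 3))))
```
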
Toward the Optimal Preconditioned Eigensolver: Locally Optimal Block Preconditioned Conjugate Gradient Method
SIAM J. Sci. Comput., 2001
Cited by 82 (12 self)

Abstract:
We describe new algorithms of the locally optimal block preconditioned conjugate gradient (LOBPCG) method for symmetric eigenvalue problems, based on a local optimization of a three-term recurrence, and suggest several other new methods. To be able to compare numerically different methods in the class, with different preconditioners, we propose a common system of model tests, using random preconditioners and initial guesses. As the "ideal" control algorithm, we advocate the standard preconditioned conjugate gradient method for finding an eigenvector as an element of the nullspace of the corresponding homogeneous system of linear equations, under the assumption that the eigenvalue is known. We recommend that every new preconditioned eigensolver be compared with this "ideal" algorithm on our model test problems in terms of the speed of convergence, the cost of each iteration, and memory requirements. We provide such a comparison for our LOBPCG method. Numerical results establish that our algorithm is practically as efficient as the "ideal" algorithm when the same preconditioner is used in both methods. We also show numerically that the LOBPCG method provides approximations to the first eigenpairs of about the same quality as those obtained by the much more expensive global optimization method on the same generalized block Krylov subspace. We propose a new version of the block Davidson method as a generalization of the LOBPCG method. Finally, direct numerical comparisons with the Jacobi-Davidson method show that our method is more robust and converges almost twice as fast.
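
SciPy ships an implementation of this method as scipy.sparse.linalg.lobpcg. A minimal usage sketch follows, computing the smallest eigenpairs of a 1-D Laplacian with a simple diagonal (Jacobi) preconditioner; the test problem, block size, and tolerances are illustrative.

```python
# LOBPCG via SciPy: smallest eigenpairs of a sparse symmetric matrix
# with a diagonal preconditioner. Problem setup is illustrative.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

n = 1000
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")  # 1-D Laplacian
M = sp.diags(1.0 / A.diagonal())            # Jacobi (diagonal) preconditioner

rng = np.random.default_rng(0)
X = rng.standard_normal((n, 4))             # random block of initial guesses
vals, vecs = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=200)
print(vals)                                 # approx. the 4 smallest eigenvalues
```
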
Matrix completion from a few entries
"... Let M be a random nα × n matrix of rank r ≪ n, and assume that a uniformly random subset E of its entries is observed. We describe an efficient algorithm that reconstructs M from E  = O(r n) observed entries with relative root mean square error RMSE ≤ C(α) ..."
Abstract

Cited by 68 (5 self)
 Add to MetaCart
Let M be a random nα × n matrix of rank r ≪ n, and assume that a uniformly random subset E of its entries is observed. We describe an efficient algorithm that reconstructs M from |E| = O(rn) observed entries with relative root mean square error RMSE ≤ C(α) ...
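
A minimal sketch of the spectral step behind this kind of estimator: zero-fill the unobserved entries, rescale by the inverse sampling rate, and project to rank r. The paper's full algorithm additionally trims over-represented rows and columns and refines the estimate by local minimization; dimensions and the sampling rate below are illustrative.

```python
# Spectral reconstruction sketch for matrix completion (assumed simplification:
# no trimming, no refinement stage).
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 200, 300, 3
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r target

mask = rng.random((m, n)) < 0.25          # uniformly observed subset E
ME = np.where(mask, M, 0.0)               # zero-filled observations

U, s, Vt = np.linalg.svd(ME / mask.mean(), full_matrices=False)
Mhat = (U[:, :r] * s[:r]) @ Vt[:r]        # rank-r projection

rmse = np.sqrt(np.mean((Mhat - M) ** 2))
print(f"relative RMSE: {rmse / np.sqrt(np.mean(M**2)):.3f}")
```
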
Conditions for nonnegative independent component analysis
IEEE Signal Processing Letters, 2002
Cited by 63 (11 self)

Abstract:
We consider the noiseless linear independent component analysis problem in the case where the hidden sources s are nonnegative. We assume that the random variables s_i are well-grounded, in that they have a nonvanishing pdf in the (positive) neighbourhood of zero. For an orthonormal rotation y = Wx of prewhitened observations x = QAs, we show under certain reasonable conditions that y is a permutation of the s_i (apart from a scaling factor) if and only if y is nonnegative with probability 1. We suggest that this may enable the construction of practical learning algorithms, particularly for sparse nonnegative sources.
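
The stated characterization is easy to check numerically. In the sketch below (an illustration, not the paper's learning algorithm), unit-variance exponential sources are mixed, the observations are prewhitened with Q = Cov(x)^(-1/2), and the outputs of the oracle unmixing rotation are compared with those of a random rotation.

```python
# Numerical illustration of the nonnegativity criterion: after prewhitening,
# the true unmixing rotation yields nonnegative outputs; a generic one does not.
import numpy as np

rng = np.random.default_rng(0)
n, T = 3, 100_000
S = rng.exponential(size=(n, T))            # nonnegative, well-grounded, var = 1
A = rng.standard_normal((n, n))
X = A @ S                                   # noiseless mixtures x = A s

d, E = np.linalg.eigh(np.cov(X))            # Q = Cov(x)^(-1/2), so QA is
Q = E @ np.diag(1.0 / np.sqrt(d)) @ E.T     # (approximately) orthonormal
Z = Q @ X

W_true = np.linalg.inv(Q @ A)               # oracle unmixing rotation
W_rand = np.linalg.qr(rng.standard_normal((n, n)))[0]

for name, W in [("true rotation", W_true), ("random rotation", W_rand)]:
    Y = W @ Z
    print(name, "fraction of negative outputs:", np.mean(Y < 0).round(4))
# ~0 for the true rotation, clearly positive for a generic one.
```
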
Means and Averaging in the Group of Rotations
2002
Cited by 63 (1 self)

Abstract:
In this paper we give precise definitions of different, properly invariant notions of mean or average rotation. Each mean is associated with a metric in SO(3). The metric induced from the Frobenius inner product gives rise to a mean rotation that is given by the closest special orthogonal matrix to the usual arithmetic mean of the given rotation matrices. The mean rotation associated with the intrinsic metric on SO(3) is the Riemannian center of mass of the given rotation matrices. We show that the Riemannian mean rotation shares many common features with the geometric mean of positive numbers and the geometric mean of positive Hermitian operators. We give some examples with closed-form solutions for both notions of mean.
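
A minimal numerical sketch of both notions, assuming the input rotations are reasonably clustered so the intrinsic mean is well defined: the projected arithmetic (Frobenius) mean via the SVD, and the Riemannian center of mass via the standard exp/log fixed-point iteration. Helper names are mine.

```python
# Two means on SO(3): SVD projection of the arithmetic mean, and the
# Riemannian (Karcher) mean via exp/log iteration.
import numpy as np
from scipy.linalg import expm, logm

def skew(v):
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0.0]])

def euclidean_mean(Rs):
    """Closest special orthogonal matrix to the arithmetic mean."""
    U, _, Vt = np.linalg.svd(sum(Rs) / len(Rs))
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # enforce det = +1
    return U @ D @ Vt

def riemannian_mean(Rs, iters=50):
    """Riemannian center of mass via fixed-point iteration."""
    M = Rs[0]
    for _ in range(iters):
        avg = sum(logm(M.T @ R) for R in Rs) / len(Rs)  # mean log-map at M
        M = M @ expm(avg.real)
    return M

rng = np.random.default_rng(0)
Rs = [expm(skew(0.2 * rng.standard_normal(3))) for _ in range(10)]  # clustered
print(euclidean_mean(Rs))
print(riemannian_mean(Rs))   # close to the Euclidean mean for tight clusters
```
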
On Augmented Lagrangian methods with general lower-level constraints
2005
Cited by 59 (7 self)

Abstract:
Augmented Lagrangian methods with general lower-level constraints are considered in the present research. These methods are useful when efficient algorithms exist for solving subproblems in which the constraints are only of the lower-level type. Two methods of this class are introduced and analyzed. Inexact resolution of the lower-level constrained subproblems is considered. Global convergence is proved using the Constant Positive Linear Dependence constraint qualification. Conditions for boundedness of the penalty parameters are discussed. The reliability of the approach is tested by means of an exhaustive comparison against Lancelot, using all the problems of the CUTE collection. Moreover, the resolution of location problems in which many constraints of the lower-level set are nonlinear is addressed, employing the Spectral Projected Gradient method for solving the subproblems. Problems of this type with more than 3 × 10^6 variables and 14 × 10^6 constraints are solved in this way, using moderate computer time.
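
A toy Powell-Hestenes-Rockafellar loop in the spirit of this framework (not the paper's implementation): the upper-level equality constraint is penalized, while the simple lower-level constraints, here bounds, are handled directly by the subproblem solver. Problem data and the penalty update rule are illustrative.

```python
# Augmented Lagrangian sketch: penalize h(x) = 0, leave the box constraints
# to the bound-constrained subproblem solver (L-BFGS-B).
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2) ** 2 + (x[1] - 2) ** 2
h = lambda x: x[0] + x[1] - 1.0            # upper-level constraint h(x) = 0
bounds = [(0.0, 1.0), (0.0, 1.0)]          # lower-level constraints

x, lam, rho = np.zeros(2), 0.0, 10.0
for _ in range(20):
    L = lambda x: f(x) + lam * h(x) + 0.5 * rho * h(x) ** 2
    x = minimize(L, x, method="L-BFGS-B", bounds=bounds).x
    lam += rho * h(x)                      # first-order multiplier update
    if abs(h(x)) > 1e-8:
        rho *= 2.0                         # crude penalty increase, illustrative
print(x, h(x))                             # -> approx [0.5, 0.5], h ~ 0
```
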
Generalized Low Rank Approximations of Matrices
Machine Learning, 2004
Cited by 56 (6 self)

Abstract:
We consider the problem of computing low rank approximations of matrices. The novelty of our approach is that the low rank approximations are on a sequence of matrices. Unlike the ...
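
The abstract is cut off above. For context, in this line of work a whole collection of matrices A_i is approximated by L M_i R^T with shared orthonormal factors L and R. Below is a sketch of the usual alternating eigenvector scheme, under that assumed formulation; dimensions and ranks are illustrative.

```python
# Alternating scheme for a generalized low-rank approximation of a sequence
# of matrices {A_i} ~ L @ M_i @ R.T (assumed formulation).
import numpy as np

rng = np.random.default_rng(0)
As = [rng.standard_normal((30, 20)) for _ in range(50)]  # sequence of matrices
k1, k2 = 5, 4                                            # target ranks

R = np.linalg.qr(rng.standard_normal((20, k2)))[0]       # initial guess
for _ in range(20):
    _, L = np.linalg.eigh(sum(A @ R @ R.T @ A.T for A in As))
    L = L[:, -k1:]                                       # top-k1 eigenvectors
    _, R = np.linalg.eigh(sum(A.T @ L @ L.T @ A for A in As))
    R = R[:, -k2:]                                       # top-k2 eigenvectors

Ms = [L.T @ A @ R for A in As]                           # core matrices
err = sum(np.linalg.norm(A - L @ M @ R.T) ** 2 for A, M in zip(As, Ms))
print(err / sum(np.linalg.norm(A) ** 2 for A in As))     # relative residual
```
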
On Lagrangian relaxation of quadratic matrix constraints
SIAM J. Matrix Anal. Appl., 2000
Cited by 50 (19 self)

Abstract:
Quadratically constrained quadratic programs (QQPs) play an important modeling role for many diverse problems. These problems are in general NP-hard and numerically intractable. Lagrangian relaxations often provide good approximate solutions to these hard problems. Such relaxations are equivalent to semidefinite programming relaxations. For several special cases of QQP, e.g., convex programs and trust region subproblems, the Lagrangian relaxation provides the exact optimal value, i.e., there is a zero duality gap. However, this is not true for the general QQP, or even for the QQP with two convex constraints but a nonconvex objective. In this paper we consider a certain QQP where the quadratic constraints correspond to the matrix orthogonality condition XX^T = I. For this problem we show that the Lagrangian dual based on relaxing the constraints XX^T = I and the seemingly redundant constraints X^T X = I has a zero duality gap. This result has natural applications to quadratic assignment and graph partitioning problems, as well as the problem of minimizing the weighted sum of the largest eigenvalues of a matrix. We also show that the technique of relaxing quadratic matrix constraints can be used to obtain a strengthened semidefinite relaxation for the max-cut problem.
Key words: Lagrangian relaxations, quadratically constrained quadratic programs, semidefinite programming, quadratic assignment, graph partitioning, max-cut problems
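
For context, the baseline that such strengthened relaxations improve on is the standard max-cut SDP. A sketch with cvxpy follows; the graph, weights, and solver defaults are illustrative, and this is the plain relaxation, not the strengthened one derived in the paper.

```python
# Standard SDP relaxation of max-cut: relax X = x x^T with x in {-1,+1}^n
# to X >= 0 (PSD) with unit diagonal, and maximize tr(L X) / 4.
import cvxpy as cp
import numpy as np

W = np.array([[0, 1, 1, 0],               # symmetric edge-weight matrix
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0.0]])
L = np.diag(W.sum(axis=1)) - W            # graph Laplacian

X = cp.Variable(W.shape, PSD=True)
prob = cp.Problem(cp.Maximize(0.25 * cp.trace(L @ X)),
                  [cp.diag(X) == 1])
prob.solve()
print(prob.value)                         # upper bound on the max-cut value
```
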
Riemannian geometry of Grassmann manifolds with a view on algorithmic computation
Acta Appl. Math.
Cited by 49 (11 self)

Abstract:
We give simple formulas for the canonical metric, gradient, Lie ...
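
The abstract is truncated above. One well-known closed-form object in this literature is the Grassmann geodesic; below is a sketch using the standard SVD-based formula Y(t) = Y V cos(Σt) V^T + U sin(Σt) V^T for a horizontal tangent H = U Σ V^T. Dimensions are illustrative.

```python
# Grassmann geodesic through span(Y) in direction H, via the thin SVD of H.
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 3
Y = np.linalg.qr(rng.standard_normal((n, p)))[0]   # point: span(Y), Y^T Y = I

H = rng.standard_normal((n, p))
H -= Y @ (Y.T @ H)                                 # project to horizontal space

def geodesic(Y, H, t):
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    return (Y @ Vt.T * np.cos(t * s) + U * np.sin(t * s)) @ Vt

Yt = geodesic(Y, H, 0.7)
print(np.allclose(Yt.T @ Yt, np.eye(p)))           # stays on the manifold: True
```
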
Learning spectral clustering, with application to speech separation
Journal of Machine Learning Research, 2006
Cited by 43 (5 self)

Abstract:
Spectral clustering refers to a class of techniques that rely on the eigenstructure of a similarity matrix to partition points into disjoint clusters, with points in the same cluster having high similarity and points in different clusters having low similarity. In this paper, we derive new cost functions for spectral clustering based on measures of error between a given partition and a solution of the spectral relaxation of a minimum normalized cut problem. Minimizing these cost functions with respect to the partition leads to new spectral clustering algorithms. Minimizing with respect to the similarity matrix leads to algorithms for learning the similarity matrix from fully labelled datasets. We apply our learning algorithm to the blind one-microphone speech separation problem, casting the problem as one of segmentation of the spectrogram.
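
A baseline normalized-cut spectral clustering pipeline of the kind the paper's cost functions are built around (the generic recipe, not the paper's learning algorithm; the data and kernel bandwidth are illustrative):

```python
# Generic spectral clustering: Gaussian similarity, normalized Laplacian,
# leading eigenvectors, then k-means on the embedded points.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(c, 0.15, size=(40, 2)) for c in (-1.0, 0.0, 1.0)])
k, sigma = 3, 0.3

d2 = ((pts[:, None] - pts[None]) ** 2).sum(-1)       # pairwise squared distances
W = np.exp(-d2 / (2 * sigma**2))                     # similarity matrix
Dm = np.diag(W.sum(1) ** -0.5)
Lsym = np.eye(len(W)) - Dm @ W @ Dm                  # normalized Laplacian

_, vecs = np.linalg.eigh(Lsym)
U = vecs[:, :k]                                      # k smallest eigenvectors
U = U / np.linalg.norm(U, axis=1, keepdims=True)     # row-normalize embedding
_, labels = kmeans2(U, k, seed=0, minit="++")
print(labels)                                        # three recovered clusters
```
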