Results 1–10 of 77
Robust Principal Component Analysis for Computer Vision
, 2001
Abstract

Cited by 95 (3 self)
Principal Component Analysis (PCA) has been widely used for the representation of shape, appearance, and motion. One drawback of typical PCA methods is that they are least squares estimation techniques and hence fail to account for "outliers" which are common in realistic training sets. In computer vision applications, outliers typically occur within a sample (image) due to pixels that are corrupted by noise, alignment errors, or occlusion. We review previous approaches for making PCA robust to outliers and present a new method that uses an intra-sample outlier process to account for pixel outliers. We develop the theory of Robust Principal Component Analysis (RPCA) and describe a robust M-estimation algorithm for learning linear multivariate representations of high dimensional data such as images. Quantitative comparisons with traditional PCA and previous robust algorithms illustrate the benefits of RPCA when outliers are present. Details of the algorithm are described and a software implementation is being made publicly available.
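As a rough illustration of the M-estimation idea in this abstract, the sketch below fits a low-dimensional subspace by iteratively reweighting individual entries according to their reconstruction residuals, using a Geman-McClure-style weight so that outlying pixels contribute less. The function name, the weighted-SVD heuristic, and the specific weight function are illustrative assumptions, not the authors' published algorithm.

```python
import numpy as np

def robust_pca(X, k, n_iter=20, c=1.0):
    """Iteratively reweighted subspace fit; X is (n_samples, n_features)."""
    W = np.ones_like(X, dtype=float)
    for _ in range(n_iter):
        # weighted mean, then a weighted SVD-based subspace estimate
        mu = (W * X).sum(axis=0) / W.sum(axis=0)
        Xc = X - mu
        U, s, Vt = np.linalg.svd(np.sqrt(W) * Xc, full_matrices=False)
        B = Vt[:k]                      # basis, shape (k, n_features)
        coeffs = Xc @ B.T               # projection coefficients
        R = Xc - coeffs @ B             # per-entry residuals
        W = c**2 / (c**2 + R**2)**2     # Geman-McClure-style downweighting
    return mu, B
```

With clean data the weights stay near one and the result matches ordinary PCA; entries with large residuals are progressively ignored.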
A Survey of Dimension Reduction Techniques
, 2002
Abstract

Cited by 88 (0 self)
this paper, we assume that we have $n$ observations, each being a realization of the $p$-dimensional random variable $x = (x_1, \ldots, x_p)$ with mean $E(x) = \mu = (\mu_1, \ldots, \mu_p)$ and covariance matrix $E\{(x-\mu)(x-\mu)^T\} = \Sigma_{p \times p}$. We denote such an observation matrix by $X = \{x_{i,j} : 1 \le i \le p,\ 1 \le j \le n\}$. If $\mu_i$ and $\sigma_i = \sqrt{\Sigma_{(i,i)}}$ denote the mean and the standard deviation of the $i$th random variable, respectively, then we will often standardize the observations $x_{i,j}$ by $(x_{i,j} - \bar{x}_i)/\hat{\sigma}_i$, where $\bar{x}_i = (1/n)\sum_{j=1}^{n} x_{i,j}$ and $\hat{\sigma}_i = \big((1/n)\sum_{j=1}^{n}(x_{i,j} - \bar{x}_i)^2\big)^{1/2}$
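The standardization described above can be written out directly. In this sketch, rows of X are the p variables and columns the n observations, matching the survey's convention; note that NumPy's `std` uses the same 1/n normalization as the text.

```python
import numpy as np

def standardize(X):
    """Standardize each of the p variables (rows) of a p x n matrix."""
    xbar = X.mean(axis=1, keepdims=True)    # per-variable mean
    sigma = X.std(axis=1, keepdims=True)    # per-variable std (1/n form)
    return (X - xbar) / sigma

X = np.array([[1.0, 2.0, 3.0],
              [10.0, 20.0, 30.0]])
Z = standardize(X)   # each row of Z now has mean 0 and standard deviation 1
```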
Streaming Pattern Discovery in Multiple Time-Series
 In VLDB
, 2005
Abstract

Cited by 68 (15 self)
In this paper, we introduce SPIRIT (Streaming Pattern dIscoveRy in multIple Timeseries). Given n numerical data streams, all of whose values we observe at each time tick t, SPIRIT can incrementally find correlations and hidden variables, which summarise the key trends in the entire stream collection.
Evolutionary Pursuit and Its Application to Face Recognition
 IEEE Trans. Pattern Analysis and Machine Intelligence
, 2000
Abstract

Cited by 68 (10 self)
This paper introduces Evolutionary Pursuit (EP) as a novel and adaptive representation method for image encoding and classification. In analogy to projection pursuit methods, EP seeks to learn an optimal basis for the dual purpose of data compression and pattern classification. The challenge for EP is to increase the generalization ability of the learning machine as a result of seeking the tradeoff between minimizing the empirical risk encountered during training and narrowing the confidence interval for reducing the guaranteed risk during future testing on unseen images. Towards that end, EP implements strategies characteristic of genetic algorithms (GAs) for searching the space of possible solutions to determine the optimal basis. EP starts by projecting the original data into a lower dimensional whitened Principal Component Analysis (PCA) space. Directed but random rotations of the basis vectors in this space are then searched by GAs where evolution is driven by a fitness function defined in terms of performance accuracy (`empirical risk') and class separation (`confidence interval'). Accuracy indicates the extent to which learning has been successful so far, while separation gives an indication of the expected fitness on future trials. The feasibility of the new method has been successfully tested on face recognition where the large number of possible bases requires some type of greedy search algorithm. The particular face recognition task involves 1,107 FERET frontal face images corre...
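To make the search idea concrete, the toy sketch below projects data into a whitened 2-D PCA space and then tries random rotations of the basis, keeping the rotation whose first axis best separates two classes. The fitness here (between-class mean distance) is a crude stand-in for the paper's GA-driven combination of accuracy and class separation; all names are illustrative.

```python
import numpy as np

def rotation_search(X, labels, n_trials=200, seed=0):
    """Random search over rotations of a whitened 2-D PCA basis."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    # whitened PCA coordinates: roughly unit variance per direction
    Z = Xc @ Vt[:2].T / s[:2] * np.sqrt(len(X))
    best_theta, best_fit = 0.0, -np.inf
    for _ in range(n_trials):
        th = rng.uniform(0, np.pi)
        R = np.array([[np.cos(th), -np.sin(th)],
                      [np.sin(th),  np.cos(th)]])
        W = Z @ R
        # fitness: class-mean separation along the first rotated axis
        fit = abs(W[labels == 0, 0].mean() - W[labels == 1, 0].mean())
        if fit > best_fit:
            best_theta, best_fit = th, fit
    return best_theta, best_fit
```

A GA would replace the blind random draws with selection and mutation over a population of rotation angles, but the fitness-driven structure is the same.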
Robust Parameterized Component Analysis: Theory and Applications to 2D Facial Modeling
 Computer Vision and Image Understanding, 91:53–71
, 2002
Abstract

Cited by 41 (7 self)
Principal Component Analysis (PCA) has been successfully applied to construct linear models of shape, gray-level, and motion. In particular, PCA has been widely used to model the variation in the appearance of people's faces. We extend previous work on facial modeling for tracking faces in video sequences as they undergo significant changes due to facial expressions. Here we develop person-specific facial appearance models (PSFAM), which use modular PCA to model complex intra-person appearance changes. Such models require aligned visual training data; in previous work, this has involved a time-consuming and error-prone hand alignment and cropping process. Instead, we introduce parameterized component analysis to learn a subspace that is invariant to affine (or higher order) geometric transformations. The automatic learning of a PSFAM given a training image sequence is posed as a continuous optimization problem and is solved with a mixture of stochastic and deterministic techniques achieving sub-pixel accuracy.
A review of dimension reduction techniques
, 1997
Abstract

Cited by 31 (4 self)
The problem of dimension reduction is introduced as a way to overcome the curse of dimensionality when dealing with vector data in high-dimensional spaces and as a modelling tool for such data. It is defined as the search for a low-dimensional manifold that embeds the high-dimensional data. A classification of dimension reduction problems is proposed. A survey of several techniques for dimension reduction is given, including principal component analysis, projection pursuit and projection pursuit regression, principal curves and methods based on topologically continuous maps, such as Kohonen’s maps or the generative topographic mapping. Neural network implementations for several of these techniques are also reviewed, such as the projection pursuit learning network and the BCM neuron with an objective function. Several appendices complement the mathematical treatment of the main text.
Principal Component Analysis
 Wiley Interdisciplinary Reviews: Computational Statistics, 2 (in press)
, 2010
Abstract

Cited by 28 (5 self)
Principal component analysis (PCA) is a multivariate technique that analyzes a data table in which observations are described by several inter-correlated quantitative dependent variables. Its goal is to extract the important information from the table, to represent it as a set of new orthogonal variables called principal components, and to display the pattern of similarity of the observations and of the variables as points in maps. The quality of the PCA model can be evaluated using cross-validation techniques such as the bootstrap and the jackknife. PCA can be generalized as correspondence analysis (CA) in order to handle qualitative variables and as multiple factor analysis (MFA) in order to handle heterogeneous sets of variables. Mathematically, PCA depends upon the eigen-decomposition of positive semi-definite matrices and upon the singular value decomposition (SVD) of rectangular matrices.
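The SVD machinery this abstract mentions reduces to a few lines of code. The sketch below is a generic PCA via the SVD of the centered data table (rows are observations); variable names are my own, not the paper's notation.

```python
import numpy as np

def pca(X, k):
    """Return observation scores, principal directions, and variance shares."""
    Xc = X - X.mean(axis=0)                   # center each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                       # orthonormal principal directions
    scores = Xc @ components.T                # coordinates of the observations
    explained = s[:k]**2 / np.sum(s**2)       # fraction of variance per component
    return scores, components, explained
```

Plotting the first two columns of `scores` gives exactly the "map" of observations the abstract describes.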
Dynamic Coupled Component Analysis
, 2001
Abstract

Cited by 21 (6 self)
We present a method for simultaneously learning linear models of multiple high dimensional data sets and the dependencies between them. For example, we learn asymmetrically coupled linear models for the faces of two different people and show how these models can be used to animate one face given a video sequence of the other. We pose the problem as a form of Asymmetric Coupled Component Analysis (ACCA) in which we simultaneously learn the subspaces for reducing the dimensionality of each dataset while coupling the parameters of the low dimensional representations.
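A minimal sketch of the coupling idea: fit a linear subspace to each dataset, then learn a regression between the two sets of low-dimensional coefficients, so that an observation of the first dataset predicts a reconstruction of the second (the asymmetric direction). The two-stage least-squares coupling and all names are assumptions for illustration, not the paper's joint ACCA formulation.

```python
import numpy as np

def coupled_models(X, Y, k):
    """Learn k-dim subspaces for X and Y and a map from X-coeffs to Y-coeffs."""
    def basis(Z):
        Zc = Z - Z.mean(axis=0)
        return Z.mean(axis=0), np.linalg.svd(Zc, full_matrices=False)[2][:k]
    mx, Bx = basis(X)
    my, By = basis(Y)
    Cx = (X - mx) @ Bx.T                # low-dimensional coefficients of X
    Cy = (Y - my) @ By.T                # low-dimensional coefficients of Y
    A, *_ = np.linalg.lstsq(Cx, Cy, rcond=None)   # asymmetric coupling map
    def predict_y(x_new):
        return my + ((x_new - mx) @ Bx.T) @ A @ By
    return predict_y
```

In the paper's face-animation example, `X` would hold frames of one person's face and `Y` the other's; `predict_y` then animates the second face from new frames of the first.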
Generalizable Patterns in Neuroimaging: How Many Principal Components?
 NeuroImage
, 1999
Abstract

Cited by 19 (6 self)
Generalization can be defined quantitatively and can be used to assess the performance of Principal Component Analysis (PCA). The generalizability of PCA depends on the number of principal components retained in the analysis. We provide analytic and test-set estimates of generalization. We show how the generalization error can be used to select the number of principal components in two analyses of functional Magnetic Resonance Imaging activation sets.

1 Introduction

Principal Component Analysis (PCA) and the closely related Singular Value Decomposition (SVD) technique are popular tools for analysis of image databases and are actively investigated in functional neuroimaging [Moeller & Strother 91, Friston et al. 93, Lautrup et al. 95, Strother et al. 95, Ardekani et al. 98, Worsley et al. 97]. Using PCA, the image database is decomposed in terms of orthogonal "eigenimages" that may lend themselves to direct interpretation. The principal components, the projections of the image data ont...
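In the spirit of the test-set estimate described above, here is a hedged sketch: fit PCA on a training split, compute held-out reconstruction error for each candidate number of components, and pick the count at the largest relative error drop. Plain held-out projection error decreases monotonically with k, so a knee criterion is used here as a proxy; this is not the paper's estimator.

```python
import numpy as np

def select_k(X_train, X_test, k_max):
    """Pick the component count at the largest drop in held-out error."""
    mu = X_train.mean(axis=0)
    Vt = np.linalg.svd(X_train - mu, full_matrices=False)[2]
    prev = np.mean((X_test - mu) ** 2)       # error with zero components
    best_k, best_ratio, errors = 1, 0.0, []
    for k in range(1, k_max + 1):
        B = Vt[:k]                           # leading k principal directions
        recon = mu + ((X_test - mu) @ B.T) @ B
        err = np.mean((X_test - recon) ** 2)
        errors.append(err)
        if prev / err > best_ratio:          # knee: biggest relative drop
            best_k, best_ratio = k, prev / err
        prev = err
    return best_k, errors
```

Keep `k_max` below the data dimension so the held-out error never reaches exactly zero.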
Fast Dimensionality Reduction and Simple PCA
, 1997
Abstract

Cited by 15 (1 self)
A fast and simple algorithm for approximately calculating the principal components (PCs) of a data set and so reducing its dimensionality is described. This Simple Principal Components Analysis (SPCA) method was used for dimensionality reduction of two high-dimensional image databases, one of handwritten digits and one of handwritten Japanese characters. It was tested and compared with other techniques. On both databases SPCA shows a fast convergence rate compared with other methods and robustness to the reordering of the samples.

KEYWORDS: Principal component analysis, matrix diagonalization, Hebbian learning, image compression.

All correspondence should be addressed to this author.
† Permanent address: Instituto de Fisica Rosario, Bvd. 27 de Febrero 210 Bis, 2000 Rosario, Argentina.

1 Introduction

High dimensional data analysis is becoming increasingly common as new problems are placing greater demands on computing resources. With high dimensional data, it is difficult to unde...
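To illustrate the Hebbian-learning keyword above, here is a sketch of extracting the first principal component with Oja's rule, the classic Hebbian update that converges to the leading eigenvector of the data covariance. This illustrates the family of methods, not the paper's specific SPCA algorithm.

```python
import numpy as np

def oja_first_pc(X, lr=0.002, n_epochs=100, seed=0):
    """Stochastic Hebbian estimate of the first principal direction."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_epochs):
        for x in Xc:
            y = x @ w                   # Hebbian response
            w += lr * y * (x - y * w)   # Oja's rule: Hebb term plus decay
        w /= np.linalg.norm(w)          # keep the estimate unit length
    return w
```

Unlike full matrix diagonalization, each update touches one sample at a time, which is what makes such methods attractive for large, high-dimensional databases.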