Results 1–10 of 47
Independent Component Analysis
 Neural Computing Surveys
, 2001
Cited by 1488 (93 self)
Abstract:
A common problem encountered in such disciplines as statistics, data analysis, signal processing, and neural network research is finding a suitable representation of multivariate data. For computational and conceptual simplicity, such a representation is often sought as a linear transformation of the original data. Well-known linear transformation methods include, for example, principal component analysis, factor analysis, and projection pursuit. A recently developed linear transformation method is independent component analysis (ICA), in which the desired representation is the one that minimizes the statistical dependence of the components of the representation. Such a representation seems to capture the essential structure of the data in many applications. In this paper, we survey the existing theory and methods for ICA.
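The idea the abstract describes — seeking the linear transform whose components are maximally statistically independent — can be sketched with a minimal FastICA-style iteration on synthetic data. The sources, mixing matrix, and tanh nonlinearity below are illustrative assumptions, not the survey's own algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Two independent non-Gaussian sources: uniform (sub-Gaussian), Laplace (super-Gaussian)
S = np.c_[np.sign(rng.standard_normal(n)) * rng.random(n),
          rng.laplace(size=n)]
A = np.array([[1.0, 0.5], [0.3, 1.0]])   # hypothetical mixing matrix
X = S @ A.T                               # observed mixtures

# Whiten: zero mean, identity covariance
X = X - X.mean(axis=0)
d, E = np.linalg.eigh(np.cov(X, rowvar=False))
Z = X @ E @ np.diag(1.0 / np.sqrt(d))

# FastICA with deflation: maximize non-Gaussianity of w^T z, one unit at a time
W = np.zeros((2, 2))
for i in range(2):
    w = rng.standard_normal(2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        g = np.tanh(Z @ w)
        w_new = (Z * g[:, None]).mean(axis=0) - (1 - g**2).mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)   # deflate against earlier components
        w_new /= np.linalg.norm(w_new)
        converged = abs(abs(w_new @ w) - 1) < 1e-9
        w = w_new
        if converged:
            break
    W[i] = w

S_est = Z @ W.T   # recovered components (up to sign and permutation)
```

Each recovered column should correlate strongly with one true source, which is exactly the sense in which ICA "captures the essential structure" that PCA's decorrelation alone would miss.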
Principal Component Analysis based on Robust Estimators of the Covariance or Correlation Matrix: Influence Functions and Efficiencies
 BIOMETRIKA
, 2000
Cited by 36 (6 self)
Abstract:
A robust principal component analysis can be easily performed by computing the eigenvalues and eigenvectors of a robust estimator of the covariance or correlation matrix. In this paper we derive the influence functions and the corresponding asymptotic variances for these robust estimators of eigenvalues and eigenvectors. The behavior of several of these estimators is investigated by a simulation study. Finally, the use of empirical influence functions is illustrated by a real data example.
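The recipe in the abstract — eigenvalues and eigenvectors of a robust covariance or correlation estimate — can be illustrated with a simple stand-in robust estimator (Spearman rank correlation here; the paper analyzes other robust estimators, and the data and outlier below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
# Strongly positively correlated 2-D data, plus one gross outlier
X = rng.multivariate_normal([0, 0], [[1, 0.9], [0.9, 1]], size=300)
X[0] = [50.0, -50.0]   # a single outlier flips the classical first axis

def rank(a):
    # simple ranks; fine for continuous data with no ties
    return np.argsort(np.argsort(a))

# Robust correlation: Pearson correlation of column ranks (Spearman)
R_rob = np.corrcoef(np.apply_along_axis(rank, 0, X), rowvar=False)
R_cls = np.corrcoef(X, rowvar=False)   # classical correlation

# Robust PCA = eigendecomposition of the robust correlation matrix
_, V_rob = np.linalg.eigh(R_rob)
_, V_cls = np.linalg.eigh(R_cls)
v_rob = V_rob[:, -1]   # leading eigenvector, robust estimate
v_cls = V_cls[:, -1]   # leading eigenvector, classical estimate
```

The robust leading axis still points along the positive-correlation direction, while the classical one is dragged onto the outlier's axis — the kind of sensitivity the paper's influence functions quantify.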
Cosmic confusion: degeneracies among cosmological parameters derived from measurement of microwave background anisotropies
Cited by 15 (1 self)
Abstract:
A number of investigations have shown that high precision measurements of the cosmic microwave background (CMB) anisotropies can be used to determine many cosmological parameters to unprecedented precision (Jungman et al. 1996;
Learning Nonlinear Models of Shape and Motion
, 1999
Cited by 13 (3 self)
Abstract:
Deformable models have been an active area of research in computer vision for a number of years. Their ability to model non-rigid objects through the combination of geometry and physics has proven a valuable tool in image processing. More recently, a class of deformable objects known as Point Distribution Models or Eigen Models has been introduced. These statistical models of deformation overcome some of the shortfalls of earlier deformable models by learning what is 'allowable' deformation for an object class from a training set of examples. This semi-automated learning procedure provides a more generic approach to object recognition, tracking, and classification. Their strength lies in their simplicity and speed of operation, allowing complex deformations to be modelled robustly in cluttered environments. However, the automated construction of such models leads to a breakdown of the fundamental assumptions upon which they are based: primarily, that the underlying mathematical model is linear in nature. Furthermore, as more complex objects ...
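A Point Distribution Model in the sense described above is essentially PCA on aligned shape vectors. A minimal sketch on hypothetical ellipse "shapes" with one true mode of variation (the training shapes and their single aspect-ratio parameter are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy "shapes": 20 landmark points on an ellipse whose aspect ratio
# varies across training examples via one random parameter
t = np.linspace(0, 2 * np.pi, 20, endpoint=False)
shapes = np.array([
    np.c_[np.cos(t), (1 + 0.3 * rng.standard_normal()) * np.sin(t)].ravel()
    for _ in range(100)
])                       # (100 shapes) x (40 coordinates)

mean = shapes.mean(axis=0)
D = shapes - mean        # deviations from the mean shape
_, S, Vt = np.linalg.svd(D, full_matrices=False)
var = S**2 / len(shapes)           # variance explained per mode
modes = Vt                         # eigenvectors = modes of deformation
```

Because the toy data has exactly one underlying deformation parameter, the first mode absorbs essentially all the variance — and this is also where the abstract's caveat bites: a genuinely nonlinear deformation would not compress into so few linear modes.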
Scale-Invariant Image Recognition Based On Higher Order Autocorrelation Features
 Pattern Recognition
, 1996
Cited by 12 (1 self)
Abstract:
We propose a framework and a complete implementation of a translation- and scale-invariant image recognition system for natural indoor scenes. The system employs higher order autocorrelation features of scale space data, which permit linear classification. An optimal linear classification method is presented, which is able to cope with a large number of classes represented by many as well as very few samples. In the course of the analysis of our system, we examine which numerical methods for feature transformation and classification show sufficient stability to fulfill these demands. The implementation has been extensively tested. We present the results of our own application and several classification benchmarks. Keywords: image recognition, face recognition, scale invariance, scale space, higher order autocorrelation, optimal linear classification. 1. INTRODUCTION The task of visual recognition, defined by Marr (1) with the question "What objects are where in the environment?", is still ...
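The translation invariance of higher order autocorrelation features can be checked directly: a second-order autocorrelation sums products of the image with shifted copies of itself, so a (periodic) shift of the whole image leaves the feature unchanged. A minimal sketch, where the random image and displacement vectors are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)
img = rng.random((16, 16))   # hypothetical grey-level image

def autocorr2(f, d1, d2):
    # Second-order autocorrelation feature: sum_x f(x) f(x+d1) f(x+d2),
    # with periodic boundary conditions
    return float((f * np.roll(f, d1, (0, 1)) * np.roll(f, d2, (0, 1))).sum())

feat = autocorr2(img, (0, 1), (1, 0))
shifted = np.roll(img, (3, 5), (0, 1))       # globally translated image
feat_shifted = autocorr2(shifted, (0, 1), (1, 0))
```

Since the sum runs over every pixel position, translating the image only reorders the terms; the feature value is identical, which is what makes a linear classifier on such features translation invariant.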
Methods for Enhancing Neural Network Handwritten Character Recognition
 International Joint Conference on Neural Networks
, 1991
Cited by 8 (4 self)
Abstract:
An efficient method for increasing the generalization capacity of neural character recognition is presented. The network uses a biologically inspired architecture for feature extraction and character classification. The numerical methods used are, however, optimized for use on massively parallel array processors. The method for training set construction, when applied to handwritten digit recognition, yielded a writer-independent recognition rate of 92%. The activation strength produced by network recognition is an effective statistical confidence measure of the accuracy of recognition. A method of using the activation strength for reclassification is described which, when applied to handwritten digit recognition, reduced substitution errors to 2.2%. 1.0 Introduction This paper uses a three-part method for writer-independent digit recognition. First, character images are used to calculate least-squares optimized Gabor components. For the digit recognition problem, 32 Gabor basis funct...
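The first step mentioned above — least-squares optimized Gabor components — can be sketched as an ordinary least-squares projection of an image onto a small Gabor basis. The basis parameters and 8-function bank below are illustrative assumptions (the paper uses 32 basis functions):

```python
import numpy as np

def gabor(size, freq, theta, sigma):
    # Real-valued 2-D Gabor function: Gaussian envelope times oriented cosine
    y, x = np.mgrid[:size, :size] - (size - 1) / 2
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

size = 16
# Hypothetical small bank: 2 frequencies x 4 orientations
basis = np.stack([gabor(size, f, th, 4.0).ravel()
                  for f in (0.1, 0.2)
                  for th in np.linspace(0, np.pi, 4, endpoint=False)])

rng = np.random.default_rng(6)
img = rng.random(size * size)    # hypothetical character image, flattened

# Least-squares Gabor components: minimize ||basis.T @ c - img||
c, *_ = np.linalg.lstsq(basis.T, img, rcond=None)
recon = basis.T @ c              # reconstruction from Gabor components
```

The component vector `c` (not the raw pixels) would then be the feature input to classification; by the normal equations, the reconstruction residual is orthogonal to every basis function.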
Latent classification models
 Machine Learning
, 2005
Cited by 8 (2 self)
Abstract:
One of the simplest, and yet most consistently well-performing, sets of classifiers is the Naïve Bayes models. These models rely on two assumptions: (i) all the attributes used to describe an instance are conditionally independent given the class of that instance, and (ii) all attributes follow a specific parametric family of distributions. In this paper we propose a new set of models for classification in continuous domains, termed latent classification models. The latent classification model can roughly be seen as combining the Naïve Bayes model with a mixture of factor analyzers, thereby relaxing the assumptions of the Naïve Bayes classifier. In the proposed model the continuous attributes are described by a mixture of multivariate Gaussians, where the conditional dependencies among the attributes are encoded using latent variables. We present algorithms for learning both the parameters and the structure of a latent classification model, and we demonstrate empirically that the accuracy of the proposed model is significantly higher than the accuracy of other probabilistic classifiers. Keywords: classification, probabilistic graphical models, Naïve Bayes, correlation
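Assumptions (i) and (ii) above are easiest to see in code: a Gaussian Naïve Bayes classifier fits one independent Gaussian per attribute per class. This minimal sketch on hypothetical two-class data is the baseline the latent classification models relax, not the proposed model itself:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical two-class continuous data
X0 = rng.normal([0, 0], 1.0, size=(200, 2))
X1 = rng.normal([3, 3], 1.0, size=(200, 2))
X = np.vstack([X0, X1])
y = np.r_[np.zeros(200), np.ones(200)]

def fit_gnb(X, y):
    # Assumption (ii): each attribute is Gaussian within each class;
    # assumption (i): attributes are independent given the class.
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(0), Xc.var(0), len(Xc) / len(X))
    return params

def predict(params, X):
    scores = []
    for c, (mu, var, prior) in params.items():
        # Sum of per-attribute log densities = independence assumption (i)
        ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(1)
        scores.append(ll + np.log(prior))
    return np.array(list(params))[np.argmax(scores, axis=0)]

pred = predict(fit_gnb(X, y), X)
```

A latent classification model replaces the diagonal per-class Gaussians with a mixture of factor analyzers, so correlated attributes no longer violate the model.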
Neural Networks for Encoding and Adapting in Dynamic Economies
, 1995
Cited by 6 (2 self)
Abstract:
... this paper draw heavily on materials in chapters 3 and 4 of Sargent's Bounded Rationality in Macroeconomics, Oxford University Press, 1993.
Interpreting canonical correlation analysis through biplots of structural correlations and weights
 Psychometrika
, 1990
Cited by 5 (1 self)
Abstract:
This paper extends the biplot technique to canonical correlation analysis and redundancy analysis. The plot of structure correlations is shown to be optimal for displaying the pairwise correlations between the variables of one set and those of the second. The link between multivariate regression and canonical correlation analysis/redundancy analysis is exploited to produce an optimal biplot that displays a matrix of regression coefficients. This plot can be made from the canonical weights of the predictors and the structure correlations of the criterion variables. An example is used to show how the proposed biplots may be interpreted. Key words: biplot, canonical correlation analysis, canonical weight, inter-battery factor analysis, partial analysis, redundancy analysis, regression coefficient, reduced rank regression, structure correlations.
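The quantities being plotted — canonical weights and structure correlations — come out of a standard SVD formulation of canonical correlation analysis. A minimal sketch on hypothetical two-block data (the variable construction and shared latent factor are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
z = rng.standard_normal(n)   # hypothetical shared latent factor
# Two blocks: first variable of each block carries the shared factor
X = np.c_[z + 0.5 * rng.standard_normal(n), rng.standard_normal(n)]
Y = np.c_[z + 0.5 * rng.standard_normal(n), rng.standard_normal(n)]
Xc, Yc = X - X.mean(0), Y - Y.mean(0)

def inv_sqrt(M):
    # Inverse square root of a covariance matrix via eigendecomposition
    d, E = np.linalg.eigh(M.T @ M / len(M))
    return E @ np.diag(1.0 / np.sqrt(d)) @ E.T

Wx, Wy = inv_sqrt(Xc), inv_sqrt(Yc)
# SVD of the whitened cross-covariance gives the canonical pairs
U, rho, Vt = np.linalg.svd(Wx @ (Xc.T @ Yc / n) @ Wy)

A = Wx @ U          # canonical weights for the X block
variates = Xc @ A   # canonical variates
# Structure correlations: correlations of original X variables with the variates
struct = np.corrcoef(np.c_[Xc, variates], rowvar=False)[:2, 2:]
```

The biplots described above display columns of `A` (weights) alongside rows of `struct` (structure correlations); here the leading canonical correlation `rho[0]` should sit near the population value of 0.8 implied by the construction.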