Results 1–10 of 84
Independent Component Analysis
 Neural Computing Surveys
, 2001
Abstract
Cited by 1550 (93 self)
A common problem encountered in such disciplines as statistics, data analysis, signal processing, and neural network research is finding a suitable representation of multivariate data. For computational and conceptual simplicity, such a representation is often sought as a linear transformation of the original data. Well-known linear transformation methods include, for example, principal component analysis, factor analysis, and projection pursuit. A recently developed linear transformation method is independent component analysis (ICA), in which the desired representation is the one that minimizes the statistical dependence of the components of the representation. Such a representation seems to capture the essential structure of the data in many applications. In this paper, we survey the existing theory and methods for ICA.
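The linear-representation setting the survey describes can be illustrated in a few lines of NumPy: independent sources are mixed by an unknown matrix, and PCA-style whitening (one of the classical linear transformations mentioned above) decorrelates the observations; ICA then looks for the extra rotation making the components independent. This is an illustrative sketch, not code from the survey; all variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent non-Gaussian sources (the "essential structure").
n = 2000
s = np.vstack([rng.uniform(-1, 1, n),          # sub-Gaussian source
               rng.laplace(0.0, 1.0, n)])      # super-Gaussian source

# Observations are an unknown linear mixture x = A s.
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
x = A @ s

# PCA-style whitening: a linear transform z = W x with cov(z) = I.
x_c = x - x.mean(axis=1, keepdims=True)
cov = x_c @ x_c.T / n
d, E = np.linalg.eigh(cov)
z = (E / np.sqrt(d)) @ E.T @ x_c   # whitening transform

# After whitening the components are uncorrelated but generally still
# dependent; ICA searches for a further rotation making them independent.
print(np.allclose(z @ z.T / n, np.eye(2), atol=1e-6))   # prints True
```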
Fast and robust fixed-point algorithms for independent component analysis
 IEEE Trans. Neural Netw.
, 1999
Abstract
Cited by 535 (34 self)
Independent component analysis (ICA) is a statistical method for transforming an observed multidimensional random vector into components that are statistically as independent from each other as possible. In this paper, we use a combination of two different approaches for linear ICA: Comon's information-theoretic approach and the projection pursuit approach. Using maximum entropy approximations of differential entropy, we introduce a family of new contrast (objective) functions for ICA. These contrast functions enable both the estimation of the whole decomposition by minimizing mutual information, and the estimation of individual independent components as projection pursuit directions. The statistical properties of the estimators based on such contrast functions are analyzed under the assumption of the linear mixture model, and it is shown how to choose contrast functions that are robust and/or of minimum variance. Finally, we introduce simple fixed-point algorithms for practical optimization of the contrast functions. These algorithms optimize the contrast functions quickly and reliably.
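The fixed-point iteration the abstract refers to can be sketched for a single component. The following is the well-known one-unit FastICA update with the log-cosh contrast (g = tanh), written as our own illustrative reimplementation, not the authors' code; it assumes the input has already been whitened.

```python
import numpy as np

def fastica_one_unit(z, n_iter=100, seed=0):
    """One-unit fixed-point iteration on whitened data z (dim x samples),
    using the robust log-cosh contrast: g(u) = tanh(u), g'(u) = 1 - tanh(u)^2."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        u = w @ z
        g = np.tanh(u)
        g_prime = 1.0 - g ** 2
        # Fixed-point update: w <- E[z g(w'z)] - E[g'(w'z)] w, then renormalize.
        w_new = (z * g).mean(axis=1) - g_prime.mean() * w
        w_new /= np.linalg.norm(w_new)
        converged = abs(abs(w_new @ w) - 1.0) < 1e-10   # up to sign flips
        w = w_new
        if converged:
            break
    return w

# Demo: estimate one independent component from a whitened 2-D mixture.
rng = np.random.default_rng(1)
s = np.vstack([rng.uniform(-1, 1, 5000), rng.laplace(size=5000)])
x = np.array([[2.0, 1.0], [1.0, 2.0]]) @ s
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(x @ x.T / 5000)
z = (E / np.sqrt(d)) @ E.T @ x
w = fastica_one_unit(z)
y = w @ z   # estimated independent component
```

In practice the iteration converges in a handful of steps, which is the "fast" in FastICA; running it for several components with a decorrelation step between units recovers the full decomposition.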
Synthesizing Bidirectional Texture Functions for Real-World Surfaces
, 2001
Abstract
Cited by 59 (6 self)
In this paper, we present a novel approach to synthetically generating bidirectional texture functions (BTFs) of real-world surfaces. Unlike a conventional two-dimensional texture, a BTF is a six-dimensional function that describes the appearance of texture as a function of illumination and viewing directions. The BTF captures the appearance change caused by visible small-scale geometric details on surfaces. From a sparse set of images under different viewing/lighting settings, our approach generates BTFs in three steps. First, it recovers approximate 3D geometry of surface details using a shape-from-shading method. Then, it generates a novel version of the geometric details that has the same statistical properties as the sample surface with a nonparametric sampling method. Finally, it employs an appearance-preserving procedure to synthesize novel images for the recovered or generated geometric details under various viewing/lighting settings, which then define a BTF. Our experimental results demonstrate the effectiveness of our approach. CR Categories: I.2.10 [Artificial Intelligence]: Vision and Scene Understanding: modeling and recovery of physical attributes; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism: color, shading, shadowing, and texture; I.4.8 [Image Processing]: Scene Analysis: color, photometry, shading. Keywords: Bidirectional Texture Functions, Reflectance and Shading Models, Texture Synthesis, Shape-from-Shading, Photometric Stereo, Image-Based Rendering.
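To make the six-dimensional nature of a BTF concrete, here is a toy sketch (entirely our own, with hypothetical names such as `SampledBTF`, not an API from the paper) of how a sparsely sampled BTF could be stored and queried: images captured under discrete viewing/lighting directions, answered by nearest-neighbor lookup over the concatenated direction vectors. A real BTF synthesis system interpolates and synthesizes novel images rather than just looking samples up.

```python
import numpy as np

class SampledBTF:
    """Toy sampled BTF: texture images indexed by (view, light) directions.
    The appearance is a function of six angular degrees of freedom; here we
    store unit view and light vectors side by side as a 6-vector key."""

    def __init__(self):
        self._dirs = []    # list of 6-vectors: (view_xyz, light_xyz)
        self._images = []  # matching texture images

    def add_sample(self, view, light, image):
        v = np.asarray(view, float)
        l = np.asarray(light, float)
        key = np.concatenate([v / np.linalg.norm(v), l / np.linalg.norm(l)])
        self._dirs.append(key)
        self._images.append(np.asarray(image, float))

    def query(self, view, light):
        """Return the stored image whose (view, light) key is closest."""
        v = np.asarray(view, float)
        l = np.asarray(light, float)
        q = np.concatenate([v / np.linalg.norm(v), l / np.linalg.norm(l)])
        dists = [np.linalg.norm(q - d) for d in self._dirs]
        return self._images[int(np.argmin(dists))]
```

For example, after `add_sample([0, 0, 1], [0, 0, 1], img)` a query with a slightly tilted view direction returns that same image, mimicking the piecewise-constant appearance of a raw sampled BTF.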
On Incremental and Robust Subspace Learning
 Pattern Recognition
, 2003
Abstract
Cited by 27 (0 self)
Principal Component Analysis (PCA) has been of great interest in computer vision and pattern recognition. In particular, incrementally learning a PCA model, which is computationally efficient for large-scale problems as well as adaptable to reflect the variable state of a dynamic system, is an attractive research topic with numerous applications such as adaptive background modelling and active object recognition. In addition, the conventional PCA, in the sense of least mean squared error minimisation, is susceptible to outlying measurements.
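A minimal sketch of the incremental idea (our own illustration, not the paper's algorithm): maintain a running mean and scatter matrix with Welford-style rank-one updates, so the PCA model can be refreshed after each new observation without storing the data. Class and method names are hypothetical.

```python
import numpy as np

class IncrementalPCA:
    """Incrementally maintained PCA model: running mean plus scatter
    matrix, with eigendecomposition performed only when components are
    requested."""

    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.scatter = np.zeros((dim, dim))

    def update(self, x):
        """Fold one observation into the model (O(dim^2) per sample)."""
        x = np.asarray(x, float)
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        # Welford-style rank-one update of the unnormalized covariance.
        self.scatter += np.outer(delta, x - self.mean)

    def components(self, k):
        """Top-k eigenvalues and eigenvectors of the current covariance."""
        cov = self.scatter / max(self.n - 1, 1)
        vals, vecs = np.linalg.eigh(cov)
        order = np.argsort(vals)[::-1][:k]
        return vals[order], vecs[:, order]
```

This keeps memory constant in the number of samples; note it is the plain least-squares formulation, so like conventional PCA it remains sensitive to outliers unless combined with a robust weighting scheme of the kind the paper studies.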
Information geometry of U-Boost and Bregman divergence
 Neural Computation
, 2004
Abstract
Cited by 24 (8 self)
We aim to extend AdaBoost to U-Boost, in the paradigm of building a stronger classification machine from a set of weak learning machines. A geometric understanding of the Bregman divergence defined by a generic convex function U leads to the U-Boost method, in the framework of information geometry for the finite measure functions over the label set. We propose two versions of U-Boost learning algorithms, according to whether or not the domain is restricted to the space of probability functions. In the sequential step, we observe that the two adjacent classifiers and the initial classifier form a right triangle, in the scale given by the Bregman divergence, called the Pythagorean relation. This leads to a mild convergence property of the U-Boost algorithm, as seen in the EM algorithm. A statistical discussion of consistency and robustness elucidates the properties of U-Boost methods under a probabilistic assumption on the training data.
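For readers coming from AdaBoost: in the U-Boost framework, AdaBoost is recovered by the choice U(v) = exp(v), whose Bregman divergence is the exponential loss. The following is our own minimal AdaBoost sketch with threshold stumps (not the paper's algorithm), included to show the exponential reweighting that a general U replaces.

```python
import numpy as np

def adaboost(X, y, n_rounds=20):
    """Minimal AdaBoost with threshold stumps; labels y in {-1, +1}.
    In the U-Boost view this is the special case U(v) = exp(v)."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)            # example weights
    ensemble = []                      # (alpha, feature, threshold, sign)
    for _ in range(n_rounds):
        best = None
        for j in range(d):             # exhaustive stump search
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] <= t, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        err, j, t, s = best
        err = min(max(err, 1e-12), 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = s * np.where(X[:, j] <= t, 1, -1)
        w *= np.exp(-alpha * y * pred)  # the exponential U at work
        w /= w.sum()
        ensemble.append((alpha, j, t, s))
    return ensemble

def predict(ensemble, X):
    score = np.zeros(len(X))
    for alpha, j, t, s in ensemble:
        score += alpha * s * np.where(X[:, j] <= t, 1, -1)
    return np.sign(score)
```

Replacing the `np.exp` reweighting with the derivative of another convex U yields other members of the family, e.g. bounded weight updates that are more robust to mislabeled examples.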
Bayesian Statistics
 in WWW', Computing Science and Statistics
, 1989
Abstract
Cited by 23 (0 self)
∗ Signatures are on file in the Graduate School. This dissertation presents two topics from opposite disciplines: one is from a parametric realm and the other is based on nonparametric methods. The first topic is a jackknife maximum likelihood approach to statistical model selection, and the second is a convex hull peeling depth approach to nonparametric massive multivariate data analysis. The second topic includes simulations and applications on massive astronomical data. First, we present a model selection criterion, minimizing the Kullback-Leibler distance by using the jackknife method. Various model selection methods have been developed to choose a model of minimum Kullback-Leibler distance to the true model, such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), minimum description length (MDL), and the bootstrap information criterion. Likewise, the jackknife method chooses a model of minimum Kullback-Leibler distance through bias reduction. This bias, which is inevitable in model ...
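The jackknife idea behind the first topic can be sketched with leave-one-out log-likelihood (our own illustration, not the dissertation's exact jackknife-MLE criterion): refit the model with each observation deleted, score the held-out point, and select the model maximizing the average held-out log-likelihood, which targets minimal Kullback-Leibler distance to the truth.

```python
import numpy as np

def loo_loglik(x, y, degree):
    """Leave-one-out (jackknife-style) estimate of expected log-likelihood
    for polynomial regression with Gaussian noise. Higher is better; model
    selection picks the degree maximizing this estimate."""
    n = len(x)
    total = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        coef = np.polyfit(x[mask], y[mask], degree)       # refit without i
        resid = y[mask] - np.polyval(coef, x[mask])
        sigma2 = max(resid @ resid / (n - 1), 1e-12)      # noise variance MLE
        e = y[i] - np.polyval(coef, x[i])                 # held-out error
        total += -0.5 * (np.log(2 * np.pi * sigma2) + e * e / sigma2)
    return total / n
```

Like AIC or BIC, this penalizes overfitting automatically: an overparameterized model fits the retained points better but predicts the deleted point worse, lowering the criterion.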
Noise Reduction in Images: Some Recent Edge-Preserving Methods
, 1999
Abstract
Cited by 17 (5 self)
We introduce some recent and very recent smoothing methods which focus on the preservation of boundaries, spikes, and canyons in the presence of noise. We try to point out basic principles they have in common; the most important one is the robustness aspect. It is reflected by the use of 'cup functions' in the statistical loss functions instead of squares; such cup functions were introduced early in robust statistics to downweight outliers. Basically, they are variants of truncated squares. We discuss all the methods in the common framework of 'energy functions', i.e., we associate to (most of) the algorithms a 'loss function' in such a fashion that the output of the algorithm, or the 'estimate', is a global or local minimum of this loss function. The third aspect we pursue is the correspondence between loss functions, their local minima, and nonlinear filters. We shall argue that the nonlinear filters can be interpreted as variants of gradient descent on the loss functions. This way we can ...
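A small sketch of this energy-function view (our own illustration, with arbitrary constants): the 'cup function' is a truncated square, and gradient descent on the resulting energy acts as an edge-preserving nonlinear filter, smoothing fluctuations smaller than the cup width while leaving larger jumps untouched.

```python
import numpy as np

def truncated_square(t, c):
    """A 'cup function': quadratic near zero, flat beyond |t| = c, so large
    jumps (edges) incur no additional penalty."""
    return np.minimum(t * t, c * c)

def smooth(y, lam=2.0, c=0.5, step=0.05, n_iter=1000):
    """Gradient descent on E(u) = sum (u - y)^2 + lam * sum rho(u[i+1] - u[i])
    with rho the truncated square; each step combines a data-attachment pull
    with a nonlinear neighbor filter."""
    u = y.copy()
    for _ in range(n_iter):
        d = np.diff(u)
        # Derivative of the truncated square: 2t inside the cup, 0 outside,
        # so differences larger than c (edges) are simply not smoothed.
        g = np.where(np.abs(d) < c, 2 * d, 0.0)
        grad = 2 * (u - y)
        grad[:-1] -= lam * g
        grad[1:] += lam * g
        u -= step * grad
    return u
```

On a noisy step signal this reduces the noise on the flat parts while keeping the jump sharp, exactly the boundary-preserving behavior the survey discusses; replacing `truncated_square` with a plain square recovers ordinary (edge-blurring) quadratic smoothing.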
Independent Component Analysis by Minimization of Mutual Information
, 1997
Abstract
Cited by 15 (0 self)
Independent component analysis (ICA) is a statistical method for transforming an observed multidimensional random vector into components that are statistically as independent from each other as possible. In this paper, the linear version of the ICA problem is approached from an information-theoretic viewpoint, using Comon's framework of minimizing mutual information of the components. Using maximum entropy approximations of differential entropy, we introduce a family of new contrast (objective) functions for ICA, which can also be considered 1-D projection pursuit indexes. The statistical properties of the estimators based on such contrast functions are analyzed under the assumption of the linear mixture model. It is shown how to choose optimal contrast functions according to different criteria. Novel algorithms for maximizing the contrast functions are then introduced. Hebbian-like learning rules are shown to result from gradient descent methods. Finally, in order to speed up the conv...
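The maximum-entropy approximation mentioned here has a simple form that is easy to sketch (constants dropped, names ours): with a nonquadratic G, negentropy is approximated by the squared gap between E[G(y)] and its value under a standard Gaussian. The quantity is near zero for Gaussian data and grows for both sub- and super-Gaussian components, so it can serve as a projection pursuit index.

```python
import numpy as np

def negentropy_approx(y, n_ref=100_000, seed=0):
    """Approximate negentropy contrast J(y) ~ (E[G(y)] - E[G(nu)])^2 with
    G(u) = log cosh(u) and nu standard Gaussian (proportionality constant
    dropped).  y should be standardized to zero mean and unit variance."""
    G = lambda u: np.log(np.cosh(u))
    rng = np.random.default_rng(seed)
    nu = rng.normal(size=n_ref)          # Monte Carlo reference for E[G(nu)]
    return (G(y).mean() - G(nu).mean()) ** 2
```

Evaluated on standardized samples, the contrast is markedly larger for a Laplacian (super-Gaussian) or uniform (sub-Gaussian) component than for a Gaussian one, which is exactly what makes it usable as an ICA objective.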
Order Statistics Learning Vector Quantizer
 IEEE Trans. on Image Processing
, 1995
Abstract
Cited by 15 (11 self)
In this correspondence, we propose a novel class of Learning Vector Quantizers (LVQs) based on multivariate data ordering principles. A special case of the novel LVQ class is the Median LVQ, which uses either the marginal median or the vector median as a multivariate estimator of location. The performance of the proposed marginal median LVQ in color image quantization is demonstrated by experiments.
1 Introduction
Neural networks (NNs) [1, 2] are a rapidly expanding research field which has attracted the attention of scientists and engineers in the last decade. A large variety of artificial neural networks has been developed, based on a multitude of learning techniques and having different topologies [2]. One prominent example of neural networks is the Learning Vector Quantizer (LVQ). It is an auto-associative nearest-neighbor classifier which classifies arbitrary patterns into classes using an error-correction encoding procedure related to competitive learning [1]. In order to make a distinct...
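The marginal median update can be sketched as follows (our own toy version, not the authors' exact training rule): each incoming sample is assigned to its nearest prototype, and the prototype is replaced by the componentwise median of a sliding window of its assigned samples, which resists gross outliers where a running-mean LVQ update would drift.

```python
import numpy as np

def marginal_median_lvq(X, prototypes, window=50):
    """Marginal-median LVQ sketch: prototypes track the componentwise
    (marginal) median of recent assigned samples instead of a running mean."""
    protos = [np.asarray(p, float) for p in prototypes]
    wins = [[] for _ in protos]          # per-prototype sample windows
    for x in X:
        # Competitive step: assign the sample to the nearest prototype.
        k = int(np.argmin([np.linalg.norm(x - p) for p in protos]))
        wins[k].append(x)
        if len(wins[k]) > window:
            wins[k].pop(0)
        # Robust location update: marginal (componentwise) median.
        protos[k] = np.median(np.asarray(wins[k]), axis=0)
    return np.asarray(protos)
```

With two well-separated clusters and a few gross outliers, the prototypes settle on the cluster centers because the median ignores the outlying samples in each window; a mean-based update would be pulled toward them.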
Example-Based Hair Geometry Synthesis
Abstract
Cited by 14 (2 self)
[Figure caption fragment: level-2 geometry reconstructed from (e); (g) level-1 of the output hierarchy; (h) final output hair geometry.] We present an example-based approach to hair modeling, because creating hairstyles either manually or through image-based acquisition is a costly and time-consuming process. We introduce a hierarchical hair synthesis framework that views a hairstyle both as a 3D vector field and as a 2D arrangement of hair strands on the scalp. Since hair forms wisps, a hierarchical hair clustering algorithm has been developed for detecting wisps in example hairstyles. The coarsest level of the output hairstyle is synthesized using traditional 2D texture synthesis techniques. Synthesizing finer levels of the hierarchy is based on cluster-oriented detail transfer. Finally, we compute a discrete tangent vector field from the synthesized hair at every level of the hierarchy to remove undesired inconsistencies among hair trajectories; improved hair trajectories can then be extracted from this vector field. Based on our automatic hair synthesis method, we have also developed simple user-controlled synthesis and editing techniques, including feature-preserving combing as well as detail transfer between different hairstyles.
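The last step, removing inconsistencies in a discrete tangent field, can be sketched with a simple relaxation (our own illustration, not the paper's method): each grid cell's tangent is replaced by the normalized average of itself and its sign-aligned 4-neighbors, so neighboring trajectories stop reversing direction while the field is smoothed.

```python
import numpy as np

def smooth_tangent_field(V, n_iter=10):
    """Relax a discrete tangent field on an H x W grid (H, W, 3): average
    each unit vector with its 4-neighbors, flipping neighbor signs first so
    adjacent tangents agree in orientation before averaging."""
    V = V / np.linalg.norm(V, axis=2, keepdims=True)
    H, W, _ = V.shape
    for _ in range(n_iter):
        out = V.copy()
        for i in range(H):
            for j in range(W):
                acc = V[i, j].copy()
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    a, b = i + di, j + dj
                    if 0 <= a < H and 0 <= b < W:
                        nb = V[a, b]
                        if nb @ V[i, j] < 0:   # orientation inconsistency
                            nb = -nb
                        acc += nb
                out[i, j] = acc / np.linalg.norm(acc)
        V = out
    return V
```

After a few iterations, adjacent tangents are nearly parallel up to sign, which is the kind of locally consistent field from which smooth hair trajectories can be traced.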