Results 1–10 of 19
Nonlinear projection with curvilinear distances: Isomap versus curvilinear distance analysis
, 2004
Abstract

Cited by 23 (4 self)
Dimensionality reduction techniques are widely used for the analysis and visualization of complex sets of data. This paper compares two recently published methods for nonlinear projection: Isomap and Curvilinear Distance Analysis (CDA). Contrary to traditional linear PCA, these methods work like multidimensional scaling, by reproducing in the projection space the pairwise distances measured in the data space. However, they differ from classical linear MDS by the metric they use and by the way they build the mapping (algebraic or neural). While Isomap relies directly on traditional MDS, CDA is based on a nonlinear variant of MDS, called Curvilinear Component Analysis (CCA). Although Isomap and CDA share the same metric, the comparison highlights their respective strengths and weaknesses. © 2004 Elsevier B.V. All rights reserved. Keywords: Nonlinear projection; Nonlinear dimensionality reduction; Geodesic distance; Curvilinear distance. 1. Introduction. When analyzing huge sets of numerical data, problems often occur when the raw data are high-dimensional. For example, such situations are typical in domains like image processing (large number of pixels) or biomedical signal analysis (numerous sensors).
The curse of dimensionality in data mining and time series prediction
 Computational Intelligence and Bioinspired Systems, Lecture Notes in Computer Science 3512
, 2005
Abstract

Cited by 18 (0 self)
www.ucl.ac.be/mlg Abstract. Modern data analysis tools have to work on high-dimensional data, whose components are not independently distributed. High-dimensional spaces show surprising, counterintuitive geometrical properties that have a large influence on the performance of data analysis tools. Among these properties, the concentration of the norm phenomenon results in the fact that Euclidean norms and Gaussian kernels, both commonly used in models, become inappropriate in high-dimensional spaces. This paper presents alternative distance measures and kernels, together with geometrical methods to decrease the dimension of the space. The methodology is applied to a typical time series prediction example.
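The concentration-of-the-norm phenomenon mentioned above is easy to reproduce empirically: as the dimension grows, the relative contrast between the farthest and nearest neighbour of a point shrinks, so neighbourhood rankings lose their meaning. A minimal illustration (sample sizes and dimensions are arbitrary choices):

```python
import numpy as np

# Relative contrast (max - min) / min of the distances from one
# reference point to all others, for growing dimension d.
rng = np.random.default_rng(0)
contrasts = {}
for d in (2, 10, 100, 1000):
    X = rng.uniform(size=(500, d))
    dists = np.linalg.norm(X[1:] - X[0], axis=1)
    contrasts[d] = (dists.max() - dists.min()) / dists.min()
    print(f"d={d:4d}  relative contrast={contrasts[d]:.2f}")
```

The contrast drops sharply with the dimension, which is exactly why Euclidean norms and Gaussian kernels degrade on high-dimensional data.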
Nonlinear Dimensionality Reduction of Data Manifolds With Essential Loops
, 2005
Abstract

Cited by 14 (3 self)
Numerous methods or algorithms have been designed to solve the problem of nonlinear dimensionality reduction (NLDR). However, very few among them are able to embed efficiently 'circular' manifolds like cylinders or tori, which have one or more essential loops. This paper presents a simple and fast procedure that can tear or cut those manifolds, i.e. break their essential loops, in order to make their embedding in a low-dimensional space easier. The key idea is the following: starting from the available data points, the tearing procedure represents the underlying manifold by a graph and then builds a maximum subgraph without any loops. Because it works with a graph, the procedure can preprocess data for all NLDR techniques that use the same representation. Recent techniques using geodesic distances (Isomap, geodesic Sammon's mapping, geodesic CCA, etc.) or K-ary neighborhoods (LLE, hLLE, Laplacian eigenmaps) fall into that category. After describing the tearing procedure in detail, the paper comments on a few experimental results.
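Since a maximum loop-free subgraph is a spanning tree, the simplest way to see the key idea is to take a spanning tree of the neighbourhood graph: every loop, essential or not, is then cut. The paper's actual tearing procedure is more selective (it aims to break only the essential loops), so the minimum spanning tree below is only a crude stand-in for it:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

# Points on a circle: the simplest manifold with one essential loop.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
X = np.column_stack([np.cos(theta), np.sin(theta)])

# Build a symmetric k-nearest-neighbour graph of the data.
D = squareform(pdist(X))
k = 4
W = np.zeros_like(D)
for i in range(len(X)):
    nn = np.argsort(D[i])[1:k + 1]
    W[i, nn] = D[i, nn]
W = np.maximum(W, W.T)

# A spanning tree is a maximal loop-free subgraph: taking one cuts
# the circle's essential loop and leaves n - 1 = 99 edges.
mst = minimum_spanning_tree(csr_matrix(W))
print(mst.nnz)
```

Any NLDR technique that consumes a neighbourhood graph (Isomap, geodesic CCA, LLE, Laplacian eigenmaps, etc.) could then be run on the torn graph instead of the original one.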
Local Linear Projection (LLP)
 in Proc. of First Workshop on Genomic Signal Processing and Statistics (GENSIPS)
, 2002
Abstract

Cited by 11 (5 self)
Dimensionality reduction has important applications in exploratory data analysis. A method based on Local Linear Projection (LLP) is proposed. The advantage of this method is that it is robust against uncertainty. Statistical analysis is applied to estimate parameters. Simulation results on synthetic data are promising. A preliminary experiment applying this method to microarray data is also reported. The results show that LLP can identify significant patterns. We propose some future tasks to perfect this method.
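The abstract does not spell the algorithm out; one plausible reading of "local linear projection" is that each point is projected onto the principal subspace of its local neighbourhood, which denoises data lying near a low-dimensional manifold. The sketch below implements that reading and is an assumption for illustration, not the paper's exact method:

```python
import numpy as np

def local_linear_project(X, k=15, d=1):
    """Project each point onto the top-d PCA subspace of its k nearest
    neighbours. Hypothetical reading of LLP, for illustration only."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    Y = np.empty_like(X)
    for i in range(len(X)):
        nb = X[np.argsort(D[i])[:k]]        # neighbourhood incl. the point
        mu = nb.mean(axis=0)
        _, _, Vt = np.linalg.svd(nb - mu, full_matrices=False)
        P = Vt[:d].T @ Vt[:d]               # projector onto local subspace
        Y[i] = mu + P @ (X[i] - mu)
    return Y

# Noisy samples near the line y = 2x: projection shrinks the noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
X = np.column_stack([t, 2 * t]) + 0.05 * rng.normal(size=(200, 2))
Y = local_linear_project(X)
print(np.abs(X[:, 1] - 2 * X[:, 0]).mean(),
      np.abs(Y[:, 1] - 2 * Y[:, 0]).mean())
```

The mean residual from the true line is smaller after projection, which matches the claimed robustness against uncertainty in the data.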
Quality assessment of dimensionality reduction: Rankbased criteria
 NEUROCOMPUTING
, 2009
On the Effects of Dimensionality on Data Analysis With Neural Networks
, 2003
Abstract

Cited by 8 (1 self)
Modern data analysis often faces high-dimensional data.
Learning topology with the generative gaussian graph and the EM algorithm
 Advances in Neural Information Processing Systems
, 2006
Abstract

Cited by 7 (1 self)
Given a set of points and a set of prototypes representing them, how can one create a graph of the prototypes whose topology accounts for that of the points? This problem had not yet been explored in the framework of statistical learning theory. In this work, we propose a generative model based on the Delaunay graph of the prototypes and the Expectation-Maximization algorithm to learn the parameters. This work is a first step towards the construction of a topological model of a set of points grounded in statistics.
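The starting point of the method, the Delaunay graph of the prototypes, can be read directly off a Delaunay triangulation; the generative model attached to the edges and the EM-based parameter learning described in the paper are omitted from this minimal sketch (the prototype positions here are arbitrary random points):

```python
import numpy as np
from scipy.spatial import Delaunay

# Delaunay graph of a set of prototypes: the edge set of their
# Delaunay triangulation.
rng = np.random.default_rng(0)
prototypes = rng.uniform(size=(20, 2))
tri = Delaunay(prototypes)

edges = set()
for simplex in tri.simplices:            # each simplex is a triangle
    for a in range(3):
        for b in range(a + 1, 3):
            edges.add(tuple(sorted((int(simplex[a]), int(simplex[b])))))
print(len(edges), "Delaunay edges among", len(prototypes), "prototypes")
```

The paper's contribution is then to decide, statistically, which of these candidate edges reflect the topology of the underlying point set.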
Speech dimensionality analysis on hypercubical selforganizing maps
 Neural Processing Letters
, 2001
Abstract

Cited by 5 (0 self)
Abstract. The problem of finding the intrinsic dimension of speech is addressed in this paper. A structured vector quantization lattice, the Self-Organizing Map (SOM), is used as a projection space for the data. The goal is to find a hypercubical SOM lattice where the sequences of projected speech feature vectors form continuous trajectories. The effect of varying the dimension of the lattice is investigated using feature vector sequences computed from the TIMIT database.
Locally Linear Embedding versus Isotop
 Proceedings of ESANN’2003, 11th European Symposium on Artificial Neural Networks
, 2003
Abstract

Cited by 4 (2 self)
Recently, a new method intended to realize conformal mappings has been published. Called Locally Linear Embedding (LLE), this method can map high-dimensional data lying on a manifold to a representation of lower dimensionality that preserves the angles. Although LLE is claimed to solve problems that are usually managed by neural networks like Kohonen's Self-Organizing Maps (SOMs), the method reduces to an elegant eigenproblem with desirable properties (no parameter tuning, no local minima, etc.). The purpose of this paper is to compare the capabilities of LLE with a newly developed neural method called Isotop, based on ideas like neighborhood preservation, which has been the key to the SOMs' success. To illustrate the differences between the algebraic and the neural approach, LLE and Isotop are first briefly described and then compared on well-known dimensionality reduction problems.
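For reference, the eigenproblem at the heart of LLE can be sketched in a few lines: reconstruction weights are computed per neighbourhood, and the embedding is read off the bottom eigenvectors of (I - W)^T (I - W). This is a bare-bones version of the standard Roweis-Saul algorithm with illustrative parameter values, not the exact implementation compared in the paper:

```python
import numpy as np

def lle(X, k=6, d=1, reg=1e-3):
    """Bare-bones Locally Linear Embedding (standard algorithm sketch)."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        nb = np.argsort(D[i])[1:k + 1]
        Z = X[nb] - X[i]                     # centred neighbours
        C = Z @ Z.T                          # local Gram matrix
        C += reg * np.trace(C) * np.eye(k)   # regularise (rank-deficient C)
        w = np.linalg.solve(C, np.ones(k))
        W[i, nb] = w / w.sum()               # reconstruction weights
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:d + 1]                  # skip the constant eigenvector

# Unroll a half-circle arc (a 1-D manifold in 2-D) into one dimension.
t = np.linspace(0, np.pi, 100)
X = np.column_stack([np.cos(t), np.sin(t)])
Y = lle(X)
print(Y.shape)
```

The absence of iterative optimization in this pipeline is precisely the "no local minima" property the abstract contrasts with neural methods like Isotop.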
How to Project `Circular' Manifolds Using Geodesic Distances?
, 2004
Abstract

Cited by 2 (0 self)
Recent papers have clearly shown the advantage of using the geodesic distance instead of the Euclidean one in methods performing nonlinear dimensionality reduction by means of distance preservation.