Results 1–10 of 10
An evaluation of the use of Multidimensional Scaling for understanding brain connectivity
Philosophical Transactions of the Royal Society, Series B, 1994
Abstract

Cited by 9 (2 self)
A large amount of data is now available about the pattern of connections between brain regions. Computational methods are increasingly relevant for uncovering structure in such datasets. There has been recent interest in the use of Nonmetric Multidimensional Scaling (NMDS) for such analysis (Young, 1992, 1993; Scannell & Young, 1993). NMDS produces a spatial representation of the "dissimilarities" between a number of entities. Normally, it is applied to data matrices containing a large number of levels of dissimilarity, whereas for connectivity data there is a very small number. We address the suitability of NMDS for this case. Systematic numerical studies are presented to evaluate the ability of this method to reconstruct known geometrical configurations from dissimilarity data possessing few levels. In this case there is a strong bias for NMDS to produce annular configurations, whether or not such structure exists in the original data. Using a connectivity dataset derived from the pr...
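The abstract above concerns nonmetric MDS (NMDS); as a point of reference, the simpler classical (metric) MDS can be sketched in a few lines. This is a generic illustration of recovering a spatial configuration from a dissimilarity matrix, not the NMDS procedure the paper evaluates, and the data here are synthetic:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (metric) MDS: embed n points in k dimensions from an
    n x n matrix D of pairwise Euclidean distances, via double-centering
    and eigendecomposition. (Illustrative only; the paper studies the
    nonmetric variant, NMDS.)"""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)           # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]         # keep the top-k components
    scale = np.sqrt(np.clip(vals[idx], 0, None))
    return vecs[:, idx] * scale              # n x k configuration

# Recover a known 2-D configuration from its exact distance matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Y = classical_mds(D, k=2)
D_hat = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
print(np.allclose(D, D_hat, atol=1e-8))  # True: exact distances are recovered up to rotation
```

With exact Euclidean input the reconstruction is essentially perfect; the paper's point is that NMDS on dissimilarities with very few distinct levels (such as binary connectivity data) behaves quite differently.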
The proximity structure of achromatic surface colors and the impossibility of asymmetric lightness matching
2006
Graph Layout Techniques and Multidimensional Data Analysis
2000
Abstract

Cited by 4 (0 self)
In this paper we explore the relationship between multivariate data analysis and techniques for graph drawing or graph layout. Although both classes of techniques were created for quite different purposes, we find many common principles and implementations. We start with a discussion of the data analysis techniques, in particular multiple correspondence analysis, multidimensional scaling, parallel coordinate plotting, and seriation. We then discuss parallels in the graph layout literature.
Data Mining of Early Day Motions and Multiscale Variance Stabilisation of Count Data
2008
Abstract

Cited by 1 (0 self)
A dissertation submitted to the University of Bristol in accordance with the requirements
A Scaled Logistic Quasi-Simplex is a Football and its Stress is not a Function of the Number of Points
1977
Abstract
It is shown that logistically distributed responses scaled in two or more dimensions resemble a football, not a horseshoe. Further, the stress of the scaled solution is a function of the slope of the response functions, not the number of responses. Kendall (1971) and others have noted that a quasi-simplex scaled in two or more dimensions frequently resembles a horseshoe because extreme distances are usually truncated in real data. We show here that for data fitting a logistic response model (e.g., Rasch, 1960), squared Euclidean distances between items scale as a football, not a horseshoe. Further, the stress of the scaled model is a function of the slope of the item response functions, not the number of items. We use test theory terminology to motivate the notation; other logistic response models fit with minor modification. Consider a collection of N subjects divided according to ability into g groups consisting of n_1, n_2, ..., n_g subjects, respectively. Assume they are tested on m items arranged in increasing order of difficulty. Let a_ijk be a random variable which is the score (0 or
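A minimal sketch of the setup this abstract describes, with assumed parameters (the item difficulties, ability grid, and common slope below are made up for illustration): logistic (Rasch-type) item response curves are evaluated over a grid of ability groups, and squared Euclidean distances between items are computed from those curves.

```python
import math

def logistic_curve(difficulty, abilities, slope=1.0):
    """Rasch-type item response curve: P(correct | ability) at each grid point."""
    return [1.0 / (1.0 + math.exp(-slope * (t - difficulty))) for t in abilities]

# Items in increasing order of difficulty, evaluated over ability groups.
abilities = [g / 2.0 for g in range(-8, 9)]     # assumed ability grid
difficulties = [-2.0, -1.0, 0.0, 1.0, 2.0]       # assumed item difficulties
curves = [logistic_curve(b, abilities) for b in difficulties]

def sq_dist(p, q):
    """Squared Euclidean distance between two response curves."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

# Items of similar difficulty are closer than items far apart in difficulty,
# which is what produces the curved ("football") configuration when scaled.
d_near = sq_dist(curves[0], curves[1])
d_far = sq_dist(curves[0], curves[4])
print(d_near < d_far)  # True
```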
Seriation in the Presence of Errors: A Factor 16 Approximation Algorithm for l∞-Fitting Robinson Structures to Distances
ALGORITHMICA, 2007
Abstract
The classical seriation problem consists in finding a permutation of the rows and the columns of the distance (or, more generally, dissimilarity) matrix d on a finite set X so that small values are concentrated as close as possible to the main diagonal, whereas large values fall as far from it as possible. This goal is best achieved by considering the Robinson property: a distance dR on X is Robinsonian if its matrix can be symmetrically permuted so that its elements do not decrease when moving away from the main diagonal along any row or column. If the distance d fails to satisfy the Robinson property, then we are led to the problem of finding a reordering of d which is as close as possible to a Robinsonian distance. In this paper, we present a factor 16 approximation algorithm for the following NP-hard fitting problem: given a finite set X and a dissimilarity d on X, we wish to find a Robinsonian dissimilarity dR on X minimizing the l∞-error ‖d − dR‖∞ = max_{x,y ∈ X} |d(x,y) − dR(x,y)| between d and dR.
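The Robinson property the abstract defines can be checked directly. A minimal sketch, assuming a symmetric dissimilarity matrix given as nested lists; the function name is illustrative, and the i ≤ j ≤ k condition used here is a standard equivalent form of the definition, not code from the paper:

```python
def is_robinsonian(d):
    """Check the Robinson property for a symmetric dissimilarity matrix d:
    entries must not decrease when moving away from the main diagonal
    along any row or column. Equivalently, for all indices i <= j <= k:
    d[i][k] >= max(d[i][j], d[j][k])."""
    n = len(d)
    for i in range(n):
        for j in range(i, n):
            for k in range(j, n):
                if d[i][k] < max(d[i][j], d[j][k]):
                    return False
    return True

# A line metric d(i, j) = |i - j| is Robinsonian in its natural order...
line = [[abs(i - j) for j in range(5)] for i in range(5)]
print(is_robinsonian(line))      # True

# ...but not after permuting rows and columns; seriation aims to
# recover the ordering that restores the Robinson structure.
perm = [2, 0, 4, 1, 3]
shuffled = [[line[perm[i]][perm[j]] for j in range(5)] for i in range(5)]
print(is_robinsonian(shuffled))  # False
```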
and
2010
Abstract
Word list with short explanations in the areas of neuroinformatics and statistics.
abundance matrix: A data matrix X(N × P) that contains actual numbers of occurrences or proportions (Kendall, 1971) according to (Mardia et al., 1979, exercise 13.4.5).
activation function: The nonlinear function in the output of a unit in a neural network. Can be a threshold function, a piecewise linear function, or a sigmoidal function, e.g., hyperbolic tangent or logistic sigmoid. If the activation function is on the output of the neural network it can be regarded as a link function.
active learning: 1: The same as focusing (MacKay, 1992b). 2: supervised learning (Haykin,
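The activation functions named in the glossary entry above can be sketched in a few lines (the function names here are illustrative, not from the word list):

```python
import math

def threshold(x):
    """Step activation: 1 if the input is non-negative, else 0."""
    return 1.0 if x >= 0 else 0.0

def logistic(x):
    """Logistic sigmoid: maps the reals to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh_act(x):
    """Hyperbolic tangent: maps the reals to (-1, 1)."""
    return math.tanh(x)

for f in (threshold, logistic, tanh_act):
    print(f.__name__, f(0.0))
# threshold 1.0, logistic 0.5, tanh_act 0.0
```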
Microarray learning with ABC
Advance Access publication on June 14, 2007
Abstract
Standard clustering algorithms, when applied to DNA microarray data, often tend to produce erroneous clusters. A major contributor to this divergence is the characteristic feature of microarray data sets that the number of predictors (genes) far exceeds the number of samples by many orders of magnitude, with only a small percentage of predictors being truly informative with regard to the clustering while the rest merely add noise. An additional complication is that the predictors exhibit an unknown complex correlational configuration embedded in a small subspace of the entire predictor space. Under these conditions, standard clustering algorithms fail to find the true clusters even when applied in tandem with some sort of gene filtering or dimension reduction to reduce the number of predictors. We propose, as an alternative, a novel method for unsupervised classification of DNA microarray data. The method, which is based on the idea of aggregating results obtained from an ensemble of randomly resampled data (where both samples and genes are resampled), introduces a way of tilting the procedure so that the ensemble includes minimal representation from less important areas of the gene predictor space. The method produces a measure of dissimilarity between each pair of samples that can be used in conjunction with (a) a method like Ward's procedure to generate a cluster analysis and (b) multidimensional scaling to generate useful visualizations of the data. We call the dissimilarity measures ABC dissimilarities since
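A heavily simplified, generic sketch of the resampling-ensemble idea the abstract describes: subsample genes, cluster the samples each round, and average how often each pair of samples lands in different clusters. This is NOT the paper's ABC procedure; the clustering routine, all parameter values, and the simulated data below are assumptions for illustration only.

```python
import numpy as np

def ensemble_dissimilarity(X, n_rounds=50, n_genes=40, seed=0):
    """Generic resampling-ensemble dissimilarity (not ABC): repeatedly
    subsample genes (columns), run a tiny 2-means clustering on the
    samples (rows), and record the fraction of rounds in which each
    pair of samples is assigned to different clusters."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    disagree = np.zeros((n, n))
    for _ in range(n_rounds):
        cols = rng.choice(X.shape[1], size=n_genes, replace=False)
        Z = X[:, cols]
        # Tiny Lloyd's k-means with k=2, initialized from two random samples.
        centers = Z[rng.choice(n, size=2, replace=False)]
        for _ in range(10):
            dist = np.linalg.norm(Z[:, None] - centers[None], axis=-1)
            labels = dist.argmin(axis=1)
            for c in range(2):
                if np.any(labels == c):
                    centers[c] = Z[labels == c].mean(axis=0)
        disagree += labels[:, None] != labels[None, :]
    return disagree / n_rounds

# Simulated data: two well-separated groups of 10 samples; only the
# first 80 of 200 "genes" carry signal, the rest are pure noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 200))
X[:10, :80] += 5.0
D = ensemble_dissimilarity(X)
print(D[0, 1] < D[0, 15])  # within-group pairs are less dissimilar than between-group pairs
```

The resulting matrix D can then be fed to hierarchical clustering or multidimensional scaling, as the abstract suggests for its (different) ABC dissimilarities.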