Results 1–5 of 5
Derandomized Dimensionality Reduction with Applications
 In Proc. 13th ACM-SIAM Sympos. Discrete Algorithms, 2002
Abstract

Cited by 29 (3 self)
The Johnson-Lindenstrauss lemma provides a way to map a number of points in high-dimensional space into a low-dimensional space, with only a small distortion of the distances between the points. The proofs of the lemma are non-constructive: they show that a random mapping induces small distortions with high probability, but they do not construct the actual mapping. In this paper, we provide a procedure that constructs such a mapping deterministically in time almost linear in the number of distances to preserve times the dimension of the original space. We then use that result (together with Nisan's pseudorandom generator) to obtain an efficient derandomization of several approximation algorithms based on semidefinite programming.
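The randomized mapping that this paper derandomizes can be sketched in a few lines: project with a Gaussian matrix and check that pairwise distances are roughly preserved. A minimal numpy illustration, with dimensions, seed, and ε chosen for the example (not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, eps = 20, 1000, 0.5
# standard JL target dimension bound for distortion eps
k = int(np.ceil(4 * np.log(n) / (eps**2 / 2 - eps**3 / 3)))

X = rng.normal(size=(n, d))                # n points in R^d
R = rng.normal(size=(d, k)) / np.sqrt(k)   # Gaussian random projection
Y = X @ R                                  # projected points in R^k

# measure the worst pairwise-distance distortion
min_ratio, max_ratio = np.inf, 0.0
for i in range(n):
    for j in range(i + 1, n):
        ratio = np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
        min_ratio = min(min_ratio, ratio)
        max_ratio = max(max_ratio, ratio)

print(k, min_ratio, max_ratio)
```

With high probability the squared-distance ratios land in [1 − ε, 1 + ε]; the paper's contribution is constructing such a mapping deterministically rather than by sampling R.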
Compressed Fisher Linear Discriminant Analysis: Classification of Randomly Projected Data
 In Proceedings of the 16th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2010
Abstract

Cited by 11 (7 self)
We consider random projections in conjunction with classification, specifically the analysis of Fisher’s Linear Discriminant (FLD) classifier in randomly projected data spaces. Unlike previous analyses of other classifiers in this setting, we avoid the unnatural effects that arise when one insists that all pairwise distances are approximately preserved under projection. We impose no sparsity or underlying low-dimensional structure constraints on the data; we instead take advantage of the class structure inherent in the problem. We obtain a reasonably tight upper bound on the estimated misclassification error on average over the random choice of the projection, which, in contrast to early distance-preserving approaches, tightens in a natural way as the number of training examples increases. It follows that, for good generalisation of FLD, the required projection dimension grows logarithmically with the number of classes. We also show that the error contribution of a covariance misspecification is always no worse in the low-dimensional space than in the initial high-dimensional space. We contrast our findings to previous related work, and discuss our insights.
High Performance Algorithms for Multiple Streaming Time
 York University
, 2006
Manjish Pal
Abstract
Statistical distance measures have found wide applicability in information retrieval tasks that typically involve high-dimensional datasets. In order to reduce the storage space and ensure efficient performance of queries, dimensionality reduction while preserving the interpoint similarity is highly desirable. In this paper, we investigate various statistical distance measures from the point of view of discovering low-distortion embeddings into low-dimensional spaces. More specifically, we consider the Mahalanobis distance measure, the Bhattacharyya class of divergences and the Kullback-Leibler divergence. We present a dimensionality reduction method based on the Johnson-Lindenstrauss Lemma for the Mahalanobis measure that achieves arbitrarily low distortion. By using the Johnson-Lindenstrauss Lemma again, we …
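One standard way to make the JL Lemma apply to the Mahalanobis measure is to whiten the points first, which turns Mahalanobis distance into Euclidean distance, and then random-project; whether this matches the paper's construction is an assumption here, and the covariance, dimensions, and seed below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
d, k, n = 100, 50, 10

# a random SPD covariance S; factor its inverse as S^{-1} = L @ L.T
A = rng.normal(size=(d, d))
S = A @ A.T + d * np.eye(d)
L = np.linalg.cholesky(np.linalg.inv(S))

X = rng.normal(size=(n, d))
W = X @ L                                   # whitening: Mahalanobis in X == Euclidean in W
R = rng.normal(size=(d, k)) / np.sqrt(k)
Y = W @ R                                   # JL projection of the whitened points

def mahalanobis(u, v):
    diff = u - v
    return np.sqrt(diff @ np.linalg.inv(S) @ diff)

# compare original Mahalanobis distances to Euclidean distances after projection
ratios = [np.linalg.norm(Y[i] - Y[j]) / mahalanobis(X[i], X[j])
          for i in range(n) for j in range(i + 1, n)]
print(min(ratios), max(ratios))
```

Because ||L.T @ (u - v)|| equals the Mahalanobis distance exactly, the only distortion comes from the JL projection itself, so the usual (1 ± ε) guarantee carries over to the Mahalanobis measure.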
A bound on the performance of LDA in randomly projected data spaces
Abstract
We consider the problem of classification in non-adaptive dimensionality reduction. Specifically, we bound the increase in classification error of Fisher’s Linear Discriminant classifier resulting from randomly projecting the high-dimensional data into a lower-dimensional space and both learning the classifier and performing the classification in the projected space. Our bound is reasonably tight, and unlike existing bounds on learning from randomly projected data, it becomes tighter as the quantity of training data increases without requiring any sparsity structure from the data.