Results 1 – 4 of 4
Derandomized Dimensionality Reduction with Applications
 In Proc. 13th ACM-SIAM Sympos. Discrete Algorithms, 2002
"... The JohnsonLindenstrauss lemma provides a way to map a number of points in highdimensional space into a lowdimensional space, with only a small distortion of the distances between the points. The proofs of the lemma are nonconstructive: they show that a random mapping induces small distortions w ..."
Abstract

Cited by 29 (3 self)
The Johnson-Lindenstrauss lemma provides a way to map a number of points in high-dimensional space into a low-dimensional space, with only a small distortion of the distances between the points. The proofs of the lemma are non-constructive: they show that a random mapping induces small distortions with high probability, but they do not construct the actual mapping. In this paper, we provide a procedure that constructs such a mapping deterministically in time almost linear in the number of distances to preserve times the dimension of the original space. We then use that result (together with Nisan's pseudorandom generator) to obtain an efficient derandomization of several approximation algorithms based on semidefinite programming.
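The randomized mapping that this paper derandomizes can be sketched as follows: project n points from dimension d down to k dimensions with a random Gaussian matrix and check the distortion of a pairwise distance. The dimensions, epsilon, and seed below are illustrative choices, not values from the paper.

```python
# Minimal sketch of the (randomized) Johnson-Lindenstrauss mapping.
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 50, 1000, 0.5
# Standard JL target dimension guaranteeing (1 +/- eps) distortion w.h.p.
k = int(np.ceil(4 * np.log(n) / (eps**2 / 2 - eps**3 / 3)))

X = rng.normal(size=(n, d))               # n points in high dimension
R = rng.normal(size=(d, k)) / np.sqrt(k)  # random projection, norm-preserving in expectation
Y = X @ R                                 # projected points

# Distortion of one pairwise distance under the projection
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
ratio = proj / orig
```

With high probability the ratio lies in (1 - eps, 1 + eps) for every pair simultaneously; the paper's contribution is constructing such a mapping deterministically rather than by sampling R.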
Compressed Fisher Linear Discriminant Analysis: Classification of Randomly Projected Data
 In Proceedings of the 16th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2010
"... We consider random projections in conjunction with classification,specificallytheanalysisofFisher’sLinearDiscriminant (FLD) classifier in randomly projected data spaces. Unlike previous analyses of other classifiers in this setting, we avoid the unnatural effects that arise when one insists that all ..."
Abstract

Cited by 9 (7 self)
We consider random projections in conjunction with classification, specifically the analysis of Fisher's Linear Discriminant (FLD) classifier in randomly projected data spaces. Unlike previous analyses of other classifiers in this setting, we avoid the unnatural effects that arise when one insists that all pairwise distances are approximately preserved under projection. We impose no sparsity or underlying low-dimensional structure constraints on the data; we instead take advantage of the class structure inherent in the problem. We obtain a reasonably tight upper bound on the estimated misclassification error on average over the random choice of the projection, which, in contrast to early distance-preserving approaches, tightens in a natural way as the number of training examples increases. It follows that, for good generalisation of FLD, the required projection dimension grows logarithmically with the number of classes. We also show that the error contribution of a covariance misspecification is always no worse in the low-dimensional space than in the initial high-dimensional space. We contrast our findings to previous related work, and discuss our insights.
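The setting analyzed above can be sketched as follows: learn Fisher's Linear Discriminant entirely in a randomly projected space and classify there. The two-Gaussian data, dimensions, and regularization constant are illustrative assumptions, not the paper's construction.

```python
# Hedged sketch: FLD learned and applied in a randomly projected space.
import numpy as np

rng = np.random.default_rng(1)
d, k, n_per = 200, 20, 100

# Two Gaussian classes in the original high-dimensional space (assumed data model)
mu0, mu1 = np.zeros(d), np.full(d, 0.5)
X0 = rng.normal(mu0, 1.0, size=(n_per, d))
X1 = rng.normal(mu1, 1.0, size=(n_per, d))

R = rng.normal(size=(d, k)) / np.sqrt(k)  # random projection matrix
Z0, Z1 = X0 @ R, X1 @ R                   # both learning and classification happen in R^k

m0, m1 = Z0.mean(axis=0), Z1.mean(axis=0)
# Pooled covariance in the projected space, small ridge for numerical stability
S = np.cov(np.vstack([Z0 - m0, Z1 - m1]).T) + 1e-6 * np.eye(k)
w = np.linalg.solve(S, m1 - m0)           # FLD direction
b = -0.5 * w @ (m0 + m1)                  # threshold at the midpoint of the class means

# Empirical misclassification rate on the training data, in the projected space
err0 = (Z0 @ w + b > 0).mean()            # class 0 predicted as class 1
err1 = (Z1 @ w + b <= 0).mean()           # class 1 predicted as class 0
err = (err0 + err1) / 2
```

Because FLD only needs the class means and pooled covariance to be well estimated after projection, rather than all pairwise distances, the projected dimension k can be far smaller than distance-preservation arguments would require.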
High Performance Algorithms for Multiple Streaming Time Series
, 2006
"... “To my parents and my wife, for all they did for me” Dedicated to all that helped me v Acknowledgements This dissertation would never have materialized without the contribution of many individuals to whom I have the pleasure of expressing my appreciation and gratitude. First of all, I gratefully ack ..."
Abstract

Cited by 1 (0 self)
“To my parents and my wife, for all they did for me” (dedicated to all who helped me). Acknowledgements: This dissertation would never have materialized without the contribution of many individuals to whom I have the pleasure of expressing my appreciation and gratitude. First of all, I gratefully acknowledge the persistent support and encouragement from my advisor, Professor Dennis Shasha. He provided constant academic guidance and inspired many of the ideas presented here. Dennis is a superb teacher and a great friend. Secondly, I wish to express my deep gratitude to Professor Richard Cole. He has been offering his generous help since the beginning of my Ph.D. study, and it is not limited to academic research. In particular, his help was indispensable in getting me through my first semester at NYU, four extremely tough months.
A bound on the performance of LDA in randomly projected data spaces
"... We consider the problem of classification in nonadaptive dimensionality reduction. Specifically, we bound the increase in classification error of Fisher’s Linear Discriminant classifier resulting from randomly projecting the high dimensional data into a lower dimensional space and both learning the ..."
Abstract
We consider the problem of classification under non-adaptive dimensionality reduction. Specifically, we bound the increase in classification error of Fisher's Linear Discriminant classifier resulting from randomly projecting the high-dimensional data into a lower-dimensional space and both learning the classifier and performing the classification in the projected space. Our bound is reasonably tight, and unlike existing bounds on learning from randomly projected data, it becomes tighter as the quantity of training data increases without requiring any sparsity structure from the data.