Results 1–10 of 689,934
Sparser Johnson-Lindenstrauss Transforms
Abstract: "... We give two different constructions for dimensionality reduction in ℓ2 via linear mappings that are sparse: only an O(ε)-fraction of entries in each column of our embedding matrices are nonzero to achieve distortion 1 + ε with high probability, while still achieving the asymptotically optimal number ..."
Cited by 30 (8 self)
Sparser Johnson-Lindenstrauss Transforms
Abstract: "... We give two different and simple constructions for dimensionality reduction in ℓ2 via linear mappings that are sparse: only an O(ε)-fraction of entries in each column of our embedding matrices are nonzero to achieve distortion 1 + ε with high probability, while still achieving the asymptotically op ..."
Sparser Johnson-Lindenstrauss Transforms
Abstract: "... We give two different Johnson-Lindenstrauss distributions, each with column sparsity s = Θ(ε⁻¹ log(1/δ)) and embedding into optimal dimension k = O(ε⁻² log(1/δ)) to achieve distortion 1 ± ε with probability 1 − δ. That is, only an O(ε)-fraction of entries are nonzero in each embedding matrix in the s ..."
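The abstract above describes embeddings in which each column has exactly s = Θ(ε⁻¹ log(1/δ)) nonzero entries. A minimal sketch of one distribution of this shape (a block construction: the k rows are split into s blocks, and each column receives one random ±1/√s entry per block) might look like the following; the function name, the chosen sizes, and the empirical norm check are illustrative, not the papers' reference code or constants.

```python
import numpy as np

def sparse_jl_matrix(k, d, s, rng):
    """Block construction: split the k rows into s blocks of size k/s;
    in every column, place one random +-1/sqrt(s) entry per block,
    so each column has exactly s nonzeros and unit Euclidean norm."""
    assert k % s == 0
    block = k // s
    A = np.zeros((k, d))
    for j in range(d):  # fill one column at a time
        # one uniformly random row inside each of the s blocks
        rows = rng.integers(0, block, size=s) + block * np.arange(s)
        signs = rng.choice([-1.0, 1.0], size=s)
        A[rows, j] = signs / np.sqrt(s)
    return A

rng = np.random.default_rng(0)
d, k, s = 10_000, 512, 16  # illustrative sizes, not tuned to any particular eps, delta
A = sparse_jl_matrix(k, d, s, rng)

x = rng.standard_normal(d)
ratio = np.linalg.norm(A @ x) / np.linalg.norm(x)  # concentrates near 1
print(f"norm ratio after embedding: {ratio:.3f}")
```

Since every column has unit norm, E‖Ax‖² = ‖x‖² for any fixed x, and the printed ratio should be close to 1; the papers' contribution is proving the (1 ± ε, δ) concentration at this sparsity.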
A Sparser Johnson-Lindenstrauss Transform
Abstract: "... We give a Johnson-Lindenstrauss transform with column sparsity s = Θ(ε⁻¹ log(1/δ)) into optimal dimension k = O(ε⁻² log(1/δ)) to achieve distortion 1 ± ε with success probability 1 − δ. This is the first distribution to provide an asymptotic improvement over the Θ(k) sparsity bound for all values of ε ..."
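To see why column sparsity s = Θ(ε⁻¹ log(1/δ)) against dimension k = Θ(ε⁻² log(1/δ)) means only an O(ε)-fraction of each column is nonzero, one can plug in concrete values; the constant factors below are arbitrary placeholders, not the constants from these papers.

```python
import math

def jl_params(eps, delta, c_s=1.0, c_k=1.0):
    """Illustrative Theta(.) parameter choices with placeholder constants c_s, c_k."""
    s = math.ceil(c_s * math.log(1 / delta) / eps)     # column sparsity
    k = math.ceil(c_k * math.log(1 / delta) / eps**2)  # target dimension
    return s, k

for eps in (0.5, 0.1, 0.01):
    s, k = jl_params(eps, delta=1e-6)
    # the nonzero fraction per column, s/k, shrinks proportionally to eps
    print(f"eps={eps}: s={s}, k={k}, fraction s/k={s / k:.3f}")
```

The dense JL transform has s = k nonzeros per column; here s/k ≈ ε, which is the asymptotic improvement over the Θ(k) bound claimed in the abstract.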
A sparse Johnson-Lindenstrauss transform
In Proceedings of the 42nd ACM Symposium on Theory of Computing (STOC), 2010