## Accelerated dense random projections (2009)

Citations: 8 (0 self)

### BibTeX

```bibtex
@TECHREPORT{Liberty09accelerateddense,
  author      = {Edo Liberty},
  title       = {Accelerated dense random projections},
  institution = {},
  year        = {2009}
}
```

### Abstract

In dimensionality reduction, a set of points in R^d is mapped into R^k, with the target dimension k smaller than the original dimension d, while distances between all pairs of points are approximately preserved. Currently popular methods for achieving this involve random projection: choosing a linear mapping (a k × d matrix) from a distribution that is independent of the input points. Applying a mapping chosen according to this distribution is shown to give the desired property with at least constant probability. The contributions in this thesis are twofold. First, we provide a framework for designing such distributions. Second, we derive efficient random projection algorithms using this framework. Our results achieve performance exceeding that of existing approaches. When the target dimension is significantly smaller than the original dimension, we gain a significant improvement by designing efficient algorithms for applying certain linear algebraic transforms.
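As a hedged illustration of the baseline technique the abstract describes (the dense Gaussian projection that the thesis accelerates, not the accelerated transforms themselves), a random projection can be sketched as follows; the function name and the N(0, 1/k) scaling convention are ours:

```python
import math
import random

def gaussian_random_projection(points, k, seed=0):
    """Map points in R^d to R^k with a dense k x d Gaussian matrix.

    Entries are i.i.d. N(0, 1/k), so squared lengths are preserved in
    expectation; applying the matrix costs O(kd) operations per point.
    """
    rng = random.Random(seed)
    d = len(points[0])
    psi = [[rng.gauss(0.0, 1.0 / math.sqrt(k)) for _ in range(d)]
           for _ in range(k)]
    return [[sum(psi[i][j] * p[j] for j in range(d)) for i in range(k)]
            for p in points]
```

With k = Θ(log(n)/ε²), the Johnson-Lindenstrauss lemma guarantees that all pairwise distances among n points are preserved up to a factor of 1 ± ε with at least constant probability.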

### Citations

1941 | The theory of error-correcting codes
- MacWilliams, Sloane
- 1977
Citation Context: ...ts a 4-wise independent code matrix of size k × m, where m = Θ(k²). One such family of matrices is known as binary dual BCH codes of designed distance 5. Details of the construction can be found in [38]. The following is known as an interpolation theorem in the theory of Banach spaces. For a proof, the reader is referred to [39]. Theorem 5.1.1 (Riesz-Thorin). Let A be a matrix such that ‖A‖_{p1→r1} ≤ C...

799 | Matrix Multiplication via Arithmetic Progressions
- Coppersmith, Winograd
- 1990
Citation Context: ...d [44]. These methods are reported to be practically more efficient than naïve implementations. Classic positive results, achieving a polynomial speedup, by Strassen [45] and Coppersmith and Winograd [46], as well as lower bounds [47] for general matrix-matrix multiplications, are known. These, however, do not extend to matrix-vector multiplication. We return to matrix-vector operations. Since every ent...

713 | Approximate nearest neighbors: towards removing the curse of dimensionality, Proceedings of the thirtieth annual ACM symposium on Theory of computing
- Indyk, Motwani
- 1998
Citation Context: ...Linear Algebra, a projection matrix P is a square matrix such that P = P². Here, a rectangular matrix Ψ is said to be a projection if ΨᵀΨ = (ΨᵀΨ)². ...approximate nearest neighbor searching [5, 6, 7, 8, 9], learning [10, 11, 12], matrix low rank approximation [13, 14, 15, 16, 17, 18, 19, 20], other linear algebraic operations [21, 22, 23, 24], and many other algorithms and applications, e.g., [25, 26, 2...
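The definition quoted in the context above (a rectangular Ψ is a projection when ΨᵀΨ = (ΨᵀΨ)²) can be checked numerically; the following is a minimal pure-Python sketch with helper names of our own choosing, using a Ψ with orthonormal rows:

```python
def matmul(a, b):
    """Naive product of small dense matrices given as lists of rows."""
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(row) for row in zip(*a)]

# A 2 x 3 matrix with orthonormal rows: Psi^T Psi is then a rank-2
# orthogonal projector on R^3, so it equals its own square.
psi = [[1.0, 0.0, 0.0],
       [0.0, 1.0, 0.0]]
g = matmul(transpose(psi), psi)   # Psi^T Psi, a 3 x 3 matrix
assert matmul(g, g) == g          # the rectangular projection property
```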

670 | Learning quickly when irrelevant attributes abound: a new linear-threshold algorithm
- Littlestone
- 1988
Citation Context: ...ion matrix P is a square matrix such that P = P². Here, a rectangular matrix Ψ is said to be a projection if ΨᵀΨ = (ΨᵀΨ)². ...approximate nearest neighbor searching [5, 6, 7, 8, 9], learning [10, 11, 12], matrix low rank approximation [13, 14, 15, 16, 17, 18, 19, 20], other linear algebraic operations [21, 22, 23, 24], and many other algorithms and applications, e.g., [25, 26, 27, 28, 29, 30]. 2.2 Cla...

532 | Basic linear algebra subprograms for Fortran usage
- Lawson, Hanson, et al.
- 1979
Citation Context: ...sts of two nested loops ordered with respect to memory allocation to minimize cache faults. The second is an optimized matrix-vector code: we test ourselves against LAPACK, which uses BLAS subroutines [49, 50]. The third is, of course, the mailman algorithm, not including the O(mn) preprocessing stage. Although the complexity of the first two methods is O(n log(n)) and that of the mailman algorithm is O(n)...

446 | An extended set of Fortran basic linear algebra subroutines
- DONGARRA, DUCROZ, et al.
- 1988
Citation Context: ...sts of two nested loops ordered with respect to memory allocation to minimize cache faults. The second is an optimized matrix-vector code: we test ourselves against LAPACK, which uses BLAS subroutines [49, 50]. The third is, of course, the mailman algorithm, not including the O(mn) preprocessing stage. Although the complexity of the first two methods is O(n log(n)) and that of the mailman algorithm is O(n)...

405 | Extensions of Lipschitz mappings into a Hilbert space
- Johnson, Lindenstrauss
- 1984

373 | Gaussian elimination is not optimal
- Strassen
- 1969
Citation Context: ...multiplications were also found [44]. These methods are reported to be practically more efficient than naïve implementations. Classic positive results, achieving a polynomial speedup, by Strassen [45] and Coppersmith and Winograd [46], as well as lower bounds [47] for general matrix-matrix multiplications, are known. These, however, do not extend to matrix-...

248 | Latent semantic indexing: a probabilistic analysis
- Papadimitriou, Raghavan, et al.
Citation Context: ..., 7, 8, 9], learning [10, 11, 12], matrix low rank approximation [13, 14, 15, 16, 17, 18, 19, 20], other linear algebraic operations [21, 22, 23, 24], and many other algorithms and applications, e.g., [25, 26, 27, 28, 29, 30]. 2.2 Classic results, review of known JL distributions. The construction of Johnson and Lindenstrauss is surprisingly simple. They proposed choosing Ψ uniformly at random from the space of projection...

195 | Interpolation Spaces
- Bergh, Löfström
- 1976
Citation Context: ...des of designed distance 5. Details of the construction can be found in [38]. The following is known as an interpolation theorem in the theory of Banach spaces. For a proof, the reader is referred to [39]. Theorem 5.1.1 (Riesz-Thorin). Let A be a matrix such that ‖A‖_{p1→r1} ≤ C1 and ‖A‖_{p2→r2} ≤ C2 for some norm indices p1, r1, p2, r2. Let λ be a real number in the interval [0, 1], and let p, r be such th...
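The theorem statement quoted above is truncated; its standard form (as found in Bergh and Löfström, and consistent with the norm indices named in the snippet, though not necessarily the thesis's exact wording) reads:

```latex
\textbf{Theorem (Riesz--Thorin).}
Let $A$ satisfy $\|A\|_{p_1 \to r_1} \le C_1$ and
$\|A\|_{p_2 \to r_2} \le C_2$.
For $\lambda \in [0,1]$ define $p, r$ by
\[
  \frac{1}{p} = \frac{\lambda}{p_1} + \frac{1-\lambda}{p_2},
  \qquad
  \frac{1}{r} = \frac{\lambda}{r_1} + \frac{1-\lambda}{r_2}.
\]
Then $\|A\|_{p \to r} \le C_1^{\lambda}\, C_2^{1-\lambda}$.
\]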

189 | Efficient search for approximate nearest neighbor in high dimensional spaces
- Kushilevitz, Ostrovsky, et al.
- 1998
Citation Context: ...Linear Algebra, a projection matrix P is a square matrix such that P = P². Here, a rectangular matrix Ψ is said to be a projection if ΨᵀΨ = (ΨᵀΨ)². ...approximate nearest neighbor searching [5, 6, 7, 8, 9], learning [10, 11, 12], matrix low rank approximation [13, 14, 15, 16, 17, 18, 19, 20], other linear algebraic operations [21, 22, 23, 24], and many other algorithms and applications, e.g., [25, 26, 2...

176 | Fast Monte-Carlo algorithms for finding low-rank approximations
- Frieze, Kannan, et al.
- 1998
Citation Context: ...= P². Here, a rectangular matrix Ψ is said to be a projection if ΨᵀΨ = (ΨᵀΨ)². ...approximate nearest neighbor searching [5, 6, 7, 8, 9], learning [10, 11, 12], matrix low rank approximation [13, 14, 15, 16, 17, 18, 19, 20], other linear algebraic operations [21, 22, 23, 24], and many other algorithms and applications, e.g., [25, 26, 27, 28, 29, 30]. 2.2 Classic results, review of known JL distributions. The construction...

147 | Database-friendly random projections: Johnson-Lindenstrauss with binary coins
- Achlioptas
- 2003
Citation Context: ...n)/ε²). The two previous chapters were dedicated to crafting different JL distributions for A such that it still admits a fast transform. We recall the result of Achlioptas. Lemma 7.3.1 (Achlioptas [33]). A k × d matrix A such that the A(i, j) are i.i.d. and A(i, j) = +1/√k w.p. 1/2, −1/√k w.p. 1/2 (7.1) exhibits the JL property. Clearly a naive application of A to each vector requires O(mn) operat...
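The distribution in Lemma 7.3.1 is concrete enough to sketch directly. Below is a hedged Python illustration (function names are ours, not the thesis's) of drawing such a matrix and applying it naively, at the per-vector cost the quoted passage notes and which Chapter 7 then reduces via the mailman algorithm:

```python
import math
import random

def achlioptas_matrix(k, d, seed=0):
    """Draw a k x d matrix with i.i.d. entries +-1/sqrt(k), each w.p. 1/2."""
    rng = random.Random(seed)
    s = 1.0 / math.sqrt(k)
    return [[s if rng.random() < 0.5 else -s for _ in range(d)]
            for _ in range(k)]

def apply_naive(a, x):
    """Plain O(kd) matrix-vector product, the baseline to beat."""
    return [sum(row[j] * x[j] for j in range(len(x))) for row in a]
```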

141 | Fast Monte Carlo algorithms for matrices III: Computing a compressed approximate matrix decomposition
- Drineas, Kannan, et al.
Citation Context: ...= P². Here, a rectangular matrix Ψ is said to be a projection if ΨᵀΨ = (ΨᵀΨ)². ...approximate nearest neighbor searching [5, 6, 7, 8, 9], learning [10, 11, 12], matrix low rank approximation [13, 14, 15, 16, 17, 18, 19, 20], other linear algebraic operations [21, 22, 23, 24], and many other algorithms and applications, e.g., [25, 26, 27, 28, 29, 30]. 2.2 Classic results, review of known JL distributions. The construction...

137 | Random Projection in Dimensionality Reduction: Applications to Image and Text
- Bingham, Mannila
- 2001
Citation Context: ...us not only with interesting theoretical results but also with useful practical tools. Experimental papers which use random projections deal with: information retrieval for text documents and images [51], learning Gaussian mixture models [52], and data mining and PCA [53] and [27], to name just a few. Although many consider the usage of random projections, they do not consider the differences between diffe...

113 | An elementary proof of the Johnson-Lindenstrauss lemma
- Dasgupta, Gupta
- 1999
Citation Context: ...roceed to give the relevant concentration over S^{d−1}, which proves the lemma. Their proof, however, can be made significantly simpler by considering a slightly modified construction. Dasgupta and Gupta [31] as well as Frankl and Maehara [32] suggested that each entry in Ψ be chosen uniformly at random from a Gaussian distribution (without orthogonalization). These proofs still rely on the rotational in...

104 | Approximate nearest neighbors and the fast Johnson–Lindenstrauss transform
- Ailon, Chazelle
Citation Context: ...a subset of R^d. FJLT: the fast JL transform by Ailon and Chazelle [1]. FJLTr: a revised version of the FJLT algorithm, Chapter 3 and [2]. FWI: ...

104 | Approximate Nearest Neighbor Queries in Fixed Dimensions
- Arya, Mount
- 1993
Citation Context: ...Linear Algebra, a projection matrix P is a square matrix such that P = P². Here, a rectangular matrix Ψ is said to be a projection if ΨᵀΨ = (ΨᵀΨ)². ...approximate nearest neighbor searching [5, 6, 7, 8, 9], learning [10, 11, 12], matrix low rank approximation [13, 14, 15, 16, 17, 18, 19, 20], other linear algebraic operations [21, 22, 23, 24], and many other algorithms and applications, e.g., [25, 26, 2...

99 | The Johnson-Lindenstrauss lemma and the sphericity of some graphs
- Frankl, Maehara
- 1988
Citation Context: ...ration over S^{d−1}, which proves the lemma. Their proof, however, can be made significantly simpler by considering a slightly modified construction. Dasgupta and Gupta [31] as well as Frankl and Maehara [32] suggested that each entry in Ψ be chosen uniformly at random from a Gaussian distribution (without orthogonalization). These proofs still rely on the rotational invariance of the distribution but ar...

92 | Improved approximation algorithms for large matrices via random projections
- Sarlós
- 2006
Citation Context: ...= P². Here, a rectangular matrix Ψ is said to be a projection if ΨᵀΨ = (ΨᵀΨ)². ...approximate nearest neighbor searching [5, 6, 7, 8, 9], learning [10, 11, 12], matrix low rank approximation [13, 14, 15, 16, 17, 18, 19, 20], other linear algebraic operations [21, 22, 23, 24], and many other algorithms and applications, e.g., [25, 26, 27, 28, 29, 30]. 2.2 Classic results, review of known JL distributions. The construction...

91 | An algorithmic theory of learning: Robust concepts and random projection
- Arriaga, Vempala
Citation Context: ...ion matrix P is a square matrix such that P = P². Here, a rectangular matrix Ψ is said to be a projection if ΨᵀΨ = (ΨᵀΨ)². ...approximate nearest neighbor searching [5, 6, 7, 8, 9], learning [10, 11, 12], matrix low rank approximation [13, 14, 15, 16, 17, 18, 19, 20], other linear algebraic operations [21, 22, 23, 24], and many other algorithms and applications, e.g., [25, 26, 27, 28, 29, 30]. 2.2 Cla...

86 | A replacement for voronoi diagrams of near linear size
- Har-Peled
- 2001
Citation Context: ..., 7, 8, 9], learning [10, 11, 12], matrix low rank approximation [13, 14, 15, 16, 17, 18, 19, 20], other linear algebraic operations [21, 22, 23, 24], and many other algorithms and applications, e.g., [25, 26, 27, 28, 29, 30]. 2.2 Classic results, review of known JL distributions. The construction of Johnson and Lindenstrauss is surprisingly simple. They proposed choosing Ψ uniformly at random from the space of projection...

68 | Sampling from large matrices: An approach through geometric functional analysis
- Rudelson, Vershynin
Citation Context: ...refined and improved by Kannan, Vempala, Mahoney, Muthukrishnan, and Drineas in a series of papers [14, 15, 16]. The strongest result in this line of work was given recently by Rudelson and Vershynin [54]. From this point on, we refer to the method described in [54] as random sampling. For completeness we recall the relevant theorem. Theorem 8.3.1 (Rudelson, Vershynin [54]). Let M be a d × n matrix with...

67 | The Random Projection Method
- Vempala
- 2004
Citation Context: ..., 7, 8, 9], learning [10, 11, 12], matrix low rank approximation [13, 14, 15, 16, 17, 18, 19, 20], other linear algebraic operations [21, 22, 23, 24], and many other algorithms and applications, e.g., [25, 26, 27, 28, 29, 30]. 2.2 Classic results, review of known JL distributions. The construction of Johnson and Lindenstrauss is surprisingly simple. They proposed choosing Ψ uniformly at random from the space of projection...

61 | On economic construction of the transitive closure of a directed graph (in Russian; English translation in Soviet Math. Dokl.)
- Arlazarov, Dinic, et al.
- 1975
Citation Context: ...ndermonde and others can be applied to vectors in O(n polylog(n)) operations. Others have focused on matrix-matrix multiplication. For two n × n binary matrices the historical Four Russians Algorithm [42] (modified in [43]) gives a log factor improvement over the naïve algorithm, i.e., a running time of n³/log(n). Techniques for saving log factors in real valued matrix-matrix multiplications were al...

42 | Fast dimension reduction using Rademacher series on dual BCH codes
- Ailon, Liberty
- 2000
Citation Context: ...his thesis. (1) JL: a naïve implementation of the Johnson-Lindenstrauss lemma. (2) FJLT: the fast JL transform by Ailon and Chazelle [1]. (3) FJLTr: a revised version of the FJLT algorithm, Chapter 3 and [2]. (4) FWI: a projection algorithm that uses the properties of four-wise independent matrices, Chapter 5 and [2]. (5) JL concatenation: a concatenation of several independent projections, Section 5.5. (6)...

39 | Experiments with random projections for machine learning
- Fradkin, Madigan
- 2003
Citation Context: ...seful practical tools. Experimental papers which use random projections deal with: information retrieval for text documents and images [51], learning Gaussian mixture models [52], and data mining and PCA [53] and [27], to name just a few. Although many consider the usage of random projections, they do not consider the differences between different projection algorithms: the quality of length preservation an...

37 | Relative-error cur matrix decompositions
- Drineas, Mahoney, et al.
Citation Context: ...ΨᵀΨ = (ΨᵀΨ)². ...approximate nearest neighbor searching [5, 6, 7, 8, 9], learning [10, 11, 12], matrix low rank approximation [13, 14, 15, 16, 17, 18, 19, 20], other linear algebraic operations [21, 22, 23, 24], and many other algorithms and applications, e.g., [25, 26, 27, 28, 29, 30]. 2.2 Classic results, review of known JL distributions. The construction of Johnson and Lindenstrauss is surprisingly simple...

36 | On variants of the Johnson–Lindenstrauss lemma
- Matousek
- 2006
Citation Context: ...not aware of its specific origin. ...construction was given by Dimitris Achlioptas [33], who proposed a distribution over matrices Ψ such that Ψ(i, j) ∈ {−1, 0, 1} with constant probabilities. Matousek [34] extended this result to any i.i.d. sub-Gaussian symmetric distributed entries. These proofs rely on a slightly weaker condition, which is the independence of the rows of Ψ. Denote by Ψ^(i) the i'th r...

34 | A neuroidal architecture for cognitive computation
- Valiant
- 2000
Citation Context: ...ion matrix P is a square matrix such that P = P². Here, a rectangular matrix Ψ is said to be a projection if ΨᵀΨ = (ΨᵀΨ)². ...approximate nearest neighbor searching [5, 6, 7, 8, 9], learning [10, 11, 12], matrix low rank approximation [13, 14, 15, 16, 17, 18, 19, 20], other linear algebraic operations [21, 22, 23, 24], and many other algorithms and applications, e.g., [25, 26, 27, 28, 29, 30]. 2.2 Cla...

33 | On Approximate Nearest Neighbors in Non-Euclidean Spaces
- Indyk
- 1998

27 | Fast monte-carlo algorithms for approximate matrix multiplication
- Drineas, Kannan
- 2001

24 | Sampling algorithms for ℓ2 regression and applications
- Drineas, Mahoney, et al.
Citation Context: ...ΨᵀΨ = (ΨᵀΨ)². ...approximate nearest neighbor searching [5, 6, 7, 8, 9], learning [10, 11, 12], matrix low rank approximation [13, 14, 15, 16, 17, 18, 19, 20], other linear algebraic operations [21, 22, 23, 24], and many other algorithms and applications, e.g., [25, 26, 27, 28, 29, 30]. 2.2 Classic results, review of known JL distributions. The construction of Johnson and Lindenstrauss is surprisingly simple...

18 | Some estimates of norms of random matrices
- Latala
- 2005
Citation Context: ...andom Gaussian matrix, N(i, j) ∼ N(0, 1/√d). The entry-wise standard deviation of 1/√d is required to have E(‖N‖₂) = O(1) (for noise distributions other than Gaussian the reader is referred to [55]). Finally, we denote by M the matrix M = D + σN. Using the matrix M, we are interested in producing a d × k orthogonal matrix V which numerically spans the right singular space of D, i.e. the columns o...

17 | Sampling algorithms and coresets for ℓp regression
- Dasgupta, Drineas, et al.
- 2008
Citation Context: ...ΨᵀΨ = (ΨᵀΨ)². ...approximate nearest neighbor searching [5, 6, 7, 8, 9], learning [10, 11, 12], matrix low rank approximation [13, 14, 15, 16, 17, 18, 19, 20], other linear algebraic operations [21, 22, 23, 24], and many other algorithms and applications, e.g., [25, 26, 27, 28, 29, 30]. 2.2 Classic results, review of known JL distributions. The construction of Johnson and Lindenstrauss is surprisingly simple...

12 | Intra- and interpopulation genotype reconstruction from tagging SNPs. Genome Res.
- Paschou, Mahoney, et al.
- 2007

10 | PCA-correlated SNPs for structure identification in worldwide human populations
- Paschou, Ziv, et al.
- 2007

9 | Matrix-vector multiplication in sub-quadratic time (some preprocessing required)
- Williams
- 2007
Citation Context: ...essed at least once, we consider a preprocessing stage. After the preprocessing stage x is given and we seek an algorithm to produce the product Ax as fast as possible. Within this framework Williams [48] showed that an n × n binary matrix can be preprocessed in time O(n^{2+ε}) and subsequently applied to binary vectors in O(n²/(ε log² n)). Williams also extends his result to matrix operations over finite...

8 | Problems and results in extremal combinatorics—I, Discrete Math
- Alon
- 2003
Citation Context: ...at any set of n points in R^d (equipped with the ℓ2 metric) can be embedded into dimension k = Θ(log(n)/ε²) with distortion at most ε using a randomly generated linear mapping Ψ. Moreover, Noga Alon [4] showed that this result is essentially tight (in its dependence on n). It is remarkable to notice that the target dimension k is not only logarithmic in the input size (n) but also independent of the...

6 | An improved algorithm for boolean matrix multiplication
- Santoro, Urrutia
- 1986
Citation Context: ...rs can be applied to vectors in O(n polylog(n)) operations. Others have focused on matrix-matrix multiplication. For two n × n binary matrices the historical Four Russians Algorithm [42] (modified in [43]) gives a log factor improvement over the naïve algorithm, i.e., a running time of n³/log(n). Techniques for saving log factors in real valued matrix-matrix multiplications were also found [44]. The...

5 | The Mailman algorithm: A note on matrix-vector multiplication
- Liberty, Zucker
- 2009
Citation Context: ...s of four-wise independent matrices, Chapter 5 and [2]. (5) JL concatenation: a concatenation of several independent projections, Section 5.5. (6) JL + Mailman: implementation of the Mailman algorithm [35] to Achlioptas's result, Chapter 7. Except for the case where k is Ω(poly(d)) and o((d log(d))^{1/3}), our results strictly outperform previous algorithms. ...k × d projection matrix, application complex...
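Since the mailman algorithm [35] recurs throughout these entries, a hedged sketch of its idea may help. A k × n matrix over {0,1} with k = log₂(n) factors as A = UP: P "sorts the mail" by bucketing input coordinates by column pattern in O(n), and the universal matrix U, whose p-th column is the k-bit binary expansion of p, is applied recursively in O(2^k) = O(n). This is our reading of the scheme, with illustrative function names, not the paper's published code:

```python
def mailman_matvec(patterns, x, k):
    """Multiply a k x n binary matrix by x in O(n), where n = len(x) and
    patterns[j] = integer whose k-bit expansion (MSB first) is column j
    of the matrix (computed once in the O(kn) preprocessing stage)."""
    # "Sort the mail": bucket x by column pattern -- this computes P x.
    z = [0.0] * (1 << k)
    for j, p in enumerate(patterns):
        z[p] += x[j]

    # "Deliver": apply the universal k x 2^k matrix U recursively.
    def apply_u(z, k):
        if k == 0:
            return []
        half = len(z) // 2
        lo, hi = z[:half], z[half:]
        # The row for the current most significant bit is 1 exactly on
        # the second half of the patterns; lower bits repeat, so fold.
        return [sum(hi)] + apply_u([a + b for a, b in zip(lo, hi)], k - 1)

    return apply_u(z, k)
```

A ±1 matrix such as Achlioptas's reduces to this binary case via A = 2B − J, where B is binary and Jx is the all-equal vector of entry sums, so the O(n) bound carries over.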

3 | Fast computation of low-rank matrix approximations
- 2007

3 | Faster least squares approximation. TR arXiv:0710.1435, submitted for publication
- Drineas, Mahoney, et al.
- 2007

3 | Dense fast random projections and lean Walsh transforms
- Liberty, Ailon, Singer
- 2008
Citation Context: ...= O((d/k)^{−1/2}). Matousek [34]: sparse ±1 entries, O(k²dη²), ‖x‖_∞ ≤ η. General rule: any matrix, ‖x‖_A = O(k^{−1/2}). This thesis [2]: 4-wise independent matrix, O(d log k), ‖x‖_4 = O(d^{−1/4}). This thesis [36]: Lean Walsh Transform, O(d), ‖x‖_∞ = O(k^{−1/2} d^{−δ}). This thesis: identity copies, O(d), ‖x‖_∞ = O((k log k)^{−1/2}). Tab. 2.2: Different distributions for k × d matrices and the set χ ⊂ S^{d−1} for which they...

3 | Extending the Four Russians' bound to general matrix multiplication
- Santoro
- 1980
Citation Context: ...d in [43]) gives a log factor improvement over the naïve algorithm, i.e., a running time of n³/log(n). Techniques for saving log factors in real valued matrix-matrix multiplications were also found [44]. These methods are reported to be practically more efficient than naïve implementations. Classic positive results, achieving a polynomial speedup, by Strassen [45] and Coppersmith and Winograd [46], a...

3 | On the number of multiplications required for matrix multiplication
- Brockett, Dobkin
- 1976
Citation Context: ...orted to be practically more efficient than naïve implementations. Classic positive results, achieving a polynomial speedup, by Strassen [45] and Coppersmith and Winograd [46], as well as lower bounds [47] for general matrix-matrix multiplications, are known. These, however, do not extend to matrix-vector multiplication. We return to matrix-vector operations. Since every entry of the matrix, A, must be...

2 | Projection constants of symmetric spaces and variants of Khintchine's inequality
- König, Tomczak-Jaegermann
- 1999
Citation Context: ...|] where P is any single coordinate of k^{1/2}BDx. We follow (almost exactly) a proof by Matousek in [34], where he uses a quantitative version of the Central Limit Theorem by König, Schütt, and Tomczak [40]. Lemma 5.4.1 (König-Schütt-Tomczak). Let z1, ..., zd be independent symmetric random variables with ∑_{i=1}^d E[z_i²] = 1, let F(t) = Pr[∑_{i=1}^d z_i < t], and let ϕ(t) = (1/√(2π)) ∫_{−∞}^t e^{−x²/2} dx. Then for...

1 | A randomized algorithm for the approximation of matrices
- Martinsson, Rokhlin, et al.
- 2007