Results 1–10 of 24
FINDING STRUCTURE WITH RANDOMNESS: PROBABILISTIC ALGORITHMS FOR CONSTRUCTING APPROXIMATE MATRIX DECOMPOSITIONS
"... Lowrank matrix approximations, such as the truncated singular value decomposition and the rankrevealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for ..."
Abstract

Cited by 47 (1 self)
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition
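The two-stage scheme this abstract describes (randomly sample a subspace that captures the action of the matrix, then factor the compressed matrix deterministically) can be sketched in a few lines of NumPy. This is a minimal illustration under generic assumptions; the function name, oversampling parameter, and test matrix below are invented for the example, not taken from the paper:

```python
import numpy as np

def randomized_svd(A, k, p=5):
    """Rank-k SVD via a randomized range finder.

    Stage 1: sketch the range of A with a Gaussian test matrix and
    orthonormalize, giving a basis Q with k + p columns (p = oversampling).
    Stage 2: compress A to that subspace (B = Q^T A) and run a
    deterministic SVD on the small matrix B.
    """
    m, n = A.shape
    rng = np.random.default_rng(0)
    Omega = rng.standard_normal((n, k + p))   # random test matrix
    Y = A @ Omega                             # sample of the range of A
    Q, _ = np.linalg.qr(Y)                    # orthonormal basis, m x (k+p)
    B = Q.T @ A                               # small (k+p) x n compression
    Uhat, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Uhat                              # lift factors back to R^m
    return U[:, :k], s[:k], Vt[:k]

# A 50 x 30 matrix of exact rank 3 (sum of three rank-1 outer products)
A = np.outer(np.arange(1, 51.0), np.ones(30)) \
    + np.outer(np.sin(np.arange(50.0)), np.cos(np.arange(30.0))) \
    + np.outer(np.arange(50.0) ** 2, np.arange(30.0))
U, s, Vt = randomized_svd(A, k=3)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

Because the test matrix has exact rank 3, the rank-3 reconstruction error is at machine-precision level; for matrices with slowly decaying spectra, the oversampling parameter and power iterations become important.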
FINDING STRUCTURE WITH RANDOMNESS: STOCHASTIC ALGORITHMS FOR CONSTRUCTING APPROXIMATE MATRIX DECOMPOSITIONS
, 2009
"... Lowrank matrix approximations, such as the truncated singular value decomposition and the rankrevealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys recent research which demonstrates that randomization offers a powerful tool for performing l ..."
Abstract

Cited by 28 (2 self)
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. In particular, these techniques offer a route toward principal component analysis (PCA) for petascale data. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider
Randomized methods for linear constraints: convergence rates and conditioning
 Math. Oper. Res
"... iterated projections, averaged projections, distance to illposedness, metric regularity AMS 2000 Subject Classification: 15A12, 15A39, 65F10, 90C25 We study randomized variants of two classical algorithms: coordinate descent for systems of linear equations and iterated projections for systems of li ..."
Abstract

Cited by 9 (1 self)
Keywords: iterated projections, averaged projections, distance to ill-posedness, metric regularity. AMS 2000 Subject Classification: 15A12, 15A39, 65F10, 90C25. We study randomized variants of two classical algorithms: coordinate descent for systems of linear equations and iterated projections for systems of linear inequalities. Expanding on a recent randomized iterated projection algorithm of Strohmer and Vershynin for systems of linear equations, we show that, under appropriate probability distributions, the linear rates of convergence (in expectation) can be bounded in terms of natural linear-algebraic condition numbers for the problems. We relate these condition measures to distances to ill-posedness, and discuss generalizations to convex systems under metric regularity assumptions.
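As a rough illustration of the first randomized variant studied here, a minimal NumPy sketch of randomized coordinate descent on the least-squares objective f(x) = ½‖Ax − b‖², sampling coordinate j with probability proportional to ‖a_j‖² (the problem sizes, iteration count, and names are invented for the example; the expected linear rate is governed by λ_min(AᵀA)/‖A‖_F², a condition number of the kind the paper analyzes):

```python
import numpy as np

def randomized_cd(A, b, iters=5000, seed=0):
    """Randomized coordinate descent for min_x 0.5*||Ax - b||^2.

    Each step picks a column j with probability ||a_j||^2 / ||A||_F^2
    and minimizes the objective exactly along that coordinate.
    """
    m, n = A.shape
    rng = np.random.default_rng(seed)
    col_norms = np.sum(A ** 2, axis=0)
    probs = col_norms / col_norms.sum()
    x = np.zeros(n)
    r = b.copy()                       # residual b - A x, kept up to date
    for _ in range(iters):
        j = rng.choice(n, p=probs)
        delta = A[:, j] @ r / col_norms[j]   # exact line search in coord j
        x[j] += delta
        r -= delta * A[:, j]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 20))          # well-conditioned overdetermined system
x_true = rng.standard_normal(20)
b = A @ x_true
x = randomized_cd(A, b)
err = np.linalg.norm(x - x_true)
```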
RANDOMIZED KACZMARZ SOLVER FOR NOISY LINEAR SYSTEMS
"... Abstract. The Kaczmarz method is an iterative algorithm for solving systems of linear equations Ax = b. Theoretical convergence rates for this algorithm were largely unknown until recently when work was done on a randomized version of the algorithm. It was proved that for overdetermined systems, the ..."
Abstract

Cited by 9 (3 self)
Abstract. The Kaczmarz method is an iterative algorithm for solving systems of linear equations Ax = b. Theoretical convergence rates for this algorithm were largely unknown until recently, when work was done on a randomized version of the algorithm. It was proved that for overdetermined systems, the randomized Kaczmarz method converges with expected exponential rate, independent of the number of equations in the system. Here we analyze the case where the system Ax = b is corrupted by noise, so we consider the system Ax ≈ b + r where r is an arbitrary error vector. We prove that in this noisy version, the randomized method reaches an error threshold dependent on the matrix A, with the same rate as in the error-free case. We provide examples showing our results are sharp in the general context.
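The behavior this abstract describes is easy to observe numerically: the iterates contract at the noise-free rate until they hit an error floor set by the noise. Below is a minimal sketch of the randomized Kaczmarz iteration on a noisy overdetermined system (sizes, noise level, and names are invented for the example):

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=3000, seed=0):
    """Randomized Kaczmarz for Ax = b (possibly inconsistent).

    Each step picks row i with probability ||a_i||^2 / ||A||_F^2 and
    projects the iterate onto the hyperplane a_i^T x = b_i.
    """
    m, n = A.shape
    rng = np.random.default_rng(seed)
    row_norms = np.sum(A ** 2, axis=1)
    probs = row_norms / row_norms.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((200, 10))
x_true = rng.standard_normal(10)
noise = 1e-3 * rng.standard_normal(200)    # the error vector r
b = A @ x_true + noise
x = randomized_kaczmarz(A, b)
err = np.linalg.norm(x - x_true)           # plateaus near the noise level
```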
Fullwaveform inversion from compressively recovered model updates
"... Fullwaveform inversion relies on the collection of large multiexperiment data volumes in combination with a sophisticated backend to create highfidelity inversion results. While improvements in acquisition and inversion have been extremely successful, the current trend of incessantly pushing for ..."
Abstract

Cited by 5 (5 self)
Full-waveform inversion relies on the collection of large multi-experiment data volumes in combination with a sophisticated back-end to create high-fidelity inversion results. While improvements in acquisition and inversion have been extremely successful, the current trend of incessantly pushing for higher-quality models in increasingly complicated regions of the Earth reveals fundamental shortcomings in our ability to handle increasing problem sizes numerically. Two main culprits can be identified. First, there is the so-called “curse of dimensionality” exemplified by Nyquist’s sampling criterion, which puts disproportionate strain on current acquisition and processing systems as the size and desired resolution increase. Second, there is the recent “departure from Moore’s law” that forces us to lower our expectations of computing ourselves out of this predicament. In this paper, we address this situation by randomized dimensionality reduction, which we adapt from the field of compressive sensing. In this approach, we combine deliberate randomized subsampling with structure-exploiting transform-domain sparsity promotion. Our approach is successful because it reduces the size of seismic data volumes without loss of information. With this reduction, we compute Newton-like updates at the cost of roughly one gradient update for the fully sampled wavefield.
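The core mechanism the abstract invokes, randomized subsampling combined with transform-domain sparsity promotion, is a compressive-sensing recovery problem. The following is a minimal sketch of that idea under generic assumptions (a random Gaussian measurement matrix and plain iterative soft-thresholding, ISTA), not the authors' seismic workflow:

```python
import numpy as np

def ista(M, y, lam, iters=5000):
    """Iterative soft-thresholding (ISTA) for the sparsity-promoting
    problem  min_x  0.5 * ||M x - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(M, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(M.shape[1])
    for _ in range(iters):
        g = x - M.T @ (M @ x - y) / L          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(3)
n, m, k = 200, 80, 5                           # ambient dim, measurements, sparsity
M = rng.standard_normal((m, n)) / np.sqrt(m)   # randomized subsampling operator
x_true = np.zeros(n)
idx = rng.choice(n, k, replace=False)
x_true[idx] = rng.standard_normal(k) + 2.0 * np.sign(rng.standard_normal(k))
y = M @ x_true                                 # far fewer measurements than unknowns
x_hat = ista(M, y, lam=0.01)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Even with only 80 measurements of a 200-dimensional signal, the 5-sparse vector is recovered accurately, which is the "reduction without loss of information" that underpins the approach.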
Parallel Coordinate Descent Methods for Big Data Optimization
, 2012
"... In this work we show that randomized (block) coordinate descent methods can be accelerated by parallelization when applied to the problem of minimizing the sum of a partially separable smooth convex function and a simple separable convex function. The theoretical speedup, as compared to the serial m ..."
Abstract

Cited by 4 (3 self)
In this work we show that randomized (block) coordinate descent methods can be accelerated by parallelization when applied to the problem of minimizing the sum of a partially separable smooth convex function and a simple separable convex function. The theoretical speedup over the serial method, measured by the number of iterations needed to approximately solve the problem with high probability, is a simple expression depending on the number of parallel processors and a natural and easily computable measure of separability of the smooth component of the objective function. In the worst case, when no degree of separability is present, there may be no speedup; in the best case, when the problem is separable, the speedup is equal to the number of processors. Our analysis also works in the setting where the number of blocks updated at each iteration is random, which allows for modeling situations with busy or unreliable processors. We show that our algorithm is able to solve a LASSO problem involving a matrix with 20 billion nonzeros in 2 hours on a large memory node with 24 cores.
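The serial building block that the paper parallelizes can be illustrated with a minimal randomized coordinate descent solver for the LASSO, where each step solves the one-dimensional subproblem in a random coordinate in closed form via soft-thresholding. This is a generic single-threaded sketch, not the authors' parallel algorithm, and all names and sizes are invented for the example:

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator, the prox of t*|.|."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cd_lasso(A, b, lam, epochs=400, seed=0):
    """Randomized coordinate descent for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Each iteration minimizes the objective exactly in one random coordinate.
    """
    m, n = A.shape
    rng = np.random.default_rng(seed)
    col_norms = np.sum(A ** 2, axis=0)
    x = np.zeros(n)
    r = b.copy()                               # residual b - A x
    for _ in range(epochs * n):
        j = rng.integers(n)
        rho = A[:, j] @ r + col_norms[j] * x[j]   # partial correlation
        x_new = soft(rho, lam) / col_norms[j]     # closed-form 1-D minimizer
        r += A[:, j] * (x[j] - x_new)             # incremental residual update
        x[j] = x_new
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((100, 50))
x_true = np.zeros(50)
x_true[[3, 8, 17, 29, 41]] = [2.0, 2.5, -3.0, 1.5, -2.0]
b = A @ x_true
x = cd_lasso(A, b, lam=0.1)
err = np.linalg.norm(x - x_true)
```

The parallel method of the paper updates many such coordinates simultaneously; its speedup analysis hinges on how much the columns of A couple the coordinate subproblems.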
Acceleration of Randomized Kaczmarz Method via the JohnsonLindenstrauss Lemma
, 2010
"... The Kaczmarz method is an algorithm for finding the solution to an overdetermined system of linear equations Ax = b by iteratively projecting onto the solution spaces. The randomized versionputforthbyStrohmerandVershyninyieldsprovablyexponentialconvergenceinexpectation, which for highly overdetermin ..."
Abstract

Cited by 4 (1 self)
The Kaczmarz method is an algorithm for finding the solution to an overdetermined system of linear equations Ax = b by iteratively projecting onto the solution spaces. The randomized version put forth by Strohmer and Vershynin yields provably exponential convergence in expectation, which for highly overdetermined systems even outperforms the conjugate gradient method. In this article we present a modified version of the randomized Kaczmarz method which at each iteration selects the optimal projection from a randomly chosen set, which in most cases significantly improves the convergence rate. We utilize a Johnson-Lindenstrauss dimension reduction technique to keep the runtime on the same order as the original randomized version, adding only extra preprocessing time. We present a series of empirical studies which demonstrate the remarkable acceleration in convergence to the solution using this modified approach.
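A toy version of the selection rule described here (draw a random set of rows, then project onto the row whose hyperplane is farthest from the current iterate) might look as follows. This sketch computes the projection distances exactly rather than via the paper's Johnson-Lindenstrauss speedup, and the names, batch size, and problem sizes are illustrative:

```python
import numpy as np

def greedy_rk(A, b, iters=2000, batch=10, seed=0):
    """Randomized Kaczmarz with best-of-sample row selection.

    Each step samples `batch` rows uniformly, then projects onto the
    hyperplane farthest from the iterate, i.e. the row maximizing
    (b_i - a_i^T x)^2 / ||a_i||^2 within the sample.
    """
    m, n = A.shape
    rng = np.random.default_rng(seed)
    row_norms = np.sum(A ** 2, axis=1)
    x = np.zeros(n)
    for _ in range(iters):
        S = rng.choice(m, size=batch, replace=False)
        dists = (b[S] - A[S] @ x) ** 2 / row_norms[S]   # squared distances
        i = S[np.argmax(dists)]                         # optimal row in sample
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]    # project onto it
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((200, 10))
x_true = rng.standard_normal(10)
b = A @ x_true                      # consistent overdetermined system
x = greedy_rk(A, b)
err = np.linalg.norm(x - x_true)
```

Since each projection removes exactly the squared distance to the chosen hyperplane from the squared error, picking the farthest hyperplane in the sample yields at least as large a per-step decrease as picking a sampled row at random, which is the intuition behind the observed acceleration.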
Randomized full-waveform inversion: a dimensionality-reduction approach
"... Fullwaveform inversion relies on the collection of large multiexperiment data volumes in combination with a sophisticated backend to create highfidelity inversion results. While improvements in acquisition and inversion have been extremely successful, the current trend of incessantly pushing for ..."
Abstract

Cited by 3 (3 self)
Full-waveform inversion relies on the collection of large multi-experiment data volumes in combination with a sophisticated back-end to create high-fidelity inversion results. While improvements in acquisition and inversion have been extremely successful, the current trend of incessantly pushing for higher-quality models in increasingly complicated regions of the Earth reveals fundamental shortcomings in our ability to handle increasing problem sizes numerically. Two main culprits can be identified. First, there is the so-called “curse of dimensionality” exemplified by Nyquist’s sampling criterion, which puts disproportionate strain on current acquisition and processing systems as the size and desired resolution increase. Second, there is the recent “departure from Moore’s law” that forces us to develop algorithms that are amenable to parallelization. In this paper, we discuss different strategies that address these issues via randomized dimensionality reduction.
Efficient leastsquares migration with sparsity promotion
 Presented at the EAGE Technical Program Expanded Abstracts
, 2011
"... Seismic imaging relies on the collection of multiexperimental data volumes in combination with a sophisticated backend to create highfidelity inversion results. While significant improvements have been made in linearized inversion, the current trend of incessantly pushing for higher quality model ..."
Abstract

Cited by 2 (2 self)
Seismic imaging relies on the collection of multi-experiment data volumes in combination with a sophisticated back-end to create high-fidelity inversion results. While significant improvements have been made in linearized inversion, the current trend of incessantly pushing for higher-quality models in increasingly complicated regions reveals fundamental shortcomings in handling increasing problem sizes numerically. The so-called “curse of dimensionality” is the main culprit because it leads to an exponential growth in the number of sources and the corresponding number of wavefield simulations required by ‘wave-equation’ migration. We address this issue by reducing the number of sources with a randomized dimensionality-reduction technique that combines recent developments in stochastic optimization and compressive sensing. As a result, we replace the current formulations of imaging that rely on all data by a sequence of smaller imaging problems that use the output of the previous inversion as input for the next. Empirically, we find speedups of at least one order of magnitude when each reduced experiment is considered theoretically as a separate compressive-sensing experiment.