Results 1–10 of 259
Extrapolation Methods for Accelerating PageRank Computations
In Proceedings of the Twelfth International World Wide Web Conference, 2003
"... We present a novel algorithm for the fast computation of PageRank, a hyperlinkbased estimate of the "importance" of Web pages. The original PageRank algorithm uses the Power Method to compute successive iterates that converge to the principal eigenvector of the Markov matrix representing ..."
Abstract

Cited by 135 (13 self)
 Add to MetaCart
We present a novel algorithm for the fast computation of PageRank, a hyperlink-based estimate of the "importance" of Web pages. The original PageRank algorithm uses the Power Method to compute successive iterates that converge to the principal eigenvector of the Markov matrix representing the Web link graph. The algorithm presented here, called Quadratic Extrapolation, accelerates the convergence of the Power Method by periodically subtracting off estimates of the nonprincipal eigenvectors from the current iterate of the Power Method. In Quadratic Extrapolation, we take advantage of the fact that the first eigenvalue of a Markov matrix is known to be 1 to compute the nonprincipal eigenvectors using successive iterates of the Power Method. Empirically, we show that using Quadratic Extrapolation speeds up PageRank computation by 50–300% on a Web graph of 80 million nodes, with minimal overhead.
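The extrapolation idea can be illustrated with a toy power iteration. The sketch below uses a simpler componentwise Aitken delta-squared step rather than the paper's actual Quadratic Extrapolation, and a made-up 3-node graph; it only illustrates the general idea of periodically cancelling slowly decaying error directions.

```python
import numpy as np

# Tiny 3-node link graph, column-stochastic (illustrative data, not from the paper).
A = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])
c = 0.85                                        # damping / teleportation factor
n = A.shape[0]
M = c * A + (1.0 - c) / n * np.ones((n, n))     # "Google matrix", still column-stochastic

def aitken(x0, x1, x2, eps=1e-12):
    """Componentwise Aitken delta-squared extrapolation -- a simpler cousin of
    the paper's Quadratic Extrapolation, shown only to illustrate the idea of
    subtracting off slowly decaying error terms."""
    d1 = x1 - x0
    d2 = x2 - 2.0 * x1 + x0
    out = x2.copy()
    safe = np.abs(d2) > eps                      # guard against division by ~0
    out[safe] = x0[safe] - d1[safe] ** 2 / d2[safe]
    out = np.maximum(out, 0.0)                   # keep it a probability vector
    return out / out.sum()

def accelerated_power_method(M, tol=1e-12, max_iter=500, extrapolate_every=10):
    """Power Method with a periodic extrapolation step spliced in."""
    history = [np.ones(M.shape[0]) / M.shape[0]]
    for k in range(1, max_iter + 1):
        x = M @ history[-1]
        x /= x.sum()
        history.append(x)
        if np.abs(history[-1] - history[-2]).sum() < tol:
            break
        if k % extrapolate_every == 0 and len(history) >= 3:
            history.append(aitken(history[-3], history[-2], history[-1]))
    return history[-1]

pagerank = accelerated_power_method(M)
```

The returned vector is the stationary distribution of M, i.e. the PageRank scores of the three toy pages.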
Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations. Version 1.0, Copyright MIT, 2006
"... reduced basis approximation and a posteriori error estimation for linear functional outputs of affinely parametrized elliptic coercive partial differential equations. The essential ingredients are (primaldual) Galerkin projection onto a lowdimensional space associated with a smooth “parametric ..."
Abstract

Cited by 85 (24 self)
 Add to MetaCart
reduced basis approximation and a posteriori error estimation for linear functional outputs of affinely parametrized elliptic coercive partial differential equations. The essential ingredients are (primal-dual) Galerkin projection onto a low-dimensional space associated with a smooth “parametric manifold”—dimension reduction; efficient and effective greedy sampling methods for identification of optimal and numerically stable approximations—rapid convergence; a posteriori error estimation procedures—rigorous and sharp bounds for the linear-functional outputs of interest; and Offline-Online computational decomposition strategies—minimum marginal cost for high performance in the real-time/embedded (e.g., parameter estimation, con…
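The Offline-Online decomposition mentioned at the end can be sketched for an affinely parametrized system A(μ) = Σ_q θ_q(μ) A_q: project each A_q onto the reduced basis once offline, then assemble and solve only small N×N systems online. Everything below (the affine components, the θ_q, the random snapshot basis) is hypothetical stand-in data, not the paper's setup, which builds the basis by greedy sampling driven by a posteriori error bounds.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, Q = 200, 4, 2                      # full dim, reduced dim, number of affine terms

# Hypothetical affine components A_q (SPD here) and a load vector f.
A_q = [np.diag(rng.uniform(1.0, 2.0, n)) for _ in range(Q)]
f = rng.standard_normal(n)

# Toy reduced basis: orthonormalized random "snapshots" (a real RB code would
# select snapshots greedily using the error estimator).
V, _ = np.linalg.qr(rng.standard_normal((n, N)))

# --- Offline stage: parameter-independent projections, done once ---
A_red = [V.T @ Aq @ V for Aq in A_q]     # each is N x N
f_red = V.T @ f                          # length N

def theta(mu):
    """Hypothetical affine coefficient functions theta_q(mu)."""
    return [1.0, mu]

# --- Online stage: for each new mu, assemble/solve only an N x N system ---
def rb_solve(mu):
    A_N = sum(t * Aq for t, Aq in zip(theta(mu), A_red))
    u_N = np.linalg.solve(A_N, f_red)
    return V @ u_N                       # lift the reduced solution to full space

u_rb = rb_solve(0.7)
```

The online cost is independent of the full dimension n, which is the point of the decomposition.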
Grid adaptation for functional outputs: application to two-dimensional inviscid flows
 J. Comput. Phys
"... www.elsevier.com/locate/jcp ..."
Smoothed analysis of the condition numbers and growth factors of matrices
SIAM J. Matrix Anal. Appl., 2002
"... Let A be an arbitrary matrix and let A be a slight random perturbation of A. We prove that it is unlikely that A has large condition number. Using this result, we prove it is unlikely that A has large growth factor under Gaussian elimination without pivoting. By combining these results, we show that ..."
Abstract

Cited by 41 (3 self)
 Add to MetaCart
Let Ā be an arbitrary matrix and let A be a slight random perturbation of Ā. We prove that it is unlikely that A has a large condition number. Using this result, we prove it is unlikely that A has a large growth factor under Gaussian elimination without pivoting. By combining these results, we show that the smoothed precision necessary to solve Ax = b, for any b, using Gaussian elimination without pivoting is logarithmic. Moreover, when Ā is an all-zero square matrix, our results significantly improve the average-case analysis of Gaussian elimination without pivoting performed by Yeung and Chan (SIAM J. Matrix Anal. Appl., 1997).
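The quantities in play can be reproduced numerically: perturb a base matrix (here the all-zero matrix, the average-case setting from the abstract), run Gaussian elimination without pivoting, and measure the growth factor ρ = max|u_ij| / max|a_ij|. A minimal sketch, not code from the paper:

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle LU factorization with NO pivoting (for illustration only;
    production code should always pivot)."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for k in range(n - 1):
        L[k+1:, k] = U[k+1:, k] / U[k, k]
        U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])
    return L, U

def growth_factor(A):
    """rho = max |u_ij| / max |a_ij| for GE without pivoting."""
    _, U = lu_no_pivot(A)
    return np.abs(U).max() / np.abs(A).max()

rng = np.random.default_rng(1)
n, sigma = 50, 1e-2
A_bar = np.zeros((n, n))                              # the all-zero base matrix
A = A_bar + sigma * rng.standard_normal((n, n))       # slight Gaussian perturbation
rho = growth_factor(A)
```

Smoothed analysis predicts that ρ is small with high probability for such perturbed matrices, even though worst-case inputs can make it blow up.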
A Newton-CG augmented Lagrangian method for semidefinite programming
 SIAM J. Optim
"... Abstract. We consider a NewtonCG augmented Lagrangian method for solving semidefinite programming (SDP) problems from the perspective of approximate semismooth Newton methods. In order to analyze the rate of convergence of our proposed method, we characterize the Lipschitz continuity of the corresp ..."
Abstract

Cited by 33 (7 self)
 Add to MetaCart
We consider a Newton-CG augmented Lagrangian method for solving semidefinite programming (SDP) problems from the perspective of approximate semismooth Newton methods. In order to analyze the rate of convergence of our proposed method, we characterize the Lipschitz continuity of the corresponding solution mapping at the origin. For the inner problems, we show that the positive definiteness of the generalized Hessian of the objective function in these inner problems, a key property for ensuring the efficiency of using an inexact semismooth Newton-CG method to solve the inner problems, is equivalent to the constraint nondegeneracy of the corresponding dual problems. Numerical experiments on a variety of large-scale SDPs with the matrix dimension n up to 4,110 and the number of equality constraints m up to 2,156,544 show that the proposed method is very efficient. We are also able to solve the SDP problem fap36 (with n = 4,110 and m = 1,154,467) in the Seventh DIMACS Implementation Challenge much more accurately than previous attempts.
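The augmented Lagrangian subproblems for SDP involve the metric projection onto the positive semidefinite cone, and the semismooth Newton-CG analysis revolves around the generalized Jacobian of that projection. A minimal sketch of the projection itself (standard eigenvalue thresholding, not code from the paper):

```python
import numpy as np

def proj_psd(M):
    """Metric projection of a matrix onto the positive semidefinite cone:
    symmetrize, keep the eigenvectors, and clip negative eigenvalues at zero.
    This map is not differentiable everywhere, only *semismooth*, which is
    why the paper works with its generalized Jacobian."""
    S = (M + M.T) / 2.0                  # symmetrize first
    w, V = np.linalg.eigh(S)             # eigenvalues in ascending order
    return (V * np.maximum(w, 0.0)) @ V.T
```

A PSD input is a fixed point of this map, and any output is itself PSD.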
Inverse Littlewood-Offord theorems and the condition number of random discrete matrices
 Annals of Mathematics
"... Abstract. Consider a random sum η1v1 +... + ηnvn, where η1,..., ηn are i.i.d. random signs and v1,..., vn are integers. The LittlewoodOfford problem asks to maximize concentration probabilities such as P(η1v1+...+ηnvn = 0) subject to various hypotheses on the v1,..., vn. In this paper we develop an ..."
Abstract

Cited by 29 (13 self)
 Add to MetaCart
Consider a random sum η1v1 + ... + ηnvn, where η1, ..., ηn are i.i.d. random signs and v1, ..., vn are integers. The Littlewood-Offord problem asks to maximize concentration probabilities such as P(η1v1 + ... + ηnvn = 0) subject to various hypotheses on the v1, ..., vn. In this paper we develop an inverse Littlewood-Offord theory (somewhat in the spirit of Freiman’s inverse theory in additive combinatorics), which starts with the hypothesis that a concentration probability is large, and concludes that almost all of the v1, ..., vn are efficiently contained in a generalized arithmetic progression. As an application we give a new bound on the magnitude of the least singular value of a random Bernoulli matrix, which in turn provides upper tail estimates on the condition number.
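The phenomenon the inverse theory explains is easy to probe empirically: additively structured coefficients (e.g. all v_i equal, an extreme arithmetic progression) give a large concentration probability, while spread-out coefficients (distinct powers of 2) give none. A small Monte Carlo sketch, with n = 10 chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(2)

def concentration_at_zero(v, trials=200_000):
    """Monte Carlo estimate of P(eta_1 v_1 + ... + eta_n v_n = 0)
    for i.i.d. uniform random signs eta_i."""
    v = np.asarray(v)
    signs = rng.choice([-1, 1], size=(trials, len(v)))
    return np.mean(signs @ v == 0)

# Structured v (all equal): the sum is 0 whenever exactly half the signs
# are +1, so the concentration probability is C(10,5)/2^10 ~ 0.246.
p_struct = concentration_at_zero([1] * 10)

# Spread-out v (distinct powers of 2): the top term +-512 can never be
# cancelled by the rest (|sum| <= 511), so the probability is exactly 0.
p_generic = concentration_at_zero([2 ** i for i in range(10)])
```

Large concentration going hand in hand with arithmetic structure is exactly what the inverse theorems make precise.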
OPTIMIZATION AND PERFORMANCE MODELING OF STENCIL COMPUTATIONS ON MODERN MICROPROCESSORS
"... Stencilbased kernels constitute the core of many important scientific applications on blockstructured grids. Unfortunately, these codes achieve a low fraction of peak performance, due primarily to the disparity between processor and main memory speeds. In this paper, we explore the impact of tre ..."
Abstract

Cited by 28 (8 self)
 Add to MetaCart
Stencil-based kernels constitute the core of many important scientific applications on block-structured grids. Unfortunately, these codes achieve a low fraction of peak performance, due primarily to the disparity between processor and main memory speeds. In this paper, we explore the impact of trends in memory subsystems on a variety of stencil optimization techniques and develop performance models to analytically guide our optimizations. Our work targets cache reuse methodologies across single and multiple stencil sweeps, examining cache-aware algorithms as well as cache-oblivious techniques on the Intel Itanium 2, AMD Opteron, and IBM Power5. Additionally, we consider stencil computations on the heterogeneous multicore design of the Cell processor, a machine with an explicitly managed memory hierarchy. Overall our work represents one of the most extensive analyses of stencil optimizations and performance modeling to date. Results demonstrate that recent trends in memory system organization have reduced the efficacy of traditional cache-blocking optimizations. We also show that a cache-aware implementation is significantly faster than a cache-oblivious approach, while the explicitly managed memory on Cell enables the highest overall efficiency: Cell attains 88% of algorithmic peak while the best competing cache-based processor only achieves 54% of algorithmic peak performance.
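Cache blocking for a single sweep amounts to traversing the grid in tiles small enough to stay resident in cache. A minimal NumPy sketch of one blocked 5-point Jacobi sweep (real stencil kernels are written in C or Fortran; this only shows the tiled traversal pattern, which leaves the result identical to an unblocked sweep):

```python
import numpy as np

def stencil_sweep_blocked(grid, block=64):
    """One Jacobi sweep of a 5-point averaging stencil over the interior of a
    2-D grid, visited tile by tile. Since Jacobi reads only from the old grid
    and writes to a fresh one, the tile order does not change the answer."""
    out = grid.copy()                        # boundary values carried over
    n, m = grid.shape
    for ii in range(1, n - 1, block):
        for jj in range(1, m - 1, block):
            i_end = min(ii + block, n - 1)
            j_end = min(jj + block, m - 1)
            out[ii:i_end, jj:j_end] = 0.25 * (
                grid[ii-1:i_end-1, jj:j_end] + grid[ii+1:i_end+1, jj:j_end] +
                grid[ii:i_end, jj-1:j_end-1] + grid[ii:i_end, jj+1:j_end+1])
    return out
```

The `block` parameter is the tuning knob the paper's performance models reason about: it trades loop overhead against cache residency of the working set.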
Decentralized estimation and control of graph connectivity in mobile sensor networks
In American Control Conference, 2008
"... Abstract — The ability of a robot team to reconfigure itself is useful in many applications: for metamorphic robots to change shape, for swarm motion towards a goal, for biological systems to avoid predators, or for mobile buoys to clean up oil spills. In many situations, auxiliary constraints, such ..."
Abstract

Cited by 25 (2 self)
 Add to MetaCart
The ability of a robot team to reconfigure itself is useful in many applications: for metamorphic robots to change shape, for swarm motion towards a goal, for biological systems to avoid predators, or for mobile buoys to clean up oil spills. In many situations, auxiliary constraints, such as connectivity between team members or limits on the maximum hop count, must be satisfied during reconfiguration. In this paper, we show that both the estimation and control of the graph connectivity can be accomplished in a decentralized manner. We describe a decentralized estimation procedure that allows each agent to track the algebraic connectivity of a time-varying graph. Based on this estimator, we further propose a decentralized gradient controller for each agent to maintain global connectivity during motion.
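The quantity being tracked is the algebraic connectivity λ2, the second-smallest eigenvalue of the graph Laplacian, which is positive exactly when the graph is connected. The sketch below computes it centrally from a global adjacency matrix; the paper's contribution is precisely to estimate it without such global information.

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue (Fiedler value) of the graph Laplacian
    L = D - A, for a symmetric adjacency matrix. Positive iff connected."""
    L = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(L))[1]

# Path graph on 4 nodes: connected, so lambda_2 > 0.
path = np.array([[0, 1, 0, 0],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# Same nodes with the middle link removed: disconnected, lambda_2 == 0.
split = path.copy()
split[1, 2] = split[2, 1] = 0.0
```

A connectivity-maintaining controller works by keeping its estimate of this λ2 bounded away from zero as the graph changes.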
Recurrent motions within plane Couette turbulence
 Journal of Fluid Mechanics
"... We describe accurate computations of threedimensional periodic and relative periodic motions within plane Couette turbulence at Re = 400. To ensure that the computed solutions are true solutions of the NavierStokes equations, careful attention is paid to time discretization errors and to spatial r ..."
Abstract

Cited by 24 (5 self)
 Add to MetaCart
We describe accurate computations of three-dimensional periodic and relative periodic motions within plane Couette turbulence at Re = 400. To ensure that the computed solutions are true solutions of the Navier-Stokes equations, careful attention is paid to time discretization errors and to spatial resolution. All the computed solutions are linearly unstable. While direct numerical simulation helps us understand the statistics of turbulent fluid flows, elucidation of the geometry of turbulent flows in phase space requires the computation of steady states, traveling waves, periodic motions, and close recurrences. The computed solutions are used as a basis to discuss the manner in which the geometry of turbulent dynamics in phase space can be understood. The method used for computing these solutions is described in detail.
Underdetermined Blind Source Separation Using A Probabilistic Source Sparsity Model
In 2nd International Workshop on Independent Component Analysis and Blind Signal Separation, 2001
"... Blind source separation consists of recovering # source signals from # measurements that are an unknown function of the sources. In solving the underdetermined (###) linear problem three stages can be identified: to represent the signals in an appropriate domain, to estimate the mixing matrix, and t ..."
Abstract

Cited by 24 (2 self)
 Add to MetaCart
Blind source separation consists of recovering n source signals from m measurements that are an unknown function of the sources. In solving the underdetermined (m < n) linear problem, three stages can be identified: to represent the signals in an appropriate domain, to estimate the mixing matrix, and to invert the linear problem to estimate the sources. As a consequence of having more degrees of freedom than constraints, the inverse problem has an infinite number of solutions. To choose the "best" solution, additional constraints have to be imposed on the basis of some performance criterion or previous knowledge. In this communication we present a method that chooses the "best" demixing matrix on a sample-by-sample basis by using some previous knowledge of the statistics of the sources. The behaviour of the estimator is compared to the global pseudo-inverse approach and to other local heuristic methods by means of Monte Carlo simulations.
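The role of a sparsity prior in the underdetermined case can be sketched as follows: the minimum-norm (pseudo-inverse) solution spreads energy across all sources, whereas a solver that assumes few active sources per sample can recover them exactly. The one-active-source selection rule below is a crude stand-in for the paper's probabilistic model; the mixing matrix and data are made up.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 2, 3                       # 2 measurements, 3 sources: underdetermined
A = rng.standard_normal((m, n))   # hypothetical known mixing matrix

# A sparse source vector: only source 1 is active at this sample.
s_true = np.array([0.0, 2.0, 0.0])
x = A @ s_true                    # observed mixture

# Minimum-norm inversion: smears the energy over all three sources.
s_pinv = np.linalg.pinv(A) @ x

def sparsest_1(A, x):
    """Assume at most ONE active source per sample and pick the single
    column of A that best explains x (least-squares fit per column)."""
    best_j, best_coef, best_err = 0, 0.0, np.inf
    for j in range(A.shape[1]):
        a = A[:, j]
        coef = (a @ x) / (a @ a)
        err = np.linalg.norm(x - coef * a)
        if err < best_err:
            best_j, best_coef, best_err = j, coef, err
    s = np.zeros(A.shape[1])
    s[best_j] = best_coef
    return s

s_sparse = sparsest_1(A, x)
```

With a generic mixing matrix, the sparse rule recovers the true source exactly, while the pseudo-inverse returns a different (smaller-norm, denser) solution of the same linear system.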