Minimum Cuts and Shortest Homologous Cycles
SYMPOSIUM ON COMPUTATIONAL GEOMETRY, 2009
Abstract

Cited by 33 (11 self)
We describe the first algorithms to compute minimum cuts in surface-embedded graphs in near-linear time. Given an undirected graph embedded on an orientable surface of genus g, with two specified vertices s and t, our algorithm computes a minimum (s, t)-cut in g^{O(g)} n log n time. Except for the special case of planar graphs, for which O(n log n)-time algorithms have been known for more than 20 years, the best previous time bounds for finding minimum cuts in embedded graphs follow from algorithms for general sparse graphs. A slight generalization of our minimum-cut algorithm computes a minimum-cost subgraph in every Z_2-homology class. We also prove that finding a minimum-cost subgraph homologous to a single input cycle is NP-hard.
Homology flows, cohomology cuts
ACM SYMPOSIUM ON THEORY OF COMPUTING, 2009
Abstract

Cited by 30 (10 self)
We describe the first algorithms to compute maximum flows in surface-embedded graphs in near-linear time. Specifically, given an undirected graph embedded on an orientable surface of genus g, with two specified vertices s and t, we can compute a maximum (s, t)-flow in O(g^7 n log^2 n log^2 C) time for integer capacities that sum to C, or in (g log n)^{O(g)} n time for real capacities. Except for the special case of planar graphs, for which an O(n log n)-time algorithm has been known for 20 years, the best previous time bounds for maximum flows in surface-embedded graphs follow from algorithms for general sparse graphs. Our key insight is to optimize the relative homology class of the flow, rather than directly optimizing the flow itself. A dual formulation of our algorithm computes the minimum-cost cycle or circulation in a given (real or integer) homology class.
Maximum matchings in planar graphs via Gaussian elimination
ALGORITHMICA, 2004
Abstract

Cited by 20 (2 self)
We present a randomized algorithm for finding maximum matchings in planar graphs in time O(n^{ω/2}), where ω is the exponent of the best known matrix multiplication algorithm. Since ω < 2.38, this algorithm breaks through the O(n^{1.5}) barrier for the matching problem. This is the first result of this kind for general planar graphs. We also present an algorithm for generating perfect matchings in planar graphs uniformly at random using O(n^{ω/2}) arithmetic operations. Our algorithms are based on the Gaussian elimination approach to maximum matchings introduced in [1].
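The Gaussian-elimination connection comes via the Tutte matrix: a graph has a perfect matching iff its skew-symmetric Tutte matrix, with random field elements substituted for the indeterminates, is nonsingular with high probability (Lovász). The sketch below shows only this underlying randomized test in plain Python, not the paper's O(n^{ω/2}) planar algorithm; the prime P and the helper names are illustrative choices.

```python
import random

P = 2_147_483_647  # prime modulus; all arithmetic is over GF(P)

def det_mod_p(m, p=P):
    """Determinant of a square matrix over GF(p), by Gaussian elimination."""
    m = [row[:] for row in m]
    n = len(m)
    det = 1
    for col in range(n):
        piv = next((r for r in range(col, n) if m[r][col]), None)
        if piv is None:
            return 0                       # no pivot: singular matrix
        if piv != col:
            m[col], m[piv] = m[piv], m[col]
            det = (-det) % p               # row swap flips the sign
        det = det * m[col][col] % p
        inv = pow(m[col][col], p - 2, p)   # Fermat inverse of the pivot
        for r in range(col + 1, n):
            f = m[r][col] * inv % p
            if f:
                m[r] = [(x - f * y) % p for x, y in zip(m[r], m[col])]
    return det

def has_perfect_matching(n, edges, trials=3):
    """Randomized test: substitute random field elements into the Tutte
    matrix; the graph has a perfect matching iff some substitution yields
    a nonsingular matrix (one-sided error)."""
    for _ in range(trials):
        t = [[0] * n for _ in range(n)]
        for u, v in edges:
            x = random.randrange(1, P)
            t[u][v] = x          # skew-symmetric: T[v][u] = -T[u][v]
            t[v][u] = P - x
        if det_mod_p(t) != 0:
            return True
    return False
```

For example, the 4-cycle `[(0,1),(1,2),(2,3),(3,0)]` is reported to have a perfect matching, while the 4-vertex star `[(0,1),(0,2),(0,3)]` is not. The error is one-sided: a True answer is always correct, and each trial misses an existing matching with probability at most about n/P.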
A linear work, O(n^{1/6}) time, parallel algorithm for solving planar Laplacians
Abstract

Cited by 10 (3 self)
We present a linear work parallel iterative algorithm for solving linear systems involving Laplacians of planar graphs. In particular, if Ax = b, where A is the Laplacian of any planar graph with n nodes, the algorithm produces a vector x̄ such that ‖x − x̄‖_A ≤ ε, in O(n^{1/6+c} log(1/ε)) parallel time, doing O(n log(1/ε)) work, where c is any positive constant. One of the key ingredients of the solver is an O(nk log^2 k) work, O(k log n) time, parallel algorithm for decomposing any embedded planar graph into components of size O(k) that are delimited by O(n/√k) boundary edges. The result also applies to symmetric diagonally dominant matrices of planar structure.
The Complexity of the Algebraic Eigenproblem
1998
Abstract

Cited by 7 (1 self)
The eigenproblem for an n-by-n matrix A is the problem of approximating (within a relative error bound 2^{−b}) all the eigenvalues of the matrix A and computing the associated eigenspaces of all these eigenvalues. We show that the arithmetic complexity of this problem is bounded by O(n^3 + (n log^2 n) log b). If the characteristic and minimum polynomials of the matrix A coincide with each other (which is the case for generic matrices of all classes of general and special matrices that we consider), then the latter deterministic cost bound can be replaced by the randomized bound O(K_A(2n) + n^2 + (n log^2 n) log b), where K_A(2n) denotes the cost of computing the 2n − 1 vectors A^i v, i = 1, …, 2n − 1, maximized over all n-dimensional vectors v; K_A(2n) = O(M(n) log n), for M(n) = o(n^{2.376}) denoting the arithmetic complexity of n × n matrix multiplication. This bound on the complexity of the eigenproblem is optimal up to a logar...
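The quantity K_A(2n) above is the cost of a Krylov sequence v, Av, A²v, …: each vector is obtained from the previous one by a single matrix-vector product, never by forming matrix powers. A plain-Python illustration of this (dense arithmetic; the helper names are ours, not the paper's):

```python
def matvec(a, v):
    """Dense matrix-vector product A v."""
    return [sum(a_ij * v_j for a_ij, v_j in zip(row, v)) for row in a]

def krylov_sequence(a, v, m):
    """Return [v, A v, A^2 v, ..., A^{m-1} v] using m - 1 matrix-vector
    products; the powers A^i are never formed explicitly."""
    seq = [v]
    for _ in range(m - 1):
        seq.append(matvec(a, seq[-1]))
    return seq
```

For instance, with A = diag(2, 3) and v = (1, 1), the sequence is (1, 1), (2, 3), (4, 9).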
Algebraic algorithms
Abstract

Cited by 4 (0 self)
This article, along with [Elkadi and Mourrain 1996], explains the correlation between residue theory and the Dixon matrix, which yields an alternative method for studying and approximating all common solutions. In 1916, Macaulay [1916] constructed a matrix whose determinant is a multiple of the classical resultant for n homogeneous polynomials in n variables. The Macaulay matrix simultaneously generalizes the Sylvester matrix and the coefficient matrix of a system of linear equations [Kapur and Lakshman Y. N. 1992]. As with the Dixon formulation, the Macaulay determinant is a multiple of the resultant. Macaulay, however, proved that a certain minor of his matrix divides the matrix determinant so as to yield the exact resultant in the case of generic homogeneous polynomials. Canny [1990] invented a general method that perturbs any polynomial system and extracts a nontrivial projection operator. Using recent results pertaining to sparse polynomial systems [Gelfand et al. 1994, Sturmfels 1991], a matrix formula for computing the sparse resultant of n + 1 polynomials in n variables was given by Canny and Emiris [1993] and consequently improved in [Canny and Pedersen 1993, Emiris and Canny 1995]. The determinant of the sparse resultant matrix, like those of the Macaulay and Dixon matrices, only yields a projection operator, not the exact resultant. Here, sparsity means that only certain monomials in each of the n + 1 polynomials have nonzero coefficients. Sparsity is measured in geometric terms, namely, by the Newton polytope
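To make the Sylvester matrix mentioned above concrete: for univariate f and g of degrees m and n, it is the (m+n) × (m+n) matrix of shifted coefficient rows, and its determinant is the resultant, which vanishes exactly when f and g share a root. A small exact-arithmetic sketch (cubic-time, purely illustrative; not one of the resultant algorithms discussed in the article):

```python
from fractions import Fraction

def sylvester_resultant(f, g):
    """Resultant of two univariate polynomials via the determinant of
    their Sylvester matrix.  f, g: coefficient lists, highest degree first."""
    m, n = len(f) - 1, len(g) - 1
    size = m + n
    # n shifted copies of f's coefficients, then m shifted copies of g's
    rows = [[0] * i + f + [0] * (size - m - 1 - i) for i in range(n)]
    rows += [[0] * i + g + [0] * (size - n - 1 - i) for i in range(m)]
    # exact Gaussian elimination over the rationals
    a = [[Fraction(x) for x in row] for row in rows]
    det = Fraction(1)
    for col in range(size):
        piv = next((r for r in range(col, size) if a[r][col]), None)
        if piv is None:
            return Fraction(0)
        if piv != col:
            a[col], a[piv] = a[piv], a[col]
            det = -det
        det *= a[col][col]
        for r in range(col + 1, size):
            factor = a[r][col] / a[col][col]
            for c in range(col, size):
                a[r][c] -= factor * a[col][c]
    return det
```

For example, f = x² − 1 and g = x − 1 share the root x = 1, so their resultant is 0, while f = x² − 1 and g = x − 3 give the nonzero resultant g(1) · g(−1) = 8.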
Transformations of Matrix Structures Work Again
Abstract

Cited by 4 (3 self)
In [P90] we proposed to employ Vandermonde and Hankel multipliers to transform into each other the matrix structures of Toeplitz, Hankel, Vandermonde, and Cauchy types, as a means of extending any successful algorithm for the inversion of matrices having one of these structures to inverting the matrices with the structures of the three other types. The surprising power of this approach has been demonstrated in a number of works, which culminated in ingenious numerically stable algorithms that approximated the solution of a nonsingular Toeplitz linear system in nearly linear (versus previously cubic) arithmetic time. We first revisit this powerful method, covering it comprehensively, and then specialize it to yield a similar acceleration of the known algorithms for computations with matrices having structures of Vandermonde or Cauchy types. In particular we arrive at numerically stable approximate multipoint polynomial evaluation and interpolation in nearly linear time, by using O(bn log^h n) flops, where h = 1 for evaluation, h = 2 for interpolation, and 2^{−b} is the relative norm of the approximation errors.
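The dictionary behind these structure transformations is worth stating: multiplying a Vandermonde matrix V = (x_i^j) by a coefficient vector is multipoint polynomial evaluation, and solving a Vandermonde system is interpolation. A toy quadratic-time illustration of that correspondence (the paper's contribution is performing such computations in nearly linear time; the helper names are ours):

```python
def vandermonde_matvec(points, coeffs):
    """Multiply the Vandermonde matrix V[i][j] = points[i]**j by coeffs.
    Row i of the product is p(points[i]) for p(x) = sum_j coeffs[j] x^j,
    so the matvec IS multipoint polynomial evaluation."""
    return [sum(c * x**j for j, c in enumerate(coeffs)) for x in points]

def horner(coeffs, x):
    """Evaluate p(x) = sum_j coeffs[j] x^j by Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc
```

With points (0, 1, 2) and coefficients (1, 2, 3), i.e. p(x) = 1 + 2x + 3x², the matvec returns (1, 6, 17), matching a Horner evaluation at each point.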
Combinatorial and algebraic tools for optimal multilevel algorithms
2007
Abstract

Cited by 4 (1 self)
This dissertation presents combinatorial and algebraic tools that enable the design of the first linear work parallel iterative algorithm for solving linear systems involving Laplacian matrices of planar graphs. The major departure of this work from prior suboptimal and inherently sequential approaches is centered around: (i) the partitioning of planar graphs into fixed size pieces that share small boundaries, by means of a local "bottom-up" approach that improves the customary "top-down" approach of recursive bisection, (ii) the replacement of monolithic global preconditioners by graph approximations that are built as aggregates of miniature preconditioners. In addition, we present extensions to the theory and analysis of Steiner tree preconditioners. We construct more general Steiner graphs that lead to natural linear time solvers for classes of graphs that are known a priori to have certain structural properties. We also present a graph-theoretic approach to classical algebraic multigrid algorithms. We show that their design can be
Randomized Preprocessing of Homogeneous Linear Systems
2009
Abstract

Cited by 4 (3 self)
Our randomized preprocessing enables pivoting-free and orthogonalization-free solution of homogeneous linear systems of equations. In the case of Toeplitz inputs, we decrease the solution time from quadratic to nearly linear, and our tests show a dramatic decrease of the CPU time as well. We prove numerical stability of our randomized algorithms and extend our approach to solving nonsingular linear systems, inversion and generalized (Moore–Penrose) inversion of general and structured matrices by means of Newton's iteration, approximation of a matrix by a nearby matrix that has a smaller rank or a smaller displacement rank, matrix eigensolving, and root-finding for polynomial and secular equations. Some byproducts and extensions of our study can be of independent technical interest, e.g., our extensions of the Sherman–Morrison–Woodbury formula for matrix inversion, our estimates for the condition number of randomized matrix products, preprocessing via augmentation, and the link of preprocessing to aggregation. Key words: Linear systems of equations, Randomized preprocessing, Conditioning
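For reference, the rank-one case of the Sherman–Morrison–Woodbury formula reads (A + uvᵀ)⁻¹ = A⁻¹ − A⁻¹uvᵀA⁻¹ / (1 + vᵀA⁻¹u). A quick NumPy check on a fixed 2×2 example (a generic illustration of the classical formula, not of the paper's extensions):

```python
import numpy as np

# Fixed example: A symmetric positive definite, u v^T a rank-one update.
a = np.array([[4.0, 1.0], [1.0, 3.0]])
u = np.array([[1.0], [0.0]])
v = np.array([[0.0], [1.0]])

a_inv = np.linalg.inv(a)
denom = 1.0 + (v.T @ a_inv @ u).item()        # the scalar 1 + v^T A^{-1} u
# Sherman-Morrison: update A^{-1} instead of re-inverting A + u v^T
updated_inv = a_inv - (a_inv @ u @ v.T @ a_inv) / denom

direct = np.linalg.inv(a + u @ v.T)           # inverse computed from scratch
```

Here `updated_inv` and `direct` agree to machine precision; the update costs O(n²) once A⁻¹ is available, versus O(n³) for re-inversion.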
Fast Approximation Algorithms for Computations with Cauchy Matrices, Polynomials and Rational Functions
Proc. of the Ninth International Computer Science Symposium in Russia (CSR'2014)
Abstract

Cited by 3 (3 self)
The papers [MRT05], [CGS07], [XXG12], and [XXCB14] combine the techniques of the Fast Multipole Method of [GR87], [CGR98] with the transformations of matrix structures, traced back to [P90]. The resulting numerically stable algorithms approximate the solutions of Toeplitz, Hankel, Toeplitz-like, and Hankel-like linear systems of equations in nearly linear arithmetic time, versus the classical cubic time and the quadratic time of the previous advanced algorithms. We extend this progress to decrease the arithmetic time of the known numerical algorithms from quadratic to nearly linear for computations with a large class of matrices that have structure of Cauchy or Vandermonde type and for the evaluation and interpolation of polynomials and rational functions. We detail and analyze the new algorithms, and in [Pa] we extend them further.
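A Cauchy matrix here is C = (1/(s_i − t_j)); multiplying it by a vector v amounts to evaluating the rational function r(x) = Σ_j v_j/(x − t_j) at the points s_i, which is exactly the kind of sum the Fast Multipole Method approximates fast. A quadratic-time reference implementation for comparison (illustrative only; the cited algorithms achieve nearly linear time):

```python
def cauchy_matvec(s, t, v):
    """Multiply the Cauchy matrix C[i][j] = 1/(s[i] - t[j]) by v.
    Entry i of the result equals r(s[i]) for r(x) = sum_j v[j]/(x - t[j]),
    so the matvec is multipoint evaluation of a rational function.
    Assumes s[i] != t[j] for all i, j."""
    return [sum(vj / (si - tj) for tj, vj in zip(t, v)) for si in s]
```

For instance, with s = (2,), t = (0, 1), and v = (1, 1), the result is 1/2 + 1/1 = 1.5.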