Results 1–10 of 23
Fast Sparse Matrix Multiplication
, 2004
Abstract

Cited by 41 (3 self)
Let A and B be two n × n matrices over a ring R (e.g., the reals or the integers), each containing at most m nonzero elements. We present a new algorithm that multiplies A and B using O(m^{0.7} n^{1.2} + n^{2+o(1)}) algebraic operations (i.e., multiplications, additions and subtractions) over R. The naive matrix multiplication algorithm, on the other hand, may need to perform Θ(mn) operations to accomplish the same task. For m ≤ n^{1.14}, the new algorithm performs an almost optimal number of only n^{2+o(1)} operations. For m ≤ n^{1.68}, the new algorithm is also faster than the best known matrix multiplication algorithm for dense matrices, which uses O(n^{2.38}) algebraic operations. The new algorithm is obtained using a surprisingly straightforward combination of a simple combinatorial idea and existing fast rectangular matrix multiplication algorithms. We also obtain improved algorithms for the multiplication of more than two sparse matrices.
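The dependence of the operation count on the nonzero structure can be seen in a plain row-wise sparse multiplication. The sketch below (a standard Gustavson-style kernel, not the paper's algorithm; the dict-of-keys representation is chosen only for brevity) counts the multiplications actually performed:

```python
# Row-wise sparse matrix multiplication, counting algebraic operations.
# Matrices are dicts mapping (i, j) -> value; zeros are simply absent.

def sparse_rows(m):
    """Group a {(i, j): v} matrix by row: {i: {j: v}}."""
    rows = {}
    for (i, j), v in m.items():
        rows.setdefault(i, {})[j] = v
    return rows

def spgemm(a, b):
    """Multiply two sparse matrices; return (product, multiplication count)."""
    a_rows, b_rows = sparse_rows(a), sparse_rows(b)
    c, mults = {}, 0
    for i, arow in a_rows.items():
        acc = {}
        for k, aik in arow.items():                    # nonzeros of row i of A
            for j, bkj in b_rows.get(k, {}).items():   # nonzeros of row k of B
                acc[j] = acc.get(j, 0) + aik * bkj
                mults += 1
        for j, v in acc.items():
            if v != 0:                                 # drop explicit zeros
                c[(i, j)] = v
    return c, mults

# Two 4x4 matrices with 4 nonzeros each need only 4 multiplications,
# far below the dense n^3 = 64.
A = {(0, 0): 1, (1, 2): 2, (2, 1): 3, (3, 3): 4}
B = {(0, 1): 5, (2, 0): 6, (1, 3): 7, (3, 2): 8}
C, mults = spgemm(A, B)
```

The count performed equals the number of "compatible" nonzero pairs, which is what sparse bounds such as the one above exploit.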
Maximum matchings in planar graphs via Gaussian elimination
 ALGORITHMICA
, 2004
Abstract

Cited by 18 (2 self)
We present a randomized algorithm for finding maximum matchings in planar graphs in time O(n^{ω/2}), where ω is the exponent of the best known matrix multiplication algorithm. Since ω < 2.38, this algorithm breaks through the O(n^{1.5}) barrier for the matching problem. This is the first result of this kind for general planar graphs. We also present an algorithm for generating perfect matchings in planar graphs uniformly at random using O(n^{ω/2}) arithmetic operations. Our algorithms are based on the Gaussian elimination approach to maximum matchings introduced in [1].
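The algebraic idea behind this line of work can be sketched as follows: the rank of a graph's Tutte matrix, evaluated at random field elements, equals twice the size of a maximum matching with high probability (Lovász). A minimal Monte Carlo sketch using Gaussian elimination over GF(p), illustrative only and not the planar-specific algorithm of the paper (the prime p and the trial count are arbitrary choices):

```python
import random

def rank_mod_p(m, p):
    """Rank of a matrix over GF(p) by Gaussian elimination."""
    m = [row[:] for row in m]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if m[i][c] % p), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        inv = pow(m[r][c], p - 2, p)          # inverse via Fermat's little theorem
        m[r] = [(v * inv) % p for v in m[r]]
        for i in range(rows):
            if i != r and m[i][c]:
                f = m[i][c]
                m[i] = [(vi - f * vr) % p for vi, vr in zip(m[i], m[r])]
        r += 1
    return r

def matching_size(n, edges, p=1_000_003, trials=3):
    """Monte Carlo maximum matching size: substitute random values mod p into
    the Tutte matrix and take half its rank. Correct with high probability."""
    best = 0
    for _ in range(trials):
        t = [[0] * n for _ in range(n)]
        for (i, j) in edges:
            x = random.randrange(1, p)
            t[i][j], t[j][i] = x, (-x) % p    # skew-symmetric indeterminates
        best = max(best, rank_mod_p(t, p) // 2)
    return best

# A path on 4 vertices has a perfect matching (size 2); a triangle has size 1.
size_path = matching_size(4, [(0, 1), (1, 2), (2, 3)])
size_tri = matching_size(3, [(0, 1), (1, 2), (0, 2)])
```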
NC algorithms for comparability graphs, interval graphs, and unique perfect matching
 Proc. 5th Conf. Found. Software Technology and Theor. Comput. Sci., volume 206 of Lect. Notes in Comput. Sci
, 1985
Abstract

Cited by 12 (0 self)
László Lovász recently posed the following problem: "Is there an NC algorithm for testing if a given graph has a unique perfect matching?" We present such an algorithm for bipartite graphs. We also give NC algorithms for obtaining a transitive orientation of a comparability graph, and an interval representation of an interval graph. These enable us to obtain an NC algorithm for finding a maximum matching in an incomparability graph.
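For bipartite graphs there is also a simple sequential characterization: a bipartite graph has a unique perfect matching exactly when repeatedly matching a degree-1 vertex with its only neighbor, and deleting both, exhausts the graph. A sketch of this standard peeling test (sequential, so not itself an NC algorithm; the adjacency-dict format is an assumption):

```python
def unique_perfect_matching(adj):
    """Return the unique perfect matching of a bipartite graph, or None if
    the graph has no perfect matching or more than one.
    adj: {vertex: set(neighbors)}, symmetric."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    matching = []
    while adj:
        # A bipartite graph with a unique perfect matching always has a
        # degree-1 vertex; it must be matched to its only neighbor.
        v = next((v for v, ns in adj.items() if len(ns) == 1), None)
        if v is None:
            return None
        (u,) = adj[v]
        matching.append((min(u, v), max(u, v)))
        for w in (u, v):                          # delete both endpoints
            for x in adj.pop(w):
                if x in adj:
                    adj[x].discard(w)
    return matching

# The path 0-1-2-3 has the unique perfect matching {01, 23};
# the 4-cycle has two perfect matchings, so the test reports None.
path = unique_perfect_matching({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}})
cycle = unique_perfect_matching({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}})
```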
Randomized Õ(M(V)) Algorithms for Problems in Matching Theory
, 1997
Abstract

Cited by 12 (0 self)
A randomized (Las Vegas) algorithm is given for finding the Gallai–Edmonds decomposition of a graph. Let n denote the number of vertices, and let M(n) denote the number of arithmetic operations for multiplying two n × n matrices. The sequential running time (i.e., number of bit operations) is within a polylogarithmic factor of M(n). The parallel complexity is O((log n)^2) parallel time using a number of processors within a polylogarithmic factor of M(n). The same complexity bounds suffice for solving several other problems: (i) finding a minimum vertex cover in a bipartite graph, (ii) finding a minimum X→Y vertex separator in a directed graph, where X and Y are specified sets of vertices, (iii) finding the allowed edges (i.e., edges that occur in some maximum matching) of a graph, and (iv) finding the canonical partition of the vertex set of an elementary graph. The sequential algorithms for problems (i), (ii), and (iv) are Las Vegas, and the algorithm for problem (iii) is Monte Carlo. The new complexity bounds are significantly better than the best previous ones, e.g., using the best value of M(n) currently known, the new sequential running time is O(n^{2.38}) versus the previous best O(n^{2.5}/log n) or more.
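Problem (i) also has a classic deterministic route: by König's theorem, a minimum vertex cover of a bipartite graph can be read off from a maximum matching via alternating paths. The sketch below uses Kuhn's augmenting-path matching for brevity (a textbook construction, not the paper's randomized algorithm; the adjacency format is an assumption):

```python
def max_matching(adj_l):
    """Maximum bipartite matching via augmenting paths (Kuhn's algorithm).
    adj_l: {left vertex: iterable of right neighbors}.
    Returns {right vertex: matched left vertex}."""
    match_r = {}
    def augment(u, seen):
        for v in adj_l[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match_r or augment(match_r[v], seen):
                match_r[v] = u
                return True
        return False
    for u in adj_l:
        augment(u, set())
    return match_r

def min_vertex_cover(adj_l):
    """Minimum vertex cover via König's theorem: |cover| = |max matching|.
    Cover = (L \\ Z) | (R & Z), with Z the vertices reachable by alternating
    paths from unmatched left vertices."""
    match_r = max_matching(adj_l)
    match_l = {u: v for v, u in match_r.items()}
    z_l = {u for u in adj_l if u not in match_l}
    z_r, frontier = set(), list(z_l)
    while frontier:
        u = frontier.pop()
        for v in adj_l[u]:
            if v not in z_r and match_l.get(u) != v:   # non-matching edge L->R
                z_r.add(v)
                w = match_r.get(v)                     # matching edge R->L
                if w is not None and w not in z_l:
                    z_l.add(w)
                    frontier.append(w)
    return (set(adj_l) - z_l) | z_r

# Two left vertices sharing one right neighbor: the single vertex 'x' covers
# every edge, matching the maximum matching size of 1.
cover = min_vertex_cover({'a': ['x'], 'b': ['x']})
```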
Approximating Maximum Weight Matching in Near-Linear Time
Abstract

Cited by 11 (2 self)
Given a weighted graph, the maximum weight matching problem (MWM) is to find a set of vertex-disjoint edges with maximum weight. In the 1960s Edmonds showed that MWMs can be found in polynomial time. At present the fastest MWM algorithm, due to Gabow and Tarjan, runs in Õ(m√n) time, where m and n are the number of edges and vertices in the graph. Surprisingly, restricted versions of the problem, such as computing (1 − ε)-approximate MWMs or finding maximum cardinality matchings, are not known to be much easier (on sparse graphs). The best algorithms for these problems also run in Õ(m√n) time. In this paper we present the first near-linear time algorithm for computing (1 − ε)-approximate MWMs. Specifically, given an arbitrary real-weighted graph and ε > 0, our algorithm computes such a matching in O(mε^{-2} log^3 n) time. The previous best approximate MWM algorithm with comparable running time could only guarantee a (2/3 − ε)-approximate solution. In addition, we present a faster algorithm, running in O(m log n log ε^{-1}) time, that computes a (3/4 − ε)-approximate MWM.
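For contrast, the simplest algorithm in this family is the greedy 1/2-approximate MWM: sort edges by decreasing weight and keep any edge whose endpoints are still free. A minimal sketch (the (weight, u, v) edge format is an assumption):

```python
def greedy_mwm(edges):
    """Greedy 1/2-approximate maximum weight matching: scan edges in order of
    decreasing weight, keeping each edge whose endpoints are still unmatched.
    A classic O(m log m) baseline for the schemes discussed above."""
    matched, matching, total = set(), [], 0
    for w, u, v in sorted(edges, reverse=True):
        if u not in matched and v not in matched:
            matched.update((u, v))
            matching.append((u, v))
            total += w
    return matching, total

# Path a-b-c-d with weights 3, 4, 3: greedy takes the middle edge (weight 4),
# while the optimum takes the two outer edges (weight 6) -- the classic
# worst case showing the 1/2 ratio is nearly tight.
matching, total = greedy_mwm([(3, 'a', 'b'), (4, 'b', 'c'), (3, 'c', 'd')])
```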
Algebraic Algorithms for Matching and Matroid Problems
 SIAM JOURNAL ON COMPUTING
, 2009
Abstract

Cited by 11 (0 self)
We present new algebraic approaches for two well-known combinatorial problems: non-bipartite matching and matroid intersection. Our work yields new randomized algorithms that exceed or match the efficiency of existing algorithms. For non-bipartite matching, we obtain a simple, purely algebraic algorithm with running time O(n^ω), where n is the number of vertices and ω is the matrix multiplication exponent. This resolves the central open problem of Mucha and Sankowski (2004). For matroid intersection, our algorithm has running time O(nr^{ω−1}) for matroids with n elements and rank r that satisfy some natural conditions.
Algebraic structures and algorithms for matching and matroid problems
Abstract

Cited by 11 (2 self)
We present new algebraic approaches for several well-known combinatorial problems, including non-bipartite matching, matroid intersection, and some of their generalizations. Our work yields new randomized algorithms that are the most efficient known. For non-bipartite matching, we obtain a simple, purely algebraic algorithm with running time O(n^ω), where n is the number of vertices and ω is the matrix multiplication exponent. This resolves the central open problem of Mucha and Sankowski (2004). For matroid intersection, our algorithm has running time O(nr^{ω−1}) for matroids with n elements and rank r that satisfy some natural conditions. This algorithm is based on new algebraic results characterizing the size of a maximum intersection in contracted matroids. Furthermore, the running time of this algorithm is essentially optimal.
On the Representation and Multiplication of Hypersparse Matrices
, 2008
Abstract

Cited by 11 (7 self)
Multicore processors are marking the beginning of a new era of computing where massive parallelism is available and necessary. Slightly slower but easy to parallelize kernels are becoming more valuable than sequentially faster kernels that are unscalable when parallelized. In this paper, we focus on the multiplication of sparse matrices (SpGEMM). We first present the issues with existing sparse matrix representations and multiplication algorithms that make them unscalable to thousands of processors. Then, we develop and analyze two new algorithms that overcome these limitations. We consider our algorithms first as the sequential kernel of a scalable parallel sparse matrix multiplication algorithm and second as part of a polyalgorithm for SpGEMM that would execute different kernels depending on the sparsity of the input matrices. Such a sequential kernel requires a new data structure that exploits the hypersparsity of the individual submatrices owned by a single processor after the 2D partitioning. We experimentally evaluate the performance and characteristics of our algorithms and show that they scale significantly better than existing kernels.
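The storage issue can be made concrete: standard CSC keeps n + 1 column pointers even when almost every column is empty, so for hypersparse submatrices (nnz < n) the pointers dominate. A toy comparison in the spirit of the paper's doubly compressed format (the exact DCSC index arrays are simplified here to a dict of nonempty columns):

```python
def csc(n, triples):
    """Compressed sparse column from (row, col, val) triples: col_ptr has
    n + 1 entries even if most columns are empty."""
    triples = sorted(triples, key=lambda t: (t[1], t[0]))  # by (col, row)
    col_ptr = [0] * (n + 1)
    for _, j, _ in triples:
        col_ptr[j + 1] += 1
    for j in range(n):                # prefix sums -> column boundaries
        col_ptr[j + 1] += col_ptr[j]
    rows = [i for i, _, _ in triples]
    vals = [v for _, _, v in triples]
    return col_ptr, rows, vals

def doubly_compressed(triples):
    """Doubly compressed sketch: store only the nonempty columns, so the
    index structure is O(nnz), independent of the matrix dimension n."""
    cols = {}
    for i, j, v in triples:
        cols.setdefault(j, []).append((i, v))
    return cols

# A 1,000,000-column matrix with 3 nonzeros: CSC pays for 1,000,001 column
# pointers; the doubly compressed form stores only the 2 nonempty columns.
n = 1_000_000
triples = [(5, 2, 1.0), (7, 2, 2.0), (0, 999_999, 3.0)]
col_ptr, rows, vals = csc(n, triples)
compressed = doubly_compressed(triples)
```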
Highly Parallel Sparse Matrix-Matrix Multiplication
, 2010
Abstract

Cited by 7 (3 self)
Generalized sparse matrix-matrix multiplication is a key primitive for many high performance graph algorithms as well as some linear solvers such as multigrid. We present the first parallel algorithms that achieve increasing speedups for an unbounded number of processors. Our algorithms are based on two-dimensional block distribution of sparse matrices where serial sections use a novel hypersparse kernel for scalability. We give a state-of-the-art MPI implementation of one of our algorithms. Our experiments show scaling up to thousands of processors on a variety of test scenarios.
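The two-dimensional decomposition can be sketched sequentially: split the matrices into a q × q grid of sparse blocks and accumulate C_ij += A_ik · B_kj over the blocks, which is exactly the work a SUMMA-style parallel algorithm distributes across a processor grid. A toy sketch (naive block kernel, dict representation assumed; n must be divisible by q):

```python
def blocked_spgemm(a, b, n, q):
    """2D-blocked sparse multiply on a q x q grid of blocks, mimicking the
    per-processor structure of a SUMMA-style parallel SpGEMM. Sequential
    sketch only: the three block loops are what a real implementation
    distributes over processors. Matrices are {(i, j): value} dicts."""
    bs = n // q
    # Partition each matrix into q*q sparse blocks keyed by block coordinates.
    A, B = {}, {}
    for (i, j), v in a.items():
        A.setdefault((i // bs, j // bs), {})[(i, j)] = v
    for (i, j), v in b.items():
        B.setdefault((i // bs, j // bs), {})[(i, j)] = v
    c = {}
    for bi in range(q):
        for bj in range(q):
            for bk in range(q):       # C_ij += A_ik * B_kj, block by block
                for (i, k), av in A.get((bi, bk), {}).items():
                    for (k2, j), bv in B.get((bk, bj), {}).items():
                        if k == k2:   # naive block kernel, for clarity only
                            c[(i, j)] = c.get((i, j), 0) + av * bv
    return c

# Multiplying by the 4x4 identity on a 2x2 block grid returns B unchanged.
I4 = {(0, 0): 1, (1, 1): 1, (2, 2): 1, (3, 3): 1}
Bm = {(0, 3): 5, (2, 1): 7}
C = blocked_spgemm(I4, Bm, 4, 2)
```

In the parallel setting each (bi, bj) pair lives on one processor and the bk loop becomes a round of block broadcasts, which is where the hypersparse serial kernel above is invoked.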
Matchings, Matroids and Unimodular Matrices
, 1995
Abstract

Cited by 6 (0 self)
We focus on combinatorial problems arising from symmetric and skew-symmetric matrices. For much of the thesis we consider properties concerning the principal submatrices. In particular, we are interested in the property that every nonsingular principal submatrix is unimodular; matrices having this property are called principally unimodular. Principal unimodularity is a generalization of total unimodularity, and we generalize key polyhedral and matroidal results on total unimodularity. Highlights include a generalization of Hoffman and Kruskal's result on integral polyhedra, a generalization of Tutte's results on regular matroids, and partial results toward a decomposition theorem. Quite separate from the study of principal unimodularity we consider a particular skew-symmetric matrix of indeterminates associated with a graph. This matrix, called the Tutte matrix, was introduced by Tutte to study matchings. By considering the rank of an arbitrary submatrix of the Tutte matrix we disco...
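Principal unimodularity can be made concrete by brute force: check that every principal submatrix (same row and column index set) has determinant −1, 0, or +1. A small sketch (exponential-time, purely to illustrate the definition, not a method from the thesis):

```python
from itertools import combinations

def det_int(m):
    """Integer determinant by cofactor expansion (fine for tiny matrices)."""
    k = len(m)
    if k == 0:
        return 1
    if k == 1:
        return m[0][0]
    return sum((-1) ** c * m[0][c] *
               det_int([row[:c] + row[c + 1:] for row in m[1:]])
               for c in range(k))

def principally_unimodular(m):
    """Brute-force principal unimodularity test: every principal submatrix
    must have determinant in {-1, 0, +1} (so every nonsingular one is
    unimodular). Exponential in the matrix size."""
    n = len(m)
    for size in range(1, n + 1):
        for idx in combinations(range(n), size):
            sub = [[m[i][j] for j in idx] for i in idx]
            if det_int(sub) not in (-1, 0, 1):
                return False
    return True

# The skew-symmetric signed adjacency matrix of a 4-cycle passes the test;
# scaling an entry pair to +/-2 breaks it (a 2x2 principal determinant of 4).
ok = principally_unimodular([[0, 1, 0, -1], [-1, 0, 1, 0],
                             [0, -1, 0, 1], [1, 0, -1, 0]])
bad = principally_unimodular([[0, 2], [-2, 0]])
```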