Results 1 – 9 of 9
Fast b-Matching via Sufficient Selection Belief Propagation
Abstract

Cited by 6 (2 self)
This article describes scalability enhancements to a previously established belief propagation algorithm that solves bipartite maximum weight b-matching. The previous algorithm required O(|V| + |E|) space and O(|V||E|) time, whereas we apply improvements to reduce the space to O(|V|) and the time to O(|V|^2.5) in the expected case (though the worst-case time is still O(|V||E|)). The space improvement is most significant in cases where edge weights are determined by a function of node descriptors, such as a distance or kernel function. In practice, we demonstrate maximum weight b-matchings to be solvable on graphs with hundreds of millions of edges in only a few hours of compute time on a modern personal computer without parallelization, whereas neither the memory nor the time requirement of previously known algorithms would have allowed graphs of this scale.
Linear Programming in the Semi-streaming Model with Application to the Maximum Matching Problem
, 2012
Abstract

Cited by 5 (1 self)
In this paper we study linear-programming based approaches to the maximum matching problem in the semi-streaming model. In this model edges are presented sequentially, possibly in an adversarial order, and we are only allowed to use a small amount of space. The allowed space is near-linear in the number of vertices (and sublinear in the number of edges) of the input graph. The semi-streaming model is relevant in the context of processing very large graphs. In recent years, there have been several new and exciting results in the semi-streaming model. However, broad techniques such as linear programming have not been adapted to this model. In this paper we present several techniques to adapt and optimize linear-programming based approaches in the semi-streaming model. We use the maximum matching problem as a foil to demonstrate the effectiveness of adapting such tools in this model. As a consequence we improve almost all previous results on the semi-streaming maximum matching problem. We also prove new results on interesting variants.
Efficient algorithms for maximum weight matchings in general graphs with small edge weights
 in: Proceedings of the 23rd ACM-SIAM Symposium on Discrete Algorithms (SODA)
Abstract

Cited by 3 (0 self)
Let G = (V, E) be a graph with positive integral edge weights. Our problem is to find a matching of maximum weight in G. We present a simple iterative algorithm for this problem that uses a maximum cardinality matching algorithm as a subroutine. Using the current fastest maximum cardinality matching algorithms, we solve the maximum weight matching problem in O(W √n m log_n(n^2/m)) time, or in O(W n^ω) time with high probability, where n = |V|, m = |E|, W is the largest edge weight, and ω < 2.376 is the exponent of matrix multiplication. In relatively dense graphs, our algorithm performs better than all existing algorithms with W = o(log^1.5 n). Our technique hinges on exploiting Edmonds' matching polytope and its dual.
Scaling algorithms for approximate and exact maximum weight matching
, 2011
Abstract

Cited by 3 (0 self)
The maximum cardinality and maximum weight matching problems can be solved in time Õ(m √n), a bound that has resisted improvement despite decades of research. (Here m and n are the number of edges and vertices.) In this article we demonstrate that this "m √n barrier" is extremely fragile, in the following sense. For any ε > 0, we give an algorithm that computes a (1 − ε)-approximate maximum weight matching in O(m ε^{-1} log ε^{-1}) time, that is, optimal linear time for any fixed ε. Our algorithm is dramatically simpler than the best exact maximum weight matching algorithms on general graphs and should be appealing in all applications that can tolerate a negligible relative error. Our second contribution is a new exact maximum weight matching algorithm for integer-weighted bipartite graphs that runs in time O(m √n log N). This improves on the O(Nm √n)-time and O(m √n log(nN))-time algorithms known since the mid-1980s, for 1 ≪ log N ≪ log n. Here N is the maximum integer edge weight.
Matching with our eyes closed
 In Symposium on Foundations of Computer Science (FOCS). IEEE
, 2012
Abstract

Cited by 1 (0 self)
Motivated by an application in kidney exchange, we study the following query-commit problem: we are given the set of vertices of a non-bipartite graph G. The set of edges in this graph is not known ahead of time. We can query any pair of vertices to determine if they are adjacent. If the queried edge exists, we are committed to matching the two endpoints. Our objective is to maximize the size of the matching. This restriction on the amount of information available to the algorithm constrains us to implement myopic, greedy-like algorithms. A simple deterministic greedy algorithm achieves a factor of 1/2, which is tight for deterministic algorithms. An important open question in this direction is to give a randomized greedy algorithm that has a significantly better approximation factor. This question was first asked almost 20 years ago by Dyer and Frieze [9], who showed that a natural randomized strategy of picking edges uniformly at random doesn't help and has an approximation factor of 1/2 + o(1). They left it as an open question to devise a better randomized greedy algorithm. In subsequent work, Aronson, Dyer, Frieze, and Suen [2] gave a different randomized greedy algorithm and showed that it attains a factor of 0.5 + ε, where ε is 0.0000025. In this paper we propose and analyze a new randomized greedy algorithm for finding a large matching in a general graph and use it to solve the query-commit problem mentioned above. We show that our algorithm attains a factor of at least 0.56, a significant improvement over 0.50000025. We also show that no randomized algorithm can have an approximation factor better than 0.7916 for the query-commit problem. For another large and interesting class of randomized algorithms that we call vertex-iterative algorithms, we show that no vertex-iterative algorithm can have an approximation factor better than 0.75.
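The query-commit setting described in this abstract can be illustrated with a short sketch. The code below mocks the *uniform-random* greedy that Dyer and Frieze analyzed (factor 1/2 + o(1)); it is not the paper's improved 0.56-factor algorithm, and the function name and edge oracle are illustrative assumptions.

```python
import random

def greedy_query_commit(vertices, has_edge):
    """Illustrative sketch of the query-commit setting: probe vertex
    pairs in uniformly random order; whenever a probed edge exists,
    we are committed to matching its endpoints."""
    matched = set()
    matching = []
    pairs = [(u, v) for i, u in enumerate(vertices) for v in vertices[i + 1:]]
    random.shuffle(pairs)  # probe pairs in uniformly random order
    for u, v in pairs:
        if u in matched or v in matched:
            continue  # an endpoint is already committed elsewhere
        if has_edge(u, v):  # query the adjacency oracle; if the edge
            matching.append((u, v))  # exists, we must match u and v
            matched.update((u, v))
    return matching
```

Because committed edges can never be undone, the result is always a maximal matching of the hidden graph, which is what yields the 1/2 guarantee for greedy strategies.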
Learning with Degree-Based Subgraph Estimation
Abstract
Networks and their topologies are critical to nearly every aspect of modern life, with social networks governing human interactions and computer networks governing global information flow. Network behavior is inherently structural, and thus modeling data from networks benefits from explicitly modeling structure. This thesis covers methods for and analysis of machine learning from network data while explicitly modeling one important measure of structure: degree. Central to this work is a procedure for exact maximum likelihood estimation of a distribution over graph structure, where the distribution factorizes into edge likelihoods for each pair of nodes and degree likelihoods for each node. This thesis provides a novel method for exact estimation of the maximum likelihood edge structure under the distribution. The algorithm solves the optimization by constructing an augmented graph containing, in addition to the original nodes, auxiliary nodes whose edges encode the degree potentials. The exact solution is then recoverable by finding the maximum weight b-matching on the augmented graph, a well-studied combinatorial optimization. To solve the combinatorial optimization, this thesis focuses in particular on a belief propagation-based approach to finding the optimal b-matching and provides a novel proof of convergence for belief propagation on the loopy graphical model representing the b-matching objective. Additionally, …
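Since maximum weight b-matching is central to this entry (and to the belief propagation work in the first result), a tiny brute-force sketch of the objective itself may help: choose an edge subset of maximum total weight such that each node v lies on at most b[v] chosen edges. This enumeration is exponential and purely illustrative, with all names hypothetical; it is not the thesis's belief propagation solver.

```python
from itertools import combinations

def max_weight_b_matching(edges, weight, b):
    """Brute-force illustration of the maximum weight b-matching
    objective: maximize total edge weight subject to each node v
    appearing in at most b[v] selected edges. Tiny graphs only."""
    best, best_w = [], 0.0
    for k in range(len(edges) + 1):
        for subset in combinations(edges, k):
            degree = {}
            for u, v in subset:
                degree[u] = degree.get(u, 0) + 1
                degree[v] = degree.get(v, 0) + 1
            if any(degree[x] > b[x] for x in degree):
                continue  # degree constraint violated; skip subset
            w = sum(weight[e] for e in subset)
            if w > best_w:
                best, best_w = list(subset), w
    return best, best_w
```

With b[v] = 1 for every node this reduces to ordinary maximum weight matching; larger b[v] values are what let the augmented-graph construction described above encode degree potentials.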
A Near-Linear Time ε-Approximation Algorithm for Geometric Bipartite Matching
Abstract
For point sets A, B ⊂ R^d, |A| = |B| = n, and for a parameter ε > 0, we present a Monte Carlo algorithm that computes, in O(n poly(log n, 1/ε)) time, an ε-approximate perfect matching of A and B under any L_p-norm with high probability; the previously best known algorithm takes Ω(n^{3/2}) time. We approximate the L_p-norm using a distance function d(·, ·) based on a randomly shifted quadtree. The algorithm iteratively generates an approximate minimum-cost augmenting path under d(·, ·) in time proportional, within a polylogarithmic factor, to the length of the path. We show that the total length of the augmenting paths generated by the algorithm is O((n/ε) log n), implying that the running time of our algorithm is O(n poly(log n, 1/ε)).