Results 1–9 of 9
Approximating Maximum Weight Matching in Near-linear Time
Abstract

Cited by 10 (1 self)
Given a weighted graph, the maximum weight matching problem (MWM) is to find a set of vertex-disjoint edges with maximum weight. In the 1960s Edmonds showed that MWMs can be found in polynomial time. At present the fastest MWM algorithm, due to Gabow and Tarjan, runs in Õ(m√n) time, where m and n are the number of edges and vertices in the graph. Surprisingly, restricted versions of the problem, such as computing (1 − ε)-approximate MWMs or finding maximum cardinality matchings, are not known to be much easier (on sparse graphs). The best algorithms for these problems also run in Õ(m√n) time. In this paper we present the first near-linear time algorithm for computing (1 − ε)-approximate MWMs. Specifically, given an arbitrary real-weighted graph and ε > 0, our algorithm computes such a matching in O(mε⁻² log³ n) time. The previous best approximate MWM algorithm with comparable running time could only guarantee a (2/3 − ε)-approximate solution. In addition, we present a faster algorithm, running in O(m log n log ε⁻¹) time, that computes a (3/4 − ε)-approximate MWM.
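To make the problem statement concrete, here is a minimal brute-force sketch of MWM (exponential time, for illustration only; the graph and its weights are invented):

```python
from itertools import combinations

# Tiny invented weighted graph as (u, v, weight) triples.
edges = [(1, 2, 4), (2, 3, 6), (3, 4, 4), (1, 4, 1)]

def is_matching(subset):
    """A matching touches each vertex at most once."""
    seen = set()
    for u, v, _ in subset:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

def max_weight_matching(edges):
    """Exhaustive search over all edge subsets; keeps the heaviest matching."""
    best, best_w = (), 0
    for r in range(1, len(edges) + 1):
        for subset in combinations(edges, r):
            if is_matching(subset):
                w = sum(e[2] for e in subset)
                if w > best_w:
                    best, best_w = subset, w
    return best, best_w

matching, weight = max_weight_matching(edges)
print(weight)  # {(1,2), (3,4)} has weight 8
```

Note that the optimum {(1, 2), (3, 4)} of weight 8 beats greedily taking the heaviest edge (2, 3) first, which would cap the total at 6 + 1 = 7; controlling exactly this kind of gap is what a (1 − ε)-approximation guarantee is about.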
Local Distributed Decision
 In FOCS 2011
Abstract

Cited by 9 (7 self)
A central theme in distributed network algorithms concerns understanding and coping with the issue of locality. Despite considerable progress, research efforts in this direction have not yet resulted in a solid basis in the form of a fundamental computational complexity theory for locality. Inspired by sequential complexity theory, we focus on a complexity theory for distributed decision problems. In the context of locality, solving a decision problem requires the processors to independently inspect their local neighborhoods and then collectively decide whether a given global input instance belongs to some specified language. We consider the standard LOCAL model of computation and define LD(t) (for local decision) as the class of decision problems that can be solved in t communication rounds. We first study the intriguing question of whether randomization helps in local distributed computing, and to what extent. Specifically, we define the corresponding randomized class BPLD(t, p, q), containing all languages for which there exists a randomized algorithm that runs in t rounds, accepts correct instances with probability at least p and rejects incorrect ones with probability at least q. We show that p² + q = 1 is a threshold for the containment of LD(t) in BPLD(t, p, q). More precisely, we show that there exists a language that does not belong to LD(t) for any t = o(n) but does belong to BPLD(0, p, q) for any p, q ∈ (0, 1] such that p² + q ≤ 1. On the other hand, we show that, restricted to
Distributed Fractional Packing and Maximum Weighted b-Matching via Tail-Recursive Duality
Abstract

Cited by 5 (2 self)
Abstract. We present efficient distributed δ-approximation algorithms for fractional packing and maximum weighted b-matching in hypergraphs, where δ is the maximum number of packing constraints in which a variable appears (for maximum weighted b-matching, δ is the maximum edge degree; for graphs δ = 2). (a) For δ = 2 the algorithm runs in O(log m) rounds in expectation and with high probability. (b) For general δ, the algorithm runs in O(log² m) rounds in expectation and with high probability. 1 Background and results. Given a weight vector w ∈ ℝ₊^m, a coefficient matrix A ∈ ℝ^{n×m} and a vector b ∈ ℝ₊^n, the fractional packing problem is to compute a vector x ∈ ℝ₊^m to maximize ∑_{j=1}^m w_j x_j and at the same time meet all the constraints ∑_{j=1}^m A_{ij} x_j ≤ b_i (∀i = 1…n). We use δ to denote the maximum number of packing constraints in which a variable appears, that is, δ = max_j |{i : A_{ij} ≠ 0}|. In the centralized setting, fractional packing
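The parameter δ can be read directly off the constraint matrix. A minimal sketch, on a small invented instance:

```python
# Fractional packing instance: maximize w·x subject to A x <= b, x >= 0.
# Rows of A are packing constraints, columns are variables (invented values).
A = [
    [1, 0, 2],
    [0, 3, 1],
    [4, 0, 0],
]

def max_constraint_degree(A):
    """delta = max over variables j of |{i : A[i][j] != 0}|, i.e. the largest
    number of constraints any single variable appears in."""
    n_vars = len(A[0])
    return max(sum(1 for row in A if row[j] != 0) for j in range(n_vars))

print(max_constraint_degree(A))  # columns 0 and 2 each hit two constraints -> 2
```

For maximum weighted b-matching on an ordinary graph, each column of A is an edge appearing in exactly the two capacity constraints of its endpoints, which is why δ = 2 in that case.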
Social Content Matching in MapReduce
Abstract

Cited by 3 (1 self)
Matching problems are ubiquitous. They occur in economic markets, labor markets, internet advertising, and elsewhere. In this paper we focus on an application of matching for social media. Our goal is to distribute content from information suppliers to information consumers. We seek to maximize the overall relevance of the matched content from suppliers to consumers while regulating the overall activity, e.g., ensuring that no consumer is overwhelmed with data and that all suppliers have chances to deliver their content. We propose two matching algorithms, GreedyMR and StackMR, geared for the MapReduce paradigm. Both algorithms have provable approximation guarantees, and in practice they produce high-quality solutions. While both algorithms scale extremely well, we can show that StackMR requires only a polylogarithmic number of MapReduce steps, making it an attractive option for applications with very large datasets. We experimentally show the tradeoffs between quality and efficiency of our solutions on two large datasets coming from real-world social-media web sites.
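The abstract does not spell out GreedyMR, but a plain sequential greedy matcher under per-node capacity constraints conveys the flavor of the objective: maximize total relevance without overwhelming any participant. All names and numbers below are invented:

```python
# Invented toy data: (supplier, consumer, relevance) triples plus per-node
# capacity limits mimicking the paper's setting; not the actual GreedyMR code.
edges = [("s1", "c1", 0.9), ("s1", "c2", 0.7), ("s2", "c1", 0.6), ("s2", "c2", 0.8)]
capacity = {"s1": 1, "s2": 1, "c1": 1, "c2": 1}

def greedy_b_matching(edges, capacity):
    """Scan edges by descending relevance; take an edge whenever both
    endpoints still have spare capacity."""
    spare = dict(capacity)
    chosen = []
    for s, c, rel in sorted(edges, key=lambda e: -e[2]):
        if spare[s] > 0 and spare[c] > 0:
            chosen.append((s, c, rel))
            spare[s] -= 1
            spare[c] -= 1
    return chosen

print(greedy_b_matching(edges, capacity))
```

Here the greedy scan picks (s1, c1) and then (s2, c2), for total relevance 1.7; the capacities are what "regulating the overall activity" amounts to in this simplified view.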
Bipartite graph matching computation on GPU
 in Proc. Intl. Conference Energy Minimization Methods in Computer Vision and Pattern Recognition
Abstract

Cited by 2 (0 self)
Abstract. The Bipartite Graph Matching Problem is a well studied topic in Graph Theory. Such a matching relates pairs of nodes from two distinct sets by selecting a subset of the graph edges connecting them, such that no two selected edges share an endpoint. When the considered graph has huge sets of nodes and edges the sequential approaches are impractical, especially for applications demanding fast results. In this paper we investigate how to compute such a matching on Graphics Processing Units (GPUs), motivated by their increasing processing power made available with decreasing costs. We present a new data-parallel approach for computing bipartite graph matching that is efficiently computed on today’s graphics hardware and apply it to solve the correspondence between 3D samples taken over a time interval.
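One way such a computation becomes data-parallel is a propose/accept round in which every node acts independently, so each phase maps onto one GPU thread per node. The sketch below is a generic heuristic of this kind with invented weights, not the paper's actual kernel:

```python
# Invented bipartite instance: weights[left_node][right_node] = edge weight.
weights = {
    "l1": {"r1": 5, "r2": 1},
    "l2": {"r1": 4, "r2": 3},
}

def propose_accept_round(weights):
    """One round of a generic parallel matching heuristic: both phases are
    independent per node, which is what makes them GPU-friendly."""
    # Phase 1 (parallel over left nodes): each proposes to its heaviest neighbor.
    proposals = {l: max(nbrs, key=nbrs.get) for l, nbrs in weights.items()}
    # Phase 2 (parallel over right nodes): each accepts its heaviest proposer.
    matched = {}
    for r in set(proposals.values()):
        proposers = [l for l, target in proposals.items() if target == r]
        best = max(proposers, key=lambda l: weights[l][r])
        matched[best] = r
    return matched

print(propose_accept_round(weights))  # both propose to r1; r1 accepts l1
```

A single round can leave nodes unmatched (l2 here); repeating the round on the residual graph extends the matching, still with one thread per node.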
Distributed Approximation of Cellular Coverage
Abstract

Cited by 1 (1 self)
Abstract. We consider the following model of cellular networks. Each base station has a given finite capacity, and each client has some demand and profit. A client can be covered by a specific subset of the base stations, and its profit is obtained only if its demand is provided in full. The goal is to assign clients to base stations so that the overall profit is maximized subject to base station capacity constraints. In this work we present a distributed algorithm for the problem that runs in polylogarithmic time and guarantees an approximation ratio close to the best known ratio achievable by a centralized algorithm.
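A centralized greedy baseline makes the all-or-nothing profit model concrete. The instance is invented, and this is not the paper's distributed algorithm:

```python
# Invented instance: clients with (demand, profit, coverable stations);
# stations with capacities.
clients = {
    "a": {"demand": 3, "profit": 9, "stations": ["s1"]},
    "b": {"demand": 2, "profit": 4, "stations": ["s1", "s2"]},
    "c": {"demand": 4, "profit": 4, "stations": ["s2"]},
}
capacity = {"s1": 4, "s2": 4}

def greedy_cover(clients, capacity):
    """Assign each client in full (or not at all) to one covering station with
    enough spare capacity, scanning clients by profit per unit of demand."""
    spare = dict(capacity)
    assignment = {}
    order = sorted(clients, key=lambda k: -clients[k]["profit"] / clients[k]["demand"])
    for name in order:
        cl = clients[name]
        for st in cl["stations"]:
            if spare[st] >= cl["demand"]:
                assignment[name] = st
                spare[st] -= cl["demand"]
                break
    return assignment

print(greedy_cover(clients, capacity))
```

Client c stays unserved: its full demand of 4 no longer fits on s2, and partial service earns nothing under the all-or-nothing rule. That indivisibility is what separates this problem from plain fractional assignment.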
Towards a Complexity Theory for Local Distributed Computing
Abstract
A central theme in distributed network algorithms concerns understanding and coping with the issue of locality. Yet despite considerable progress, research efforts in this direction have not yet resulted in a solid basis in the form of a fundamental computational complexity theory for locality. Inspired by sequential complexity theory, we focus on a complexity theory for distributed decision problems. In the context of locality, solving a decision problem requires the processors to independently inspect their local neighborhoods and then collectively decide whether a given global input instance belongs to some specified language. We consider the standard LOCAL model of computation and define LD(t) (for local decision) as the class of decision problems that can be solved in t communication rounds. We first study the intriguing question of whether randomization helps in local distributed computing, and to what extent. Specifically, we define the corresponding randomized class BPLD(t, p, q), containing all languages for which there exists a randomized algorithm that runs in t rounds, accepts correct instances with probability at least p, and rejects incorrect ones with probability at least q. We
Trading Bit, Message, and Time Complexity of Distributed Algorithms
Abstract
We present tradeoffs between time complexity t, bit complexity b, and message complexity m. Two communicating parties can exchange Θ(m log(tb/m²) + b) bits of information for m < √(bt) and Θ(b) for m ≥ √(bt). This allows us to derive lower bounds on the time complexity of distributed algorithms, as we demonstrate for the MIS and coloring problems. We reduce the bit complexity of the state-of-the-art O(∆)-coloring algorithm without changing its time and message complexity. We also give techniques for several problems that require a time increase of t^c (for an arbitrary constant c) to cut both bit and message complexity by Ω(log t). This improves on the traditional time-coding technique, which does not allow cutting message complexity.