Results 1 – 8 of 8
Regularity lemmas and combinatorial algorithms
 In Proc. FOCS
Abstract

Cited by 10 (2 self)
Abstract — We present new combinatorial algorithms for Boolean matrix multiplication (BMM) and for preprocessing a graph to answer independent set queries. We give the first asymptotic improvements on combinatorial algorithms for dense BMM in many years, improving on the "Four Russians" O(n^3/(w log n)) bound for machine models with word size w. (For a pointer machine, we can set w = log n.) The algorithms utilize notions from Regularity Lemmas for graphs in a novel way. We give two randomized combinatorial algorithms for BMM. The first algorithm is essentially a reduction from BMM to the Triangle Removal Lemma. The best known bounds for the Triangle Removal Lemma only imply an O((n^3 log β)/(βw log n)) time algorithm for BMM, where β = (log* n)^δ for some δ > 0, but improvements on the Triangle Removal Lemma would yield corresponding runtime improvements. The second algorithm applies the Weak Regularity Lemma of Frieze and Kannan along with several information compression ideas, running in O(n^3 (log log n)^2/(log n)^{9/4}) time with probability exponentially close to 1. When w ≥ log n, it can be implemented in O(n^3 (log log n)^2/(w (log n)^{7/6})) time. Our results immediately imply improved combinatorial methods for CFG parsing, detecting triangle-freeness, and transitive closure. Using Weak Regularity, we also give an algorithm for answering queries of the form "is S ⊆ V an independent set?" in a graph. Improving on prior work, we show how to randomly preprocess a graph in O(n^{2+ε}) time (for all ε > 0) so that with high probability, all subsequent batches of log n independent set queries can be answered deterministically in O(n^2 (log log n)^2/(log n)^{5/4}) time. When w ≥ log n, w queries can be answered in O(n^2 (log log n)^2/(log n)^{7/6}) time. In addition to its nice applications, this problem is interesting in that it is not known how to do better than O(n^2) using "algebraic" methods.
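As a point of reference for the w factor in these bounds, here is a minimal Python sketch (not the paper's algorithm) of word-packed BMM: each row of B is stored as one bitset, so a single OR handles w entries of a row at once, which is the source of the w speedup in the Four Russians bound. The matrix representation is an illustrative assumption.

```python
def bmm_bitset(A, B):
    """Boolean product C = A*B for n x n 0/1 matrices given as lists of lists.

    A minimal word-packed sketch: Python ints serve as arbitrary-width bitsets,
    so each OR below processes a whole row of B in one operation.
    """
    n = len(A)
    # Pack each row of B into one integer (bit j holds B[k][j]).
    B_rows = [sum(bit << j for j, bit in enumerate(row)) for row in B]
    C = []
    for i in range(n):
        acc = 0
        for k in range(n):
            if A[i][k]:          # OR in row k of B wherever A[i][k] = 1
                acc |= B_rows[k]
        C.append([(acc >> j) & 1 for j in range(n)])
    return C
```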
Networks Cannot Compute Their Diameter in Sublinear Time
, 2011
Abstract

Cited by 10 (2 self)
We study the problem of computing the diameter of a network in a distributed way. The model of distributed computation we consider is: in each synchronous round, each node can transmit a different (but short) message to each of its neighbors. We provide an Ω̃(n) lower bound on the number of communication rounds needed, where n denotes the number of nodes in the network. This lower bound is valid even if the diameter of the network is a small constant. We also show that a (3/2 − ε)-approximation of the diameter requires Ω̃(√n) rounds. Furthermore, we use our new technique to prove an Ω̃(√n) lower bound on approximating the girth of a graph within a factor of 2 − ε.
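For contrast with this distributed lower bound, the centralized baseline is straightforward: the exact diameter via one BFS per node, in O(nm) total time. A minimal sketch (the adjacency-dict representation is an illustrative assumption):

```python
from collections import deque

def diameter(adj):
    """Exact diameter of a connected unweighted graph given as {v: [neighbors]}."""
    best = 0
    for s in adj:
        # BFS from s computes all distances from s.
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best
```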
Efficient Algorithms for Path Problems in Weighted Graphs
, 2008
Abstract

Cited by 2 (0 self)
Problems related to computing optimal paths have been abundant in computer science since its emergence as a field. Yet for a large number of such problems we still do not know whether the state-of-the-art algorithms are the best possible. A notable example of this phenomenon is the all-pairs shortest paths problem in a directed graph with real edge weights. The best algorithm (modulo small polylogarithmic improvements) for this problem runs in cubic time, a running time known since the 1960s (by Floyd and Warshall). Our grasp of many such fundamental algorithmic questions is far from optimal, and the major goal of this thesis is to bring some new insights into efficiently solving path problems in graphs. We focus on several path problems optimizing different measures: shortest paths, maximum bottleneck paths, minimum nondecreasing paths, and various extensions. For the all-pairs versions of these path problems we use an algebraic approach. We obtain improved algorithms using reductions
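The cubic-time algorithm cited above is Floyd-Warshall; a minimal sketch for concreteness (matrix conventions are illustrative assumptions):

```python
def floyd_warshall(dist):
    """All-pairs shortest paths in O(n^3) time.

    dist is an n x n matrix with dist[i][i] = 0 and float('inf') where
    there is no direct edge; a fresh matrix of distances is returned.
    """
    n = len(dist)
    d = [row[:] for row in dist]
    for k in range(n):           # allow intermediate nodes 0..k
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```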
Max flows in O(nm) time, or better
, 2012
Abstract

Cited by 1 (0 self)
In this paper, we present improved polynomial-time algorithms for the max flow problem defined on a network with n nodes and m arcs. We show how to solve the max flow problem in O(nm) time, improving upon the best previous algorithm due to King, Rao, and Tarjan, who solved the max flow problem in O(nm log_{m/(n log n)} n) time. In the case that m = O(n), we improve the running time to O(n^2/log n). We further improve the running time in the case that U* = Umax/Umin is not too large, where Umax denotes the largest finite capacity and Umin denotes the smallest nonzero capacity. If log(U*) = O(n^{1/3} (log n)^{−3}), we show how to solve the max flow problem in O(nm/log n) steps. In the case that log(U*) = O(log^k n) for some fixed positive integer k, we show how to solve the max flow problem in Õ(n^{8/3}) time. This latter algorithm relies on a subroutine for fast matrix multiplication.
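The algorithms above are well beyond a short sketch, but the classical baseline they improve on can be illustrated. Here is a minimal Edmonds-Karp implementation (shortest augmenting paths, O(nm^2) time; not the paper's method, and the dict-of-dicts capacity representation is an illustrative assumption):

```python
from collections import deque

def max_flow(cap, s, t):
    """Max s-t flow by Edmonds-Karp; cap is a dict-of-dicts of capacities,
    mutated in place into residual capacities."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Recover the path and its bottleneck capacity.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)
        # Augment: decrease forward residuals, increase reverse residuals.
        for u, v in path:
            cap[u][v] -= b
            cap.setdefault(v, {})[u] = cap.get(v, {}).get(u, 0) + b
        flow += b
```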
Fast Approximation Algorithms for the Diameter and Radius of Sparse Graphs
Abstract

Cited by 1 (0 self)
The diameter and the radius of a graph are fundamental topological parameters that have many important practical applications in real-world networks. The fastest combinatorial algorithm for both parameters works by solving the all-pairs shortest paths problem (APSP) and has a running time of Õ(mn) in m-edge, n-node graphs. In a seminal paper, Aingworth, Chekuri, Indyk and Motwani [SODA'96 and SICOMP'99] presented an algorithm that computes in Õ(m√n + n^2) time an estimate D̂ for the diameter D, such that ⌊2D/3⌋ ≤ D̂ ≤ D. Their paper spawned a long line of research on approximate APSP. For the specific problem of diameter approximation, however, no improvement has been achieved in over 15 years. Our paper presents the first improvement over the diameter approximation algorithm of Aingworth et al., producing an algorithm with the same estimate but with an expected running time of Õ(m√n). We thus show that for all sparse enough graphs, the diameter can be 3/2-approximated in o(n^2) time. Our algorithm is obtained using a surprisingly simple method of neighborhood depth estimation that is strong enough to also approximate, in the same running time, the radius and, more generally, all of the eccentricities, i.e., for every node the distance to its furthest node. We also provide strong evidence that our diameter approximation result may be hard to improve. We show that if for some constant ε > 0 there is an O(m^{2−ε}) time (3/2 − ε)-approximation algorithm for the diameter of undirected unweighted graphs, then there is an O*((2 − δ)^n) time algorithm for CNF-SAT on n variables for some constant δ > 0, and the strong exponential time hypothesis of [Impagliazzo, Paturi, Zane JCSS'01] is false.
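For intuition about the approximation ratios involved: a single BFS already gives a 2-approximation, since the eccentricity e of any node satisfies e ≤ D ≤ 2e by the triangle inequality. A minimal sketch of that trivial baseline (the 3/2-approximation above is substantially more involved):

```python
from collections import deque

def ecc(adj, s):
    """Eccentricity of s in a connected unweighted graph {v: [neighbors]}:
    the maximum BFS distance from s to any node."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

# One O(m)-time BFS from any node s yields e = ecc(adj, s) with e <= D <= 2e.
```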
Chapter 11
Abstract
This chapter is on "hard" problems in distributed computing. In sequential computing, there are NP-hard problems, which are conjectured to take exponential time. Is there something similar in distributed computing? Using flooding/echo (Algorithms 11 and 12) from Chapter 3, everything so far was solvable basically in O(D) time, where D is the diameter of the network.
CHAPTER 11. HARD PROBLEMS
Abstract
getting delayed in some nodes but not in others. The flooding might not use edges of a BFS tree anymore! These floodings might not compute correct distances anymore! On the other hand, we know that the maximal message size in Algorithm 45 is O(n log n). So we could just simulate each of these "big message" rounds by n "small message" rounds using small messages. This yields a runtime of O(nD), which is not desirable. A third possible approach, starting each flooding/echo one after another, results in O(nD) in the worst case as well. So let us fix the above algorithm! The key idea is to arrange the flooding/echo processes in a more organized way: start the flooding processes in a certain order and prove that at any time, each node is only involved in one flooding. This is realized in Algorithm 46. Definition 11.1 (BFS_v). Performing a breadth-first search at node v produces the spanning tree BFS_v (see Chapter 3). This takes time O(D) using small messages.
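The O(D)-round flooding underlying BFS_v can be illustrated with a toy synchronous-round simulator (a centralized sketch for intuition only, not the chapter's Algorithm 46): in each round, every frontier node sends one small message to each neighbor, so the flood reaches all nodes after ecc(root) rounds.

```python
def flood(adj, root):
    """Simulate synchronous-round flooding (BFS) from root in a connected
    unweighted graph {v: [neighbors]}; return (dist, rounds elapsed)."""
    dist = {root: 0}
    frontier = [root]
    rounds = 0
    while frontier:
        nxt = []
        for u in frontier:          # each frontier node messages its neighbors
            for v in adj[u]:
                if v not in dist:   # v hears the flood for the first time
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        if nxt:
            rounds += 1             # one synchronous round per frontier layer
        frontier = nxt
    return dist, rounds
```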