Results 1–10 of 20
Popular conjectures imply strong lower bounds for dynamic problems
 CoRR
Abstract

Cited by 11 (3 self)
We consider several well-studied problems in dynamic algorithms and prove that sufficient progress on any of them would imply a breakthrough on one of five major open problems in the theory of algorithms: 1) Is the 3SUM problem on n numbers in O(n^{2−ε}) time for some ε > 0? 2) Can one determine the satisfiability of a CNF formula on n variables and poly(n) clauses in O((2 − ε)^n · poly(n)) time for some ε > 0? 3) Is the All Pairs Shortest Paths problem for graphs on n vertices in O(n^{3−ε}) time for some ε > 0? 4) Is there a linear time algorithm that detects whether a given graph contains a triangle? 5) Is there an O(n^{3−ε}) time combinatorial algorithm for n×n Boolean matrix multiplication? The problems we consider include dynamic versions of bipartite perfect matching, bipartite maximum weight matching, single source reachability, single source shortest paths, strong connectivity, subgraph connectivity, diameter approximation, and some non-graph problems such as Pagh's problem defined in a recent paper by Pǎtraşcu [STOC 2010]. Index Terms—dynamic algorithms; all pairs shortest paths; 3SUM; lower bounds.
Threesomes, degenerates, and love triangles
 In Proc. 55th Annu. IEEE Sympos. Found. Comput. Sci. (FOCS
, 2014
Abstract

Cited by 3 (1 self)
The 3SUM problem is to decide, given a set of n real numbers, whether any three sum to zero. It is widely conjectured that a trivial O(n^2)-time algorithm is optimal, and over the years the consequences of this conjecture have been revealed. This 3SUM conjecture implies Ω(n^2) lower bounds on numerous problems in computational geometry, and a variant of the conjecture implies strong lower bounds on triangle enumeration, dynamic graph algorithms, and string matching data structures. In this paper we refute the 3SUM conjecture. We prove that the decision tree complexity of 3SUM is O(n^{3/2} √(log n)) and give two subquadratic 3SUM algorithms, a deterministic one running in O(n^2 / (log n / log log n)^{2/3}) time and a randomized one running in O(n^2 (log log n)^2 / log n) time with high probability. Our results lead directly to improved bounds for k-variate linear degeneracy testing for all odd k ≥ 3. The problem is to decide, given a linear function f(x_1, …, x_k) = α_0 + Σ_{1≤i≤k} α_i x_i and a set A ⊂ ℝ, whether 0 ∈ f(A^k). We show the decision tree complexity of this problem is O(n^{k/2} √(log n)). Finally, we give a subcubic algorithm for a generalization of the (min,+)-product over real-valued matrices and apply it to the problem of finding zero-weight triangles in weighted graphs. We give a depth-O(n^{5/2} √(log n)) decision tree for this problem, as well as an algorithm running in time O(n^3 (log log n)^2 / log n).
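For reference, the "trivial" quadratic baseline the abstract refers to can be sketched as follows; this is the classic sort-and-two-pointer O(n^2) algorithm, not the paper's subquadratic one:

```python
def has_3sum(nums):
    """Decide whether any three (distinct-index) numbers in `nums` sum to zero.

    Classic O(n^2) approach: sort, then for each element run a two-pointer
    scan over the remainder. This is the baseline whose optimality the 3SUM
    conjecture posited and which the paper improves on."""
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1   # total too small: advance the left pointer
            else:
                hi -= 1   # total too large: retreat the right pointer
    return False
```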
More applications of the polynomial method to algorithm design
, 2015
Abstract

Cited by 3 (2 self)
In low-depth circuit complexity, the polynomial method is a way to prove lower bounds by translating weak circuits into low-degree polynomials, then analyzing properties of these polynomials. Recently, this method found an application to algorithm design: Williams (STOC 2014) used it to compute all-pairs shortest paths in n^3 / 2^{Ω(√log n)} time on dense n-node graphs. In this paper, we extend this methodology to solve a number of problems in combinatorial pattern matching and Boolean algebra, considerably faster than previously known methods. First, we give an algorithm for BOOLEAN ORTHOGONAL DETECTION, which is to detect among two sets A, B ⊆ {0, 1}^d of size n if there is an x ∈ A and y ∈ B such that ⟨x, y⟩ = 0. For vectors of dimension d = c(n) log n, we solve BOOLEAN ORTHOGONAL DETECTION in n^{2−1/O(log c(n))} time by a Monte Carlo randomized algorithm. We apply this as a subroutine in several other new algorithms:
• In BATCH PARTIAL MATCH, we are given n query strings from {0, 1, ?}^{c(n) log n} (? is a "don't care"), n strings from {0, 1}^{c(n) log n}, and wish to determine for each query whether or not there is a string matching the query. We solve this problem in n^{2−1/O(log c(n))} time by a Monte Carlo randomized algorithm.
• Let t ≤ v be integers. Given a DNF F on c log t variables with t terms, and v arbitrary assignments on the variables, F can be evaluated on all v assignments in v · t^{1−1/O(log c)} time, with high probability.
• There is a randomized algorithm that solves the Longest Common Substring with don't cares problem on two strings of length n in n^2 / 2^{Ω(√log n)} time.
• Given two strings S, T of length n, there is a randomized algorithm that computes the length of the longest substring of S that has edit distance less than k to a substring of T in k^{1.5} n^2 / 2^{Ω(√log n)} time.
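To make the problem statement concrete, here is the obvious O(n^2 · d) brute-force check for Boolean Orthogonal Detection; the paper's contribution is precisely to beat this quadratic baseline via the polynomial method:

```python
def has_orthogonal_pair(A, B):
    """Naive Boolean Orthogonal Detection: given two collections A, B of
    0/1 vectors (tuples) of equal dimension d, decide whether some x in A
    and y in B satisfy <x, y> = 0, i.e. share no coordinate where both
    are 1. Runs in O(n^2 * d) time -- the baseline the paper improves."""
    return any(
        all(xi & yi == 0 for xi, yi in zip(x, y))
        for x in A
        for y in B
    )
```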
Subcubic Equivalences Between Graph Centrality Problems, APSP and Diameter
Abstract

Cited by 2 (1 self)
Measuring the importance of a node in a network is a major goal in the analysis of social networks, biological systems, transportation networks, etc. Different centrality measures have been proposed to capture the notion of node importance. For example, the center of a graph is a node that minimizes the maximum distance to any other node (the latter distance is the radius of the graph). The median of a graph is a node that minimizes the sum of the distances to all other nodes. Informally, the betweenness centrality of a node w measures the fraction of shortest paths that have w as an intermediate node. Finally, the reach centrality of a node w is the smallest distance r such that any s-t shortest path passing through w has either s or t in the ball of radius r around w. The fastest known algorithms to compute the center and the median of a graph, and to compute the betweenness or reach centrality even of a single node, take roughly cubic time in the number n of nodes in the input graph. It is open whether these problems admit truly subcubic algorithms, i.e. algorithms with running time Õ(n^{3−δ}) for some constant δ > 0. We relate the complexity of the mentioned centrality problems to two classical problems for which no truly subcubic algorithm is known, namely All Pairs Shortest Paths (APSP) and Diameter. We show that Radius, Median and Betweenness Centrality are equivalent under subcubic reductions to APSP, i.e. that a truly subcubic algorithm for any of these problems implies a truly subcubic algorithm for all of them. We then show that Reach Centrality is equivalent to Diameter under subcubic reductions. The same holds for the problem of approximating Betweenness Centrality within any constant factor. Thus the latter two centrality problems could potentially be solved in truly subcubic time, even if APSP requires essentially cubic time.
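The definitions of radius and median above reduce directly to APSP: given the full distance matrix, both are read off in O(n^2) extra time, which is why the cubic bottleneck sits in computing the distances themselves. A minimal sketch, assuming a precomputed all-pairs distance matrix `dist`:

```python
def radius_and_median(dist):
    """Given an all-pairs distance matrix dist[u][v] (e.g. from
    Floyd-Warshall), return (radius, median_cost):
      - radius: the center's eccentricity, min over u of max_v dist[u][v]
      - median_cost: min over u of sum_v dist[u][v]
    Both follow in O(n^2) once APSP is solved, illustrating why the
    abstract relates these centrality problems to APSP."""
    radius = min(max(row) for row in dist)        # center minimizes max distance
    median_cost = min(sum(row) for row in dist)   # median minimizes total distance
    return radius, median_cost
```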
Improved Quantum Algorithm for Triangle Finding via Combinatorial Arguments
Abstract

Cited by 2 (0 self)
Background. Triangle finding is a graph-theoretic problem whose complexity is deeply connected to the complexity of several other computational tasks in theoretical computer science, such as solving path or matrix problems [3, 8, 9, 13, 18, 17, 19]. In its standard version (sometimes called unweighted triangle finding), it asks to find, given an undirected and unweighted graph G = (V, E), three vertices v1, v2, v3 ∈ V such that {v1, v2}, {v1, v3} and {v2, v3} are edges of the graph. Problems like triangle finding can be studied in the query complexity setting. In the usual model used to describe the query complexity of such problems, the set of edges E of the graph is unknown but can be accessed through an oracle: given two vertices u and v in V, one query to the oracle outputs one if {u, v} ∈ E and zero if {u, v} ∉ E. In the quantum query complexity setting, one further assumes that the oracle can be queried in superposition. Besides its intrinsic interest, the triangle finding problem has been one of the main problems that stimulated the development of new techniques in quantum query complexity, and the history of improvements of upper bounds on the query complexity of triangle finding parallels the development of general techniques in the quantum complexity setting, as we explain below. Grover search immediately gives, when applied to triangle finding as a search over the space of triples of vertices of the graph, a quantum algorithm with query complexity O(n^{3/2}). Using amplitude …
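The classical query model described above can be sketched as follows: a brute-force search over all vertex triples makes O(n^3) oracle queries, which is the search space Grover's algorithm compresses to O(n^{3/2}) quantum queries. The `edge_oracle` interface here is an illustrative stand-in for the paper's edge oracle:

```python
from itertools import combinations

def find_triangle(vertices, edge_oracle):
    """Classical brute force in the query model: examine every triple of
    vertices and ask the oracle about its three potential edges.
    `edge_oracle(u, v)` returns True iff {u, v} is an edge. Worst case:
    O(n^3) oracle queries."""
    for v1, v2, v3 in combinations(vertices, 3):
        if edge_oracle(v1, v2) and edge_oracle(v1, v3) and edge_oracle(v2, v3):
            return (v1, v2, v3)
    return None
```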
On Hardness of Jumbled Indexing
Abstract

Cited by 2 (0 self)
Jumbled indexing is the problem of indexing a text T for queries that ask whether there is a substring of T matching a pattern represented as a Parikh vector, i.e., the vector of frequency counts for each character. Jumbled indexing has garnered a lot of interest in the last …
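To illustrate the query an index would answer, here is the unindexed baseline: a sliding-window scan comparing character frequency counts (Parikh vectors), linear per query. Jumbled indexing schemes preprocess T so such queries can be answered much faster:

```python
from collections import Counter

def parikh_match(text, pattern_vec):
    """Does `text` contain a substring whose Parikh vector (character
    frequency counts) equals `pattern_vec` (a Counter)?  Naive sliding
    window over all positions -- the per-query cost an index avoids."""
    m = sum(pattern_vec.values())  # pattern length = total character count
    if m == 0 or m > len(text):
        return m == 0
    window = Counter(text[:m])
    if window == pattern_vec:
        return True
    for i in range(m, len(text)):
        window[text[i]] += 1           # slide window right by one
        window[text[i - m]] -= 1
        if window[text[i - m]] == 0:
            del window[text[i - m]]    # keep Counter comparison exact
        if window == pattern_vec:
            return True
    return False
```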
Matching triangles and basing hardness on an extremely popular conjecture
 STOC'15
, 2015
Approximating the diameter of planar graphs in near linear time
 In Proc. ICALP
, 2013
Abstract

Cited by 1 (0 self)
We present a (1 + ε)-approximation algorithm running in O(f(ε) · n log^4 n) time for finding the diameter of an undirected planar graph with n vertices and with non-negative edge lengths.
On lower bounds for the Maximum Consecutive Subsums Problem and the (min,+)convolution
Abstract
Given a sequence of n numbers, the MAXIMUM CONSECUTIVE SUBSUMS PROBLEM (MCSP) asks for the maximum sum of ℓ consecutive elements, for each ℓ = 1, …, n. No algorithm is known for this problem which is significantly better than the naive quadratic solution, nor is a super-linear lower bound known. The best known bound for the MCSP is based on the computation of the (min,+)-convolution, another problem for which neither an O(n^{2−ε}) upper bound nor a super-linear lower bound is known. We show that the two problems are in fact computationally equivalent by providing linear reductions between them. Then, we concentrate on the problem of finding super-linear lower bounds and provide empirical evidence for our conjecture that the solution of both problems requires Ω(n log n) time in the decision tree model.
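The naive quadratic solution the abstract mentions can be sketched with prefix sums: each window sum costs O(1), but there are Θ(n^2) windows over all lengths, and beating this is what the paper shows to be equivalent to fast (min,+)-convolution:

```python
def max_consecutive_subsums(a):
    """Naive O(n^2) MCSP: for each window length L = 1..n, return the
    maximum sum of L consecutive elements. Prefix sums make each window
    sum O(1); the quadratic cost is in enumerating all windows."""
    n = len(a)
    prefix = [0] * (n + 1)
    for i, x in enumerate(a):
        prefix[i + 1] = prefix[i] + x   # prefix[i] = a[0] + ... + a[i-1]
    return [
        max(prefix[i + L] - prefix[i] for i in range(n - L + 1))
        for L in range(1, n + 1)
    ]
```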