"Popular conjectures imply strong lower bounds for dynamic problems," in FOCS (2014)

by A Abboud, V Vassilevska Williams
Results 1–10 of 11

Consequences of Faster Alignment of Sequences

by Amir Abboud, Virginia Vassilevska Williams, Oren Weimann
Cited by 7 (3 self)
Abstract. The Local Alignment problem is a classical problem with applications in biology. Given two input strings and a scoring function on pairs of letters, one is asked to find the substrings of the two input strings that are most similar under the scoring function. The best algorithms for Local Alignment run in time that is roughly quadratic in the string length. It is a big open problem whether substantially subquadratic algorithms exist. In this paper we show that for all ε > 0, an O(n^(2−ε)) time algorithm for Local Alignment on strings of length n would imply breakthroughs on three longstanding open problems: it would imply that for some δ > 0, 3SUM on n numbers is in O(n^(2−δ)) time, CNF-SAT on n variables is in O((2 − δ)^n) time, and Max Weight 4-Clique is in O(n^(4−δ)) time. Our result for CNF-SAT also applies to the easier problem of finding the longest common substring of binary strings with don't cares. We also give strong conditional lower bounds for the more general Multiple Local Alignment problem on k strings, under both k-wise and SP scoring, and for other string similarity problems such as Global Alignment with gap penalties and normalized Longest Common Subsequence.
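The quadratic-time baseline the abstract refers to can be sketched as a Smith-Waterman-style dynamic program. The `score` callback and the linear gap penalty below are illustrative assumptions, not details taken from the paper:

```python
def local_alignment_score(s, t, score, gap=-1):
    """O(nm) dynamic program for Local Alignment: the best score of any
    pair of substrings of s and t under the pairwise scoring function
    `score`, with a linear gap penalty (an illustrative choice).

    H[i][j] holds the best score of a local alignment ending exactly at
    s[i-1] and t[j-1]; clamping at 0 lets an alignment start anywhere.
    """
    n, m = len(s), len(t)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            H[i][j] = max(0,
                          H[i - 1][j - 1] + score(s[i - 1], t[j - 1]),
                          H[i - 1][j] + gap,
                          H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

For example, with a score of +2 per matching letter and −1 per mismatch, the shared substring "abc" in "xxabcyy" vs "zzabczz" yields a score of 6. The paper's point is that, conditionally, no O(n^(2−ε)) algorithm can replace this table fill.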

Citation Context

...lems, based on 3SUM, e.g. [19,37,22,13,7]. More recently, the 3-SUM Conjecture has been used in surprising ways to show polynomial lower bounds for purely combinatorial problems in dynamic algorithms [40,2] and graph algorithms [40,32,48]. The only previous work relating 3-SUM to a Stringology problem, to our knowledge, is the result of Chen et al. [12] showing that under the 3-SUM Conjecture, when the ...

Succinct Sampling from Discrete Distributions

by Karl Bringmann, Kasper Green Larsen
Cited by 3 (1 self)
We revisit the classic problem of sampling from a discrete distribution: Given n non-negative w-bit integers x_1, ..., x_n, the task is to build a data structure that allows sampling i with probability proportional to x_i. The classic solution is Walker's alias method that takes, when implemented on a Word RAM, O(n) preprocessing time, O(1) expected query time for one sample, and n(w + 2 lg n + o(1)) bits of space. Using the terminology of succinct data structures, this solution has redundancy 2n lg n + o(n) bits, i.e., it uses 2n lg n + o(n) bits in addition to the information-theoretic minimum required for storing the input. In this paper, we study whether this space usage can be improved. In the systematic case, in which the input is read-only, we present a novel data structure using r + O(w) redundant ...
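For concreteness, a minimal sketch of Walker's alias method as the abstract describes it — O(n) preprocessing and O(1) expected-time queries — ignoring the paper's space and redundancy concerns. Function names are my own:

```python
import random

def build_alias(weights):
    """Walker's alias method, preprocessing step: O(n) time.

    Each of the n table slots is split between at most two outcomes, so
    a sample needs one uniform slot pick plus one biased coin flip.
    """
    n = len(weights)
    total = float(sum(weights))
    scaled = [w * n / total for w in weights]      # mean-normalized weights
    prob = [0.0] * n
    alias = [0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s = small.pop()
        l = large.pop()
        prob[s] = scaled[s]                        # slot s keeps its own mass...
        alias[s] = l                               # ...and borrows the rest from l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for leftover in small + large:                 # numerical leftovers fill whole slots
        prob[leftover] = 1.0
    return prob, alias

def sample(prob, alias, rng=random):
    """O(1) expected-time query: draw i with probability weights[i]/total."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]
```

With weights [0, 1], for instance, every call to `sample` returns index 1, since slot 0 carries zero probability mass and aliases to 1.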

Threesomes, degenerates, and love triangles

by Allan Grønlund, Seth Pettie - In Proc. 55th Annu. IEEE Sympos. Found. Comput. Sci. (FOCS), 2014
Cited by 3 (1 self)
The 3SUM problem is to decide, given a set of n real numbers, whether any three sum to zero. It is widely conjectured that a trivial O(n^2)-time algorithm is optimal and over the years the consequences of this conjecture have been revealed. This 3SUM conjecture implies Ω(n^2) lower bounds on numerous problems in computational geometry and a variant of the conjecture implies strong lower bounds on triangle enumeration, dynamic graph algorithms, and string matching data structures. In this paper we refute the 3SUM conjecture. We prove that the decision tree complexity of 3SUM is O(n^(3/2) √(log n)) and give two subquadratic 3SUM algorithms, a deterministic one running in O(n^2 / (log n / log log n)^(2/3)) time and a randomized one running in O(n^2 (log log n)^2 / log n) time with high probability. Our results lead directly to improved bounds for k-variate linear degeneracy testing for all odd k ≥ 3. The problem is to decide, given a linear function f(x_1, ..., x_k) = α_0 + Σ_{1≤i≤k} α_i x_i and a set A ⊂ ℝ, whether 0 ∈ f(A^k). We show the decision tree complexity of this problem is O(n^(k/2) √(log n)). Finally, we give a subcubic algorithm for a generalization of the (min,+)-product over real-valued matrices and apply it to the problem of finding zero-weight triangles in weighted graphs. We give a depth-O(n^(5/2) √(log n)) decision tree for this problem, as well as an algorithm running in time O(n^3 (log log n)^2 / log n).
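The trivial quadratic algorithm whose conjectured optimality this paper refutes is, in its common sort-and-scan form, the following (a baseline sketch, not the paper's subquadratic algorithm):

```python
def three_sum(nums):
    """Decide whether any three elements of nums (at distinct indices)
    sum to zero. Sorting once, then running a two-pointer scan for each
    anchor element, gives the classic O(n^2) running time.
    """
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            elif s < 0:
                lo += 1          # sum too small: advance the low pointer
            else:
                hi -= 1          # sum too large: retreat the high pointer
    return False
```

For instance, `three_sum([8, -5, -3])` is True since 8 − 5 − 3 = 0, while `three_sum([1, 2, 4])` is False.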

Citation Context

...lexity theory inside P have been based on the conjectured hardness of certain archetypal problems, such as 3SUM, (min,+)-matrix product, and CNF-SAT. See, for example, the conditional lower bounds in [25, 32, 33, 27, 2, 3, 34, 16, 37]. In this paper we study the complexity of 3SUM and related problems such as linear degeneracy testing (LDT) and finding zero-weight triangles. Let us define the problems formally. 3SUM: Given a set A...

Into the square: on the complexity of quadratic-time solvable problems

by Michele Borassi, Pierluigi Crescenzi, Michel Habib - CoRR
Cited by 2 (0 self)
Abstract not found

Citation Context

...ome generalization of matrix multiplication). Starting from these works, many other hardness results based on SETH have been published, and many of them deal with dynamic problems (see, for instance, [1]). As an example, it is worth mentioning diameter computation, that is, given a graph, finding the maximum distance between two vertices. Despite numerous papers on the topic, no truly subquadrati...

Subcubic Equivalences Between Graph Centrality Problems, APSP and Diameter

by Amir Abboud, Fabrizio Grandoni, Virginia Vassilevska Williams
Cited by 2 (1 self)
Measuring the importance of a node in a network is a major goal in the analysis of social networks, biological systems, transportation networks, etc. Different centrality measures have been proposed to capture the notion of node importance. For example, the center of a graph is a node that minimizes the maximum distance to any other node (the latter distance is the radius of the graph). The median of a graph is a node that minimizes the sum of the distances to all other nodes. Informally, the betweenness centrality of a node w measures the fraction of shortest paths that have w as an intermediate node. Finally, the reach centrality of a node w is the smallest distance r such that any s-t shortest path passing through w has either s or t in the ball of radius r around w. The fastest known algorithms to compute the center and the median of a graph, and to compute the betweenness or reach centrality even of a single node, take roughly cubic time in the number n of nodes in the input graph. It is open whether these problems admit truly subcubic algorithms, i.e. algorithms with running time Õ(n^(3−δ)) for some constant δ > 0. We relate the complexity of the mentioned centrality problems to two classical problems for which no truly subcubic algorithm is known, namely All Pairs Shortest Paths (APSP) and Diameter. We show that Radius, Median and Betweenness Centrality are equivalent under subcubic reductions to APSP, i.e. that a truly subcubic algorithm for any of these problems implies a truly subcubic algorithm for all of them. We then show that Reach Centrality is equivalent to Diameter under subcubic reductions. The same holds for the problem of approximating Betweenness Centrality within any constant factor. Thus the latter two centrality problems could potentially be solved in truly subcubic time, even if APSP requires essentially cubic time.
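The "roughly cubic" state of the art for Radius and Median that the abstract mentions amounts to computing APSP and then aggregating; a minimal sketch via Floyd-Warshall (the adjacency-matrix input format is an assumption for illustration):

```python
def radius_and_median(adj):
    """Cubic-time baseline for Radius and Median via Floyd-Warshall APSP.

    adj: n x n matrix of non-negative edge weights, with float('inf')
    where there is no edge. Returns (radius, index of a median node).
    """
    n = len(adj)
    d = [row[:] for row in adj]
    for i in range(n):
        d[i][i] = 0
    for k in range(n):                     # Floyd-Warshall: O(n^3)
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    eccentricity = [max(row) for row in d]            # worst-case distance from each node
    radius = min(eccentricity)                        # the center attains this value
    median = min(range(n), key=lambda v: sum(d[v]))   # minimizes total distance
    return radius, median
```

On the unit-weight path 0–1–2 this returns (1, 1): node 1 is both center and median. The paper's result is that beating this n^3 barrier for Radius or Median would yield a truly subcubic APSP algorithm.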

Matching triangles and basing hardness on an extremely popular conjecture

by Amir Abboud, Virginia Vassilevska Williams, Huacheng Yu - STOC'15, 2015
Cited by 1 (0 self)
Abstract not found

Why walking the dog takes time: Fréchet distance has no strongly subquadratic algorithms unless SETH fails

by Karl Bringmann
Cited by 1 (1 self)
The Fréchet distance is a well-studied and very popular measure of similarity of two curves. Many variants and extensions have been studied since Alt and Godau introduced this measure to computational geometry in 1991. Their original algorithm to compute the Fréchet distance of two polygonal curves with n vertices has a runtime of O(n^2 log n). More than 20 years later, the state of the art algorithms for most variants still take time more than O(n^2 / log n), but no matching lower bounds are known, not even under reasonable complexity theoretic assumptions. To obtain a conditional lower bound, in this paper we assume the Strong Exponential Time Hypothesis or, more precisely, that there is no O*((2 − δ)^N) algorithm for CNF-SAT for any δ > 0. Under this assumption we show that the Fréchet distance cannot be computed in strongly subquadratic time, i.e., in time O(n^(2−δ)) for any δ > 0. This means that finding faster algorithms for the Fréchet distance is as hard as finding faster CNF-SAT algorithms, and the existence of a strongly subquadratic algorithm can be considered unlikely. Our result holds for both the continuous and the discrete Fréchet distance. We extend the main result in various directions. Based on the same assumption we (1) show non-existence of a strongly subquadratic 1.001-approximation, (2) present tight lower bounds in case the numbers of vertices of the two curves are imbalanced, and (3) examine realistic input assumptions (c-packed curves).
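The quadratic baseline this lower bound targets is, for the discrete Fréchet distance, the classic coupling dynamic program (a sketch for 1-dimensional point sequences; the continuous case is more involved):

```python
def discrete_frechet(P, Q):
    """O(nm) dynamic program for the discrete Frechet distance of two
    1-D point sequences. d[i][j] is the smallest 'leash length' over all
    couplings of the prefixes P[:i+1] and Q[:j+1]: both walkers may only
    stand still or step forward, and the leash must cover the current pair.
    """
    n, m = len(P), len(Q)
    d = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            cost = abs(P[i] - Q[j])
            if i == 0 and j == 0:
                d[i][j] = cost
            elif i == 0:
                d[i][j] = max(d[i][j - 1], cost)
            elif j == 0:
                d[i][j] = max(d[i - 1][j], cost)
            else:
                d[i][j] = max(min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1]),
                              cost)
    return d[n - 1][m - 1]
```

Identical sequences give distance 0. Under SETH, per the paper, no O(n^(2−δ))-time algorithm can replace this table fill, even in one dimension.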

Citation Context

...rved that one can use SETH and SETH′ to prove lower bounds for polynomial time problems such as k-Dominating Set and others [32], the diameter of sparse graphs [33], and dynamic connectivity problems [1]. However, it seems to be applicable only to a few problems; e.g., it seems to be a wide open problem to prove that 3SUM has no strongly subquadratic algorithms unless SETH fails, similarly for matchin...

Algorithms for Algebraic Path Properties in Concurrent Systems of Constant Treewidth Components

by Krishnendu Chatterjee, Amir Kafshdar Goharshady , Rasmus Ibsen-jensen, Andreas Pavlogiannis , 2015
We study algorithmic questions for concurrent systems where the transitions are labeled from a complete, closed semiring, and path properties are algebraic with semiring operations. The algebraic path properties can model dataflow analysis problems, the shortest path problem, and many other natural problems that arise in program analysis. We consider that each component of the concurrent system is a graph with constant treewidth, a property satisfied by the control-flow graphs of most programs. We allow for multiple possible queries, which arise naturally in demand-driven dataflow analysis. The study of multiple queries allows us to consider the tradeoff between the resource usage of the one-time preprocessing and for each individual query. The traditional approach constructs the product graph of all components and applies the best-known graph algorithm on the product. In this approach, even the answer ...

APPROXIMABILITY OF THE DISCRETE FRÉCHET DISTANCE

by Karl Bringmann, Wolfgang Mulzer - JOURNAL OF COMPUTATIONAL GEOMETRY, 2016
The Fréchet distance is a popular and widespread distance measure for point sequences and for curves. About two years ago, Agarwal et al. [SIAM J. Comput. 2014] presented a new (mildly) subquadratic algorithm for the discrete version of the problem. This spawned a flurry of activity that has led to several new algorithms and lower bounds. In this paper, we study the approximability of the discrete Fréchet distance. Building on a recent result by Bringmann [FOCS 2014], we present a new conditional lower bound showing that strongly subquadratic algorithms for the discrete Fréchet distance are unlikely to exist, even in the one-dimensional case and even if the solution may be approximated up to a factor of 1.399. This raises the question of how well we can approximate the Fréchet distance (of two given d-dimensional point sequences of length n) in strongly subquadratic time. Previously, no general results were known. We present the first such algorithm by analysing the approximation ratio of a simple, linear-time greedy algorithm to be 2^(Θ(n)). Moreover, we design an α-approximation algorithm that runs in time O(n log n + n^2/α), for any α ∈ [1, n]. Hence, an n^ε-approximation of the Fréchet distance can be computed in strongly subquadratic time, for any ε > 0.

Decremental Single-Source Shortest Paths on Undirected Graphs in Near-Linear Total Update Time

by Monika Henzinger , Sebastian Krinninger , Danupon Nanongkai
Abstract. The decremental single-source shortest paths (SSSP) problem concerns maintaining the distances from a given source node s to every node in an n-node m-edge graph G undergoing edge deletions. While its static counterpart can be easily solved in near-linear time, this decremental problem is much more challenging even in the undirected unweighted case. In this case, the classic O(mn) total update time of Even and Shiloach (JACM 1981) ... In contrast to the previous results, which rely on maintaining a sparse emulator, our algorithm relies on maintaining a so-called sparse (d, ε)-hop set introduced by Cohen (JACM 2000) in the PRAM literature. A (d, ε)-hop set of a graph G = (V, E) is a set E′ of weighted edges such that the distance between any pair of nodes in G can be (1 + ε)-approximated by their d-hop distance (given by a path containing at most d edges) on G′ = (V, E ∪ E′). Our algorithm can maintain an (n^o(1), ε)-hop set of near-linear size in near-linear time under edge deletions. It is the first of its kind to the best of our knowledge. To maintain the distances on this hop set, we develop a monotone bounded-hop Even-Shiloach tree. It results from extending and combining the monotone Even-Shiloach tree of Henzinger, Krinninger, and Nanongkai (FOCS 2013) with the bounded-hop SSSP technique of Bernstein (STOC 2013). These two new tools might be of independent interest.

Citation Context

...)-approximate O(mn log W)-time algorithm on directed weighted graphs. Very recently, we extended our insights from [7] to obtain a (1 + ε)-approximation algorithm with roughly O(mn^0.986) time for decremental approximate SSSP in directed graphs [13], giving the first o(mn) time algorithm for the directed case, as well as other important problems such as single-source reachability and strongly-connected components [14], [15], [3]. This algorithm can also be extended to achieve a total update time of O(mn^0.986 log^2 W) for the weighted case. Also very recently, Abboud and Vassilevska Williams [16] showed that "deamortizing" our algorithms in [13] might not be possible: a combinatorial algorithm with worst case update time and query time of O(n^(2−δ)) (for some δ > 0) per deletion implies a faster combinatorial algorithm for Boolean matrix multiplication and, for the more general problem of maintaining the number of reachable nodes from a source under deletions (which our algorithms in [13] can do), a worst case update and query time of O(m^(1−δ)) (for some δ > 0) will falsify the strong exponential time hypothesis. Our Results. Given the significance of the decremental SSSP problem, it is imp...
