Results 1–10 of 14
Exact and Approximate Distances in Graphs: a survey
In ESA, 2001
"... We survey recent and not so recent results related to the computation of exact and approximate distances, and corresponding shortest, or almost shortest, paths in graphs. We consider many different settings and models and try to identify some remaining open problems. ..."
Abstract

Cited by 70 (0 self)
 Add to MetaCart
(Show Context)
We survey recent and not so recent results related to the computation of exact and approximate distances, and corresponding shortest, or almost shortest, paths in graphs. We consider many different settings and models and try to identify some remaining open problems.
A shortest path algorithm for real-weighted undirected graphs
In 13th ACM-SIAM Symp. on Discrete Algs, 1985
"... Abstract. We present a new scheme for computing shortest paths on realweighted undirected graphs in the fundamental comparisonaddition model. In an efficient preprocessing phase our algorithm creates a linearsize structure that facilitates singlesource shortest path computations in O(m log α) ti ..."
Abstract

Cited by 17 (4 self)
 Add to MetaCart
(Show Context)
We present a new scheme for computing shortest paths on real-weighted undirected graphs in the fundamental comparison-addition model. In an efficient preprocessing phase our algorithm creates a linear-size structure that facilitates single-source shortest path computations in O(m log α) time, where α = α(m, n) is the very slowly growing inverse-Ackermann function, m the number of edges, and n the number of vertices. As special cases our algorithm implies new bounds on both the all-pairs and single-source shortest paths problems. We solve the all-pairs problem in O(mn log α(m, n)) time and, if the ratio between the maximum and minimum edge lengths is bounded by n^{(log n)^{O(1)}}, we can solve the single-source problem in O(m + n log log n) time. Both these results are theoretical improvements over Dijkstra's algorithm, which was the previous best for real-weighted undirected graphs. Our algorithm takes the hierarchy-based approach invented by Thorup. Key words: single-source shortest paths, all-pairs shortest paths, undirected graphs, Dijkstra's algorithm
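The Dijkstra baseline that this abstract improves upon can be sketched in a few lines. This is the textbook binary-heap variant with O((m + n) log n) comparisons and additions, not the hierarchy-based algorithm itself; the graph representation and names below are illustrative choices, not from the paper.

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths with a binary heap (the classic baseline).

    adj: {vertex: [(neighbor, weight), ...]} with non-negative real weights.
    Returns {vertex: shortest distance from source}. Uses only comparisons
    and additions on edge weights, as in the comparison-addition model.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, a shorter path to u was already found
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

The hierarchy-based approach avoids the heap's per-edge log-factor by precomputing a structure over the edge weights; the sketch above is only the previous best that the O(m log α) bound is measured against.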
Faster Deterministic Dictionaries
In 11th Annual ACM Symposium on Discrete Algorithms (SODA), 1999
"... We consider static dictionaries over the universe U = on a unitcost RAM with word size w. Construction of a static dictionary with linear space consumption and constant lookup time can be done in linear expected time by a randomized algorithm. In contrast, the best previous deterministic a ..."
Abstract

Cited by 10 (5 self)
 Add to MetaCart
(Show Context)
We consider static dictionaries over the universe U = {0, 1}^w on a unit-cost RAM with word size w. Construction of a static dictionary with linear space consumption and constant lookup time can be done in linear expected time by a randomized algorithm. In contrast, the best previous deterministic algorithm for constructing such a dictionary with n elements runs in time O(n^{1+ε}) for ε > 0. This paper narrows the gap between deterministic and randomized algorithms exponentially, from a factor of n^ε to an O(log n) factor. The algorithm is weakly nonuniform, i.e. requires certain precomputed constants dependent on w. A byproduct of the result is a lookup time vs. insertion time tradeoff for dynamic dictionaries, which is optimal for a certain class of deterministic hashing schemes.
Scaling algorithms for approximate and exact maximum weight matching
2011
"... The maximum cardinality and maximum weight matching problems can be solved in time Õ(m √ n), a bound that has resisted improvement despite decades of research. (Here m and n are the number of edges and vertices.) In this article we demonstrate that this “m √ n barrier ” is extremely fragile, in the ..."
Abstract

Cited by 9 (0 self)
 Add to MetaCart
(Show Context)
The maximum cardinality and maximum weight matching problems can be solved in time Õ(m√n), a bound that has resisted improvement despite decades of research. (Here m and n are the number of edges and vertices.) In this article we demonstrate that this "m√n barrier" is extremely fragile, in the following sense. For any ε > 0, we give an algorithm that computes a (1 − ε)-approximate maximum weight matching in O(mε^{-1} log ε^{-1}) time, that is, optimal linear time for any fixed ε. Our algorithm is dramatically simpler than the best exact maximum weight matching algorithms on general graphs and should be appealing in all applications that can tolerate a negligible relative error. Our second contribution is a new exact maximum weight matching algorithm for integer-weighted bipartite graphs that runs in time O(m√n log N). This improves on the O(Nm√n)-time and O(m√n log(nN))-time algorithms known since the mid 1980s, for 1 ≪ log N ≪ log n. Here N is the maximum integer edge weight.
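For contrast with the (1 − ε)-approximation above, the simplest approximate matching algorithm is the folklore greedy heuristic, which guarantees only a 1/2-approximation. It is shown here purely to make the problem concrete; it is not the paper's algorithm, and the edge-list format is an illustrative assumption.

```python
def greedy_matching(edges):
    """Folklore 1/2-approximate maximum weight matching.

    edges: [(u, v, weight), ...]. Scans edges in decreasing weight order and
    takes an edge whenever both endpoints are still free. O(m log m) due to
    sorting; every optimal edge not taken shares an endpoint with a taken
    edge of at least its weight, giving the 1/2 guarantee.
    """
    matched = set()
    matching = []
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if u not in matched and v not in matched:
            matched.update((u, v))
            matching.append((u, v, w))
    return matching
```

On the weighted path a-b (3), b-c (4), c-d (3), greedy takes only b-c for weight 4, while the optimum a-b plus c-d has weight 6, exhibiting the 1/2-versus-(1 − ε) gap.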
Lower Bounds for Fundamental Geometric Problems
In 5th Annual European Symposium on Algorithms (ESA'97), 1996
"... We develop lower bounds on the number of primitive operations required to solve several fundamental problems in computational geometry. For example, given a set of points in the plane, are any three colinear? Given a set of points and lines, does any point lie on a line? These and similar question ..."
Abstract

Cited by 7 (0 self)
 Add to MetaCart
(Show Context)
We develop lower bounds on the number of primitive operations required to solve several fundamental problems in computational geometry. For example, given a set of points in the plane, are any three collinear? Given a set of points and lines, does any point lie on a line? These and similar questions arise as subproblems or special cases of a large number of more complicated geometric problems, including point location, range searching, motion planning, collision detection, ray shooting, and hidden surface removal. Previously these problems were studied only in general models of computation, but known techniques for these models are too weak to prove useful results. Our approach is to consider, for each problem, a more specialized model of computation that is still rich enough to describe all known algorithms...
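The 3-collinearity question in the abstract has an obvious brute-force upper bound, sketched below with exact integer cross products; the paper's subject is how far below this O(n^3) count of primitive tests the problem can be pushed in restricted models. The function name and input format are illustrative.

```python
from itertools import combinations

def has_collinear_triple(points):
    """Naive O(n^3) test: do any three points lie on a common line?

    points: list of (x, y) integer pairs. The primitive operation is one
    sign-of-cross-product test per triple; exact integer arithmetic avoids
    floating-point error.
    """
    for (ax, ay), (bx, by), (cx, cy) in combinations(points, 3):
        # Cross product of (b - a) and (c - a) is zero iff a, b, c are collinear.
        if (bx - ax) * (cy - ay) == (by - ay) * (cx - ax):
            return True
    return False
```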
Lower Bounds for Dynamic Algebraic Problems
1998
"... We consider dynamic evaluation of algebraic functions (matrix multiplication, determinant, convolution, Fourier transform, etc.) in the model of Reif and Tate; i.e., if f(x 1 ; : : : ; xn ) = (y 1 ; : : : ; y m ) is an algebraic problem, we consider serving online requests of the form \change i ..."
Abstract

Cited by 7 (1 self)
 Add to MetaCart
We consider dynamic evaluation of algebraic functions (matrix multiplication, determinant, convolution, Fourier transform, etc.) in the model of Reif and Tate; i.e., if f(x_1, ..., x_n) = (y_1, ..., y_m) is an algebraic problem, we consider serving online requests of the form "change input x_i to value v" or "what is the value of output y_i?". We present techniques for showing lower bounds on the worst case time complexity per operation for such problems. The first gives lower bounds in a wide range of rather powerful models (for instance history dependent algebraic computation trees over any infinite subset of a field, the integer RAM, and the generalized real RAM model of Ben-Amram and Galil). Using this technique, we show optimal Ω(n) bounds for dynamic matrix-vector product, dynamic matrix multiplication and dynamic discriminant and an Ω(√n) lower bound for dynamic polynomial multiplication (convolution), providing a good match with Reif and Tate's O(√(n log n)) upper bound. We also show linear lower bounds for dynamic determinant, matrix adjoint and matrix inverse and an Ω(√n) lower bound for the elementary symmetric functions. The second technique is the communication complexity technique of Miltersen, Nisan, Safra, and Wigderson which we apply to the setting of dynamic algebraic problems, obtaining similar lower bounds in the word RAM model. The third technique gives lower bounds in the weaker straight line program model. Using this technique, we show an Ω(√n / log log n) lower bound for dynamic discrete Fourier transform.
Generic Discrimination: Sorting and Partitioning Unshared Data in Linear Time
2008
"... We introduce the notion of discrimination as a generalization of both sorting and partitioning and show that worstcase lineartime discrimination functions (discriminators) can be defined generically, by (co)induction on an expressive language of order denotations. The generic definition yields di ..."
Abstract

Cited by 4 (3 self)
 Add to MetaCart
We introduce the notion of discrimination as a generalization of both sorting and partitioning and show that worst-case linear-time discrimination functions (discriminators) can be defined generically, by (co)induction on an expressive language of order denotations. The generic definition yields discriminators that generalize both distributive sorting and multiset discrimination. The generic discriminator can be coded compactly using list comprehensions, with order denotations specified using Generalized Algebraic Data Types (GADTs). A GADT-free combinator formulation of discriminators is also given. We give some examples of the uses of discriminators, including a new most-significant-digit lexicographic sorting algorithm. Discriminators generalize binary comparison functions: they operate on n arguments at a time, but do not expose more information than the underlying equivalence, respectively ordering relation on the arguments. We argue that primitive types with equality (such as references in ML) and ordered types (such as the machine integer type) should expose their equality, respectively standard ordering relation, as discriminators: having only a binary equality test on a type requires Θ(n^2) time to find all the occurrences of an element in a list of length n, for each element in the list, even if the equality test takes only constant time. A discriminator accomplishes this in linear time. Likewise, having only a (constant-time) comparison function requires Θ(n log n) time to sort a list of n elements. A discriminator can do this in linear time.
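The core behavior of a discriminator (group equal-keyed values in linear time, exposing only the equivalence) can be sketched for small integer keys with the classic bucket-array technique; the original work is in a typed functional setting with GADTs, so this Python version with a `key_bound` parameter is only an illustrative analogue.

```python
def discriminate(pairs, key_bound):
    """Basic multiset discriminator for integer keys in [0, key_bound).

    pairs: [(key, value), ...]. Returns the groups of values whose keys are
    equal, in first-occurrence order, in one linear pass over the input with
    no comparisons or hashing. Only the touched buckets are visited, so the
    cost is O(n), not O(key_bound), beyond allocating the bucket array.
    """
    buckets = [None] * key_bound
    touched = []                 # keys seen, in first-occurrence order
    for k, v in pairs:
        if buckets[k] is None:
            buckets[k] = []
            touched.append(k)
        buckets[k].append(v)
    return [buckets[k] for k in touched]
```

Note the caller only learns which values had equal keys, not the keys' ordering or representation, which is the abstraction property the abstract emphasizes. Finding all duplicates this way is linear, versus Θ(n^2) with only a binary equality test.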
Generic top-down discrimination
2009
"... We introduce the notion of discrimination as a generalization of both sorting and partitioning and show that discriminators (discrimination functions) can be defined generically, by structural recursion on order and equivalence expressions denoting a rich class of total preorders and equivalence rel ..."
Abstract

Cited by 2 (2 self)
 Add to MetaCart
We introduce the notion of discrimination as a generalization of both sorting and partitioning and show that discriminators (discrimination functions) can be defined generically, by structural recursion on order and equivalence expressions denoting a rich class of total preorders and equivalence relations, respectively. Discriminators improve the asymptotic performance of generic comparison-based sorting and partitioning, yet do not expose more information than the underlying ordering relation, respectively equivalence. For a large class of order and equivalence expressions, including all standard orders for first-order recursive types, the discriminators execute in worst-case linear time. The generic discriminators can be coded compactly using list comprehensions, with order expressions specified using Generalized Algebraic Data Types (GADTs). We give some examples of the uses of discriminators, including a new most-significant-digit lexicographic sorting algorithm and type isomorphism with an associative-commutative operator. Full source code of discriminators and their applications is included. We argue discriminators should be basic operations for primitive and abstract types with equality. The basic multiset discriminator for references, originally due to Paige et al., is shown to be both efficient and fully abstract: it finds all duplicates of all references occurring in a list in linear time without leaking information about their representation. In particular, it behaves deterministically in the presence of garbage collection and nondeterministic heap allocation even when references are represented as raw machine addresses. In contrast, having only a binary equality test as in ML requires Θ(n^2) time, and allowing hashing for performance reasons, as in Java, makes execution nondeterministic and complicates garbage collection.
Melding Priority Queues
In Proc. of 9th SWAT, 2004
"... We show that any priority queue data structure that supports insert, delete, and findmin operations in pq(n) time, when n is an upper bound on the number of elements in the priority queue, can be converted into a priority queue data structure that also supports fast meld operations with essentially ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
We show that any priority queue data structure that supports insert, delete, and find-min operations in pq(n) time, when n is an upper bound on the number of elements in the priority queue, can be converted into a priority queue data structure that also supports fast meld operations with essentially no increase in the amortized cost of the other operations. More specifically, the new data structure supports insert, meld and find-min operations in O(1) amortized time, and delete operations in O(pq(n) + α(n, n)) amortized time, where α(m, n) is a functional inverse of the Ackermann function. The construction is very simple, essentially just placing a non-meldable priority queue at each node of a union-find data structure. We also show that when all keys are integers in the range [1, N], we can replace n in the bound stated above by min{n, N}. Applying this result to non-meldable priority queue data structures obtained recently by Thorup, and by Han and Thorup, we obtain meldable RAM priority queues with O(log log n) amortized cost per operation, or O(√(log log n)) expected amortized cost per operation, respectively. As a byproduct, we obtain improved algorithms for the minimum directed spanning tree problem in graphs with integer edge weights: a deterministic O(m log log n) time algorithm and a randomized O(m √(log log n)) time algorithm. These bounds improve, for sparse enough graphs, on the O(m + n log n) running time of an algorithm by Gabow, Galil, Spencer and Tarjan that works for arbitrary edge weights.
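The "non-meldable priority queue at each union-find node" idea can be sketched with a binary heap at each root, melded by smaller-into-larger merging. This simplified version gives only O(log n) amortized melds, not the paper's O(1) bounds, and the class/method names are illustrative; it is meant to show the shape of the construction, not reproduce its analysis.

```python
import heapq

class MeldablePQ:
    """Meldable min-priority queue: a heap lives at each union-find root.

    Operations route through find() to the root's heap; meld links one root
    under another and merges the smaller heap into the larger, a much weaker
    scheme than the paper's, shown only to illustrate the construction.
    """
    def __init__(self, items=()):
        self.heap = list(items)
        heapq.heapify(self.heap)
        self.parent = self  # union-find parent pointer; root points to itself

    def find(self):
        root = self
        while root.parent is not root:
            root = root.parent
        return root

    def insert(self, x):
        heapq.heappush(self.find().heap, x)

    def find_min(self):
        return self.find().heap[0]

    def delete_min(self):
        return heapq.heappop(self.find().heap)

    def meld(self, other):
        a, b = self.find(), other.find()
        if a is b:
            return a
        if len(a.heap) < len(b.heap):
            a, b = b, a          # merge smaller heap into larger
        for x in b.heap:
            heapq.heappush(a.heap, x)
        b.heap = []
        b.parent = a             # union: b's tree now routes to a
        return a
```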
Windows into relational events: Data structures for contiguous subsequences of edges
In SODA, 2013
"... We consider the problem of analyzing social network data sets in which the edges of the network have timestamps, and we wish to analyze the subgraphs formed from edges in contiguous subintervals of these timestamps. We provide data structures for these problems that use nearlinear preprocessing tim ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
We consider the problem of analyzing social network data sets in which the edges of the network have timestamps, and we wish to analyze the subgraphs formed from edges in contiguous subintervals of these timestamps. We provide data structures for these problems that use near-linear preprocessing time, linear space, and sublogarithmic query time to handle queries that ask for the number of connected components, the number of components that contain cycles, the number of vertices whose degree equals or is at most some predetermined value, the number of vertices that can be reached from a starting set of vertices by time-increasing paths, and related queries.
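The simplest of these queries, counting connected components among edges in a timestamp window, has an obvious per-query baseline: rebuild a union-find structure from scratch over the in-window edges. That costs roughly O(m α(n)) per query, which is the cost the paper's data structures reduce to sublogarithmic after near-linear preprocessing. The function name and edge format below are illustrative.

```python
def components_in_window(n, timed_edges, lo, hi):
    """Count connected components on vertices 0..n-1 using only edges whose
    timestamp t satisfies lo <= t <= hi.

    timed_edges: [(t, u, v), ...]. Naive baseline: one union-find pass per
    query, with no preprocessing.
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    comps = n
    for t, u, v in timed_edges:
        if lo <= t <= hi:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                comps -= 1  # each successful union merges two components
    return comps
```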