Increasing internet capacity using local search
 Computational Optimization and Applications
, 2004
Abstract

Cited by 69 (8 self)
but often the main goal is to avoid congestion, i.e. overloading of links, and the standard heuristic recommended by Cisco (a major router vendor) is to make the weight of a link inversely proportional to its capacity. We study the problem of optimizing OSPF weights for a given set of projected demands so as to avoid congestion. We show this problem is NP-hard and propose a local search heuristic to solve it. We also provide worst-case results about the performance of OSPF routing vs. an optimal multicommodity flow routing. Our numerical experiments compare the results obtained with our local search heuristic to the optimal multicommodity flow routing, as well as simple and commonly used heuristics for setting the weights. Experiments were done with a proposed next-generation AT&T WorldNet backbone as well as synthetic internetworks.
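As a toy illustration of the inverse-capacity heuristic the abstract attributes to Cisco, link weights can be set proportional to 1/capacity so that shortest paths prefer high-capacity links. The function name, scale constant, and capacities below are made up for the example:

```python
def inverse_capacity_weights(capacities, scale=1e8):
    """Map each link to an OSPF weight proportional to 1/capacity,
    rounded and clamped to at least 1 (OSPF weights are positive ints)."""
    return {link: max(1, round(scale / cap)) for link, cap in capacities.items()}

# Capacities in bits per second: one 100 Mbit/s, one 10 Mbit/s, one 1 Gbit/s link.
caps = {("a", "b"): 1e8, ("b", "c"): 1e7, ("a", "c"): 1e9}
weights = inverse_capacity_weights(caps)
# The 10 Mbit/s link gets a 10x larger weight than the 100 Mbit/s link,
# so shortest-path routing steers traffic away from it.
```

The paper's point is that this simple rule is only a starting heuristic; the proposed local search optimizes the weights directly against the projected demands.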
Optimal Bounds for the Predecessor Problem
 In Proceedings of the Thirty-First Annual ACM Symposium on Theory of Computing
Abstract

Cited by 63 (0 self)
We obtain matching upper and lower bounds for the amount of time to find the predecessor of a given element among the elements of a fixed, efficiently stored set. Our algorithms are for the unit-cost word-level RAM with multiplication and extend to give optimal dynamic algorithms. The lower bounds are proved in a much stronger communication game model, but they apply to the cell probe and RAM models and to both static and dynamic predecessor problems.
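For contrast, the textbook comparison-based predecessor query that the paper's word-level RAM algorithms improve on is a binary search over the sorted set, O(log n) per query; an illustrative sketch:

```python
import bisect

def predecessor(sorted_keys, x):
    """Return the largest key <= x in a sorted list, or None if
    every key exceeds x. O(log n) comparisons per query."""
    i = bisect.bisect_right(sorted_keys, x)
    return sorted_keys[i - 1] if i > 0 else None

keys = [2, 5, 11, 17]
```

The paper's bounds show exactly how much faster than this one can go (and no further) when keys are machine words and word-level operations are unit cost.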
Deterministic Dictionaries
, 2001
Abstract

Cited by 34 (4 self)
It is shown that a static dictionary that offers constant-time access to n elements with w-bit keys and occupies O(n) words of memory can be constructed deterministically in O(n log n) time on a unit-cost RAM with word length w and a standard instruction set including multiplication. Whereas a randomized construction working in linear expected time was known, the running time of the best previous deterministic algorithm was Ω(n²). Using a standard dynamization technique, the first deterministic dynamic dictionary with constant lookup time and sublinear update time is derived. The new algorithms are weakly nonuniform; i.e., they require access to a fixed number of precomputed constants dependent on w. The main technical tools employed are unit-cost error-correcting codes, word parallelism, and derandomization using conditional expectations.
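The randomized linear-expected-time construction the abstract contrasts with is classical FKS-style two-level hashing: a top-level hash into n buckets, then a collision-free quadratic-size table per bucket. A minimal sketch with illustrative parameters (prime, retry thresholds):

```python
import random

def build_fks(keys, p=(1 << 31) - 1):
    """Two-level static dictionary: expected O(n) construction,
    O(1) worst-case lookup. Keys must be nonnegative ints < p."""
    n = len(keys)
    while True:  # retry the top-level hash until total space is O(n)
        a = random.randrange(1, p)
        buckets = [[] for _ in range(n)]
        for k in keys:
            buckets[(a * k % p) % n].append(k)
        if sum(len(b) ** 2 for b in buckets) <= 4 * n:
            break
    tables = []
    for b in buckets:
        m = len(b) ** 2  # quadratic-size table makes collisions unlikely
        if m == 0:
            tables.append((0, []))
            continue
        while True:  # retry until this bucket's hash is collision-free
            a2 = random.randrange(1, p)
            slots = [None] * m
            for k in b:
                j = (a2 * k % p) % m
                if slots[j] is not None:
                    break
                slots[j] = k
            else:
                tables.append((a2, slots))
                break
    return a, tables, p, n

def lookup(dct, x):
    a, tables, p, n = dct
    a2, slots = tables[(a * x % p) % n]
    return bool(slots) and slots[(a2 * x % p) % len(slots)] == x
```

The paper's contribution is removing the randomness from constructions of this flavor while keeping near-linear construction time.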
Error Correcting Codes, Perfect Hashing Circuits, and Deterministic Dynamic Dictionaries
, 1997
Abstract

Cited by 17 (2 self)
We consider dictionaries of size n over the finite universe U = and introduce a new technique for their implementation: error correcting codes. The use of such codes makes it possible to replace the use of strong forms of hashing, such as universal hashing, with much weaker forms, such as clustering. We use
Optimal static range reporting in one dimension
 In Proc. 33rd ACM Symposium on Theory of Computing (STOC'01)
, 2001
Finding, minimizing, and counting weighted subgraphs
 In Proceedings of the Forty-First Annual ACM Symposium on Theory of Computing
, 2009
Abstract

Cited by 14 (2 self)
For a pattern graph H on k nodes, we consider the problems of finding and counting the number of (not necessarily induced) copies of H in a given large graph G on n nodes, as well as finding minimum weight copies in both node-weighted and edge-weighted graphs. Our results include:
• The number of copies of an H with an independent set of size s can be computed exactly in O*(2^s n^(k−s+3)) time. A minimum weight copy of such an H (with arbitrary real weights on nodes and edges) can be found in O(4^(s+o(s)) n^(k−s+3)) time. (The O* notation omits poly(k) factors.) These algorithms rely on fast algorithms for computing the permanent of a k × n matrix, over rings and semirings.
• The number of copies of any H having minimum (or maximum) node weight (with arbitrary real weights on nodes) can be found in O(n^(ωk/3) + n^(2k/3+o(1))) time, where ω < 2.4 is the matrix multiplication exponent and k is divisible by 3. Similar results hold for other values of k. Also, the number of copies having exactly a prescribed weight can be found within this time. These algorithms extend the technique of Czumaj and Lingas (SODA 2007) and give a new (algorithmic) application of multiparty communication complexity.
• Finding an edge-weighted triangle of weight exactly 0 in general graphs requires Ω(n^(2.5−ε)) time for all ε > 0, unless the 3SUM problem on N numbers can be solved in O(N^(2−ε)) time. This suggests that the edge-weighted problem is much harder than its node-weighted version.
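The simplest instance of the matrix-multiplication technique mentioned in the abstract is k = 3: the number of triangles in an undirected graph equals trace(A³)/6, since each triangle is counted once per rotation and direction. A sketch using naive cubic multiplication (a real implementation would use fast matrix multiplication to reach the n^ω bound):

```python
def count_triangles(adj):
    """Count triangles in an undirected graph given as a 0/1
    adjacency matrix, via trace(A^3) / 6."""
    n = len(adj)
    # Naive A^2; fast matrix multiplication would give O(n^omega).
    a2 = [[sum(adj[i][t] * adj[t][j] for t in range(n)) for j in range(n)]
          for i in range(n)]
    # trace(A^3): each triangle contributes 6 (3 rotations x 2 directions).
    tr = sum(a2[i][j] * adj[j][i] for i in range(n) for j in range(n))
    return tr // 6

# Complete graph K4 has C(4,3) = 4 triangles.
k4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
```

The paper's results extend this style of argument to weighted copies and larger patterns H.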
Bounds on the OBDD-Size of Integer Multiplication via Universal Hashing
, 2005
Abstract

Cited by 13 (1 self)
Bryant [5] has shown that any OBDD for the function MUL_{n−1,n}, i.e. the middle bit of n-bit multiplication, requires at least 2^(n/8) nodes. In this paper a stronger lower bound of essentially 2^(n/2)/61 is proven by a new technique, using a universal family of hash functions. As a consequence, one cannot hope anymore to verify e.g. 128-bit multiplication circuits using OBDD techniques, because the representation of the middle bit of such a multiplier requires more than 3·10^17 OBDD nodes. Further, a first nontrivial upper bound of (7/3)·2^(4n/3) for the OBDD size of MUL_{n−1,n} is provided.
Subquadratic algorithms for 3SUM
 In Proc. 9th Worksh. Algorithms & Data Structures, LNCS 3608
, 2005
Abstract

Cited by 13 (2 self)
We obtain subquadratic algorithms for 3SUM on integers and rationals in several models. On a standard word RAM with w-bit words, we obtain a running time of O(n² / max{w/lg² w, lg² n/(lg lg n)²}). In the circuit RAM with one nonstandard AC⁰ operation, we obtain O(n²/(w²/lg² w)). In external memory, we achieve O(n²/(MB)), even under the standard assumption of data indivisibility. Cache-obliviously, we obtain a running time of O(n²/(MB/lg² M)). In all cases, our speedup is almost quadratic in the parallelism the model can afford, which may be the best possible. Our algorithms are Las Vegas randomized; time bounds hold in expectation, and in most cases, with high probability.
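The classical quadratic baseline these results improve on is sorting followed by a two-pointer scan for each element; a minimal sketch:

```python
def has_3sum(nums):
    """Return True iff some three elements (by index) sum to zero.
    O(n log n) sort plus O(n^2) two-pointer scans."""
    s = sorted(nums)
    n = len(s)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            t = s[i] + s[lo] + s[hi]
            if t == 0:
                return True
            if t < 0:
                lo += 1   # total too small: advance the small end
            else:
                hi -= 1   # total too large: retreat the large end
    return False
```

The paper's word-RAM and external-memory algorithms beat this n² barrier by packing and comparing many elements per machine word or memory block.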
Uniform Hashing in Constant Time and Linear Space
, 2003
Abstract

Cited by 12 (1 self)
Many algorithms and data structures employing hashing have been analyzed under the uniform hashing assumption, i.e., the assumption that hash functions behave like truly random functions. Starting with the discovery of universal hash functions, many researchers have studied to what extent this theoretical ideal can be realized by hash functions that do not take up too much space and can be evaluated quickly. In this paper we present an almost ideal solution to this problem: A hash function that, on any set of n inputs, behaves like a truly random function with high probability, can be evaluated in constant time on a RAM, and can be stored in O(n) words, which is optimal. For many hashing schemes this is the first hash function that makes their uniform hashing analysis come true, with high probability, without incurring overhead in time or space.
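A standard universal family of the kind this line of work starts from is multiply-shift hashing: fast to evaluate and small to store, but far from behaving like a truly random function, which is the gap the paper's construction closes. A sketch (the word size and output range are illustrative):

```python
import random

W = 64  # machine word size in bits; illustrative

def make_multiply_shift(m_bits):
    """Return a hash function from W-bit ints into [0, 2^m_bits):
    multiply by a random odd W-bit constant, keep the top m_bits."""
    a = random.getrandbits(W) | 1  # random odd multiplier
    def h(x):
        return ((a * x) & ((1 << W) - 1)) >> (W - m_bits)
    return h

h = make_multiply_shift(10)  # hashes into [0, 1024)
```

Families like this guarantee only pairwise near-uniformity; the paper's result is a function, evaluable in constant time and storable in O(n) words, that behaves like a fully random function on any n inputs with high probability.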