Results 1–10 of 14
Data Structures for Traveling Salesmen, 1995
"... The choice of data structure for tour representation plays a critical role in the efficiency of local improvement heuristics for the Traveling Salesman Problem. The tour data structure must permit queries about the relative order of cities in the current tour and must allow sections of the tour to b ..."
Abstract

Cited by 29 (2 self)
The choice of data structure for tour representation plays a critical role in the efficiency of local improvement heuristics for the Traveling Salesman Problem. The tour data structure must permit queries about the relative order of cities in the current tour and must allow sections of the tour to be reversed. The traditional array-based representation of a tour permits the relative order of cities to be determined in small constant time, but requires worst-case Ω(N) time (where N is the number of cities) to implement a reversal, which renders it impractical for large instances. This paper considers alternative tour data structures, examining them from both a theoretical and experimental point of view. The first alternative we consider is a data structure based on splay trees, where all queries and updates take amortized time O(log N). We show that this is close to the best possible, because in the cell probe model of computation any data structure must take worst-case amortized time Ω(...
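The trade-off the abstract describes can be made concrete with a minimal sketch of the traditional array representation (names here are illustrative, not from the paper): a position index makes relative-order queries O(1), but a segment reversal costs time proportional to the segment length.

```python
# Array-based tour representation: O(1) order queries via a position index,
# but Theta(segment length) reversal -- the bottleneck the abstract describes.

class ArrayTour:
    def __init__(self, cities):
        self.tour = list(cities)
        self.pos = {c: i for i, c in enumerate(self.tour)}

    def between(self, a, b, c):
        """Is b visited between a and c when traversing the tour forward?"""
        i, j, k = self.pos[a], self.pos[b], self.pos[c]
        if i <= k:
            return i < j < k
        return j > i or j < k              # wrap-around case

    def reverse(self, i, j):
        """Reverse tour[i..j] inclusive -- O(j - i) work, not O(1)."""
        self.tour[i:j + 1] = reversed(self.tour[i:j + 1])
        for idx in range(i, j + 1):
            self.pos[self.tour[idx]] = idx

t = ArrayTour([0, 1, 2, 3, 4])
t.reverse(1, 3)                            # tour becomes [0, 3, 2, 1, 4]
```

A balanced-tree representation such as the splay-tree variant discussed above replaces the linear-time `reverse` with an O(log N) amortized split-and-rejoin.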
Algorithms for dense graphs and networks on the random access computer
ALGORITHMICA, 1996
"... We improve upon the running time of several graph and network algorithms when applied to dense graphs. In particular, we show how to compute on a machine with word size L = f2 (log n) a maximal matching in an nvertex bipartite graph in time O (n 2 + n2"5/~.) = O (n2"5/log n), how to compute the t ..."
Abstract

Cited by 16 (4 self)
We improve upon the running time of several graph and network algorithms when applied to dense graphs. In particular, we show how to compute on a machine with word size λ = Ω(log n) a maximal matching in an n-vertex bipartite graph in time O(n² + n^2.5/λ) = O(n^2.5/log n), how to compute the transitive closure of a digraph with n vertices and m edges in time O(n² + nm/λ), how to solve the uncapacitated transportation problem with integer costs in the range [0..C] and integer demands in the range [−U..U] in time O((n³ (log log n / log n)^(1/2) + n² log U) log nC), and how to solve the assignment problem with integer costs in the range [0..C] in time O(n^2.5 log nC / (log n / log log n)^(1/4)). Assuming a suitably compressed input, we also show how to do depth-first and breadth-first search and how to compute strongly connected components and biconnected components in time O(nλ + n²/λ), and how to solve the single-source shortest-path problem with integer costs in the range [0..C] in time O(n²(log C)/log n). For the transitive closure algorithm we also report on our experience with an implementation.
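The 1/λ speedup factors come from packing many matrix entries into one machine word. A minimal sketch of this general bit-packing idea (an illustration, not the paper's exact algorithm) is word-parallel transitive closure: each row of the reachability matrix is one bitset, so OR-ing two rows processes a word's worth of entries per operation.

```python
# Word-parallel (bitset) transitive closure, Warshall style. Python integers
# serve as arbitrary-width bitsets; on a w-bit RAM each row OR costs O(n/w).

def transitive_closure(n, edges):
    reach = [1 << i for i in range(n)]     # every vertex reaches itself
    for u, v in edges:
        reach[u] |= 1 << v
    for k in range(n):
        row_k = reach[k]
        for i in range(n):
            if reach[i] >> k & 1:          # i reaches k ...
                reach[i] |= row_k          # ... so i reaches all k reaches
    return reach

r = transitive_closure(4, [(0, 1), (1, 2), (2, 3)])
```

Here `r[i]` has bit `j` set exactly when vertex `j` is reachable from `i`.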
Computation of the Least Significant Set Bit
In Proceedings of the 2nd Electrotechnical and Computer Science Conference, Portoroz, 1993
"... We investigate the problem of computing of the least significant set bit in a word. We describe a constant time algorithm for the problem assuming a cell probe model of computation. 1 Introduction and Definitions The problem of computing an index of the least significant set bit in a word arises in ..."
Abstract

Cited by 15 (5 self)
We investigate the problem of computing the least significant set bit in a word. We describe a constant-time algorithm for the problem assuming a cell probe model of computation. 1 Introduction and Definitions. The problem of computing the index of the least significant set bit in a word arises in various aspects of data organization, such as bit representations of ordered sets. In this paper we describe a novel algorithm to compute the index. Our algorithm runs on a cell probe model of computation (cf. [2, 4, 5]), which is a generalization of the random access machine model. We assume that the memory registers are of bounded size. The bits in the word (register) are numbered from 0, which is the least significant bit, to m − 1, which is the most significant bit; the word is m bits wide. Using the terminology of Fredman and Saks [2], we are dealing with the CPROB(m) model. Furthermore, we assume that we can perform in one unit of time arithmetic operations of multiplication, add...
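One well-known constant-time construction in this spirit (the classic de Bruijn multiply-and-shift trick, not necessarily the authors' scheme) finds the least significant set bit of a 32-bit word with one AND, one multiplication, one shift, and one table lookup:

```python
# Constant-time least-significant-set-bit for 32-bit words via a de Bruijn
# sequence: isolating the lowest bit gives a power of two, multiplying by the
# de Bruijn constant shifts a unique 5-bit pattern into the top bits.

DEBRUIJN = 0x077CB531                      # a 32-bit de Bruijn sequence
TABLE = [0] * 32
for i in range(32):
    TABLE[(DEBRUIJN << i & 0xFFFFFFFF) >> 27] = i

def lsb_index(x):
    """Index of the least significant set bit of a nonzero 32-bit word."""
    isolated = x & -x & 0xFFFFFFFF         # keep only the lowest set bit
    return TABLE[(isolated * DEBRUIJN & 0xFFFFFFFF) >> 27]
```

For example, `lsb_index(0b10100)` returns 2, the position of the lowest set bit.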
Approximate Data Structures with Applications (Extended Abstract), 1994
"... In this paper we introduce the notion of approximate data structures, in which a small amount of error is tolerated in the output. Approximate data structures trade error of approximation for faster operation, leading to theoretical and practical speedups for a wide variety of algorithms. We give a ..."
Abstract

Cited by 14 (7 self)
In this paper we introduce the notion of approximate data structures, in which a small amount of error is tolerated in the output. Approximate data structures trade error of approximation for faster operation, leading to theoretical and practical speedups for a wide variety of algorithms. We give approximate variants of the van Emde Boas data structure, which support the same dynamic operations as the standard van Emde Boas data structure [28, 20], except that answers to queries are approximate. The variants support all operations in constant time provided the error of approximation is 1/polylog(n), and in O(log log n) time provided the error is 1/polynomial(n), for n elements in the data structure. We consider ...
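The core approximation idea can be sketched very simply (this is a hedged illustration under my own naming, not the paper's construction): drop the low-order bits of each key so that nearby keys collapse into one bucket, then answer queries over the much smaller bucket universe.

```python
# Approximate membership by key truncation: keys within the same bucket of
# width 2**shift become indistinguishable, trading additive error for a
# universe small enough to query quickly.

class ApproxSet:
    def __init__(self, shift):
        self.shift = shift                 # low bits discarded per key
        self.buckets = set()

    def insert(self, key):
        self.buckets.add(key >> self.shift)

    def approx_member(self, key):
        """True iff some stored key shares `key`'s bucket of width 2**shift."""
        return key >> self.shift in self.buckets

s = ApproxSet(shift=4)
s.insert(100)
```

With `shift=4`, `approx_member(103)` is true because 100 and 103 share the bucket [96, 112).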
Efficient Regular Data Structures and Algorithms for Dilation, Location and Proximity Problems
"... In this paper we investigate datastructures obtained by a recursive partitioning of the input domain into regions of equal size. One of the most well known examples of such a structure is the quadtree, used here as a basis for more complex data structures; we also provide multidimensional version ..."
Abstract

Cited by 14 (0 self)
In this paper we investigate data structures obtained by a recursive partitioning of the input domain into regions of equal size. One of the most well-known examples of such a structure is the quadtree, used here as a basis for more complex data structures; we also provide multidimensional versions of the stratified tree by van Emde Boas [20]. We show that under the assumption that the input points have limited precision (i.e., are drawn from the integer grid of size u) these data structures yield efficient solutions to many important problems. In particular, they allow us to achieve O(log log u) time per operation for dynamic approximate nearest neighbor (under insertions and deletions) and exact online closest pair (under insertions only) in any constant dimension. They allow O(log log u) point location in a given planar shape or in its expansion (dilation by a ball of a given radius). Finally, we provide a linear-time (optimal) algorithm for computing the expansion of a shape...
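A minimal quadtree over a u × u integer grid illustrates the recursive equal-size partitioning the abstract builds on (this sketch is illustrative and far simpler than the paper's structures): each level splits the current cell in half in both coordinates, one bit of each coordinate per level.

```python
# Minimal quadtree over a 2**log_u x 2**log_u integer grid. Each tree level
# consumes one bit of x and one bit of y, so a root-to-leaf path has log_u
# steps and identifies a unit cell.

class QuadTree:
    def __init__(self, log_u):
        self.log_u = log_u
        self.root = {}

    def _path(self, x, y):
        # Quadrant index (0..3) at each level, most significant bit first.
        for b in range(self.log_u - 1, -1, -1):
            yield (x >> b & 1) << 1 | (y >> b & 1)

    def insert(self, x, y):
        node = self.root
        for q in self._path(x, y):
            node = node.setdefault(q, {})

    def contains(self, x, y):
        node = self.root
        for q in self._path(x, y):
            if q not in node:
                return False
            node = node[q]
        return True

qt = QuadTree(log_u=4)                     # a 16 x 16 grid
qt.insert(5, 9)
```

The bounded-precision assumption is what makes this work: with u fixed, the tree has depth exactly log u, and van Emde Boas-style tricks can then cut query time to O(log log u).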
Performance evaluation of approximate priority queues
Presented at the DIMACS Fifth Implementation Challenge: Priority Queues, Dictionaries, and Point Sets, organized by …, 1996
"... We report on implementation and a modest experimental evaluation of a recently introduced priorityqueue data structure. The new data structure is designed to take advantage of fast operations on machine words and, as appropriate, reduced keyuniverse size and/or tolerance of approximate answers to ..."
Abstract

Cited by 7 (6 self)
We report on an implementation and a modest experimental evaluation of a recently introduced priority-queue data structure. The new data structure is designed to take advantage of fast operations on machine words and, as appropriate, reduced key-universe size and/or tolerance of approximate answers to queries. In addition to standard priority-queue operations, the data structure also supports successor and predecessor queries. Our results suggest that the data structure is practical and can be faster than traditional priority queues when holding a large number of keys, and that tolerance for approximate answers can lead to significant increases in speed.
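The flavor of the trade-off being evaluated can be sketched with a bucketed approximate priority queue (my own illustration, not the authors' implementation): keys from a universe of size 2^b are grouped into buckets of width 2^shift, so min and successor answers carry at most that additive error while the bucket directory stays small.

```python
# Bucketed approximate priority queue: answers are correct up to an additive
# error of 2**shift, in exchange for a directory of only 2**(b - shift) cells.

class ApproxPQ:
    def __init__(self, b, shift):
        self.shift = shift
        self.buckets = [0] * (1 << (b - shift))   # per-bucket key counts

    def insert(self, key):
        self.buckets[key >> self.shift] += 1

    def delete_min(self):
        """Remove and return a value within 2**shift of the true minimum."""
        for i, count in enumerate(self.buckets):
            if count:
                self.buckets[i] -= 1
                return i << self.shift            # bucket's lower endpoint
        raise IndexError("empty queue")

    def successor(self, key):
        """Lower endpoint of the first nonempty bucket at or above key's."""
        for i in range(key >> self.shift, len(self.buckets)):
            if self.buckets[i]:
                return i << self.shift
        return None

pq = ApproxPQ(b=16, shift=4)
for k in (300, 37, 1000):
    pq.insert(k)
assert pq.delete_min() == 32                      # true min is 37, error < 16
```

A word-RAM implementation would replace the linear scans with word-level operations on a bitmap of nonempty buckets, which is where the speed reported above comes from.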
On superlinear lower bounds in complexity theory
In Proc. 10th Annual IEEE Conference on Structure in Complexity Theory, 1995
"... This paper first surveys the neartotal lack of superlinear lower bounds in complexity theory, for “natural” computational problems with respect to many models of computation. We note that the dividing line between models where such bounds are known and those where none are known comes when the mode ..."
Abstract

Cited by 1 (1 self)
This paper first surveys the near-total lack of superlinear lower bounds in complexity theory for “natural” computational problems with respect to many models of computation. We note that the dividing line between models where such bounds are known and those where none are known comes when the model allows nonlocal communication with memory at unit cost. We study a model that imposes a “fair cost” for nonlocal communication, and obtain modest superlinear lower bounds for some problems via a Kolmogorov-complexity argument. Then we look to the larger picture of what it will take to prove really striking lower bounds, and pull from our and others’ work a concept of information vicinity that may offer new tools and modes of analysis to a young field that rather lacks them.
Blasting Past Fusion Trees
"... We present an O(n log log n) worstcase time algorithm for sorting arbitrary integers, a significant improvement over the bound achieved by the fusion tree of Fredman and Willard. Model of computation We will consider a unit cost random access machine, RAM, with word length w and a memory composed ..."
Abstract
We present an O(n log log n) worst-case time algorithm for sorting arbitrary integers, a significant improvement over the bound achieved by the fusion tree of Fredman and Willard. Model of computation. We will consider a unit-cost random access machine, RAM, with word length w and a memory composed of 2^w words. The instruction set includes addition, subtraction, comparison, unrestricted shift, and the bitwise boolean operations AND and OR. The algorithm. We study the problem of sorting n integers in the range [1..2^b] on a RAM. We say that T(n, b) ≤ f(n) if there exists an algorithm with worst-case time complexity f(n) that solves this sorting problem. The assertion that T(n, b2) ≤ f(n) ⇒ T(n, b1) ≤ f(n) will be abbreviated as T(n, b1) ≤ T(n, b2). Similarly, T(n, b) ≤ O(g(n)) will be used to denote that there exists some f, f(n) = O(g(n)), such that T(n, b) ≤ f(n). A sequential version of an integer sorting algorithm by Albers and Hagerup [1] achieves T(n, w/log n) ...
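A useful baseline for the T(n, b) notation above is textbook LSD radix sort with digits of roughly log2(n) bits, which gives T(n, b) ≤ O(n·b/log n) for b-bit keys (this is the standard method the range-reduction machinery improves upon, not the paper's O(n log log n) algorithm):

```python
# LSD radix sort over b-bit keys using ~log2(n)-bit digits: ceil(b / log n)
# stable counting passes of O(n) work each.

def radix_sort(keys, b):
    n = len(keys)
    digit_bits = max(1, (n - 1).bit_length())   # ~log2(n) bits per pass
    mask = (1 << digit_bits) - 1
    for shift in range(0, b, digit_bits):
        buckets = [[] for _ in range(mask + 1)]
        for k in keys:                          # stable bucketing pass
            buckets[k >> shift & mask].append(k)
        keys = [k for bucket in buckets for k in bucket]
    return keys
```

For word-sized keys (b = w) this is O(n·w/log n); the range reductions in the notation above are what bring b down far enough for such passes to become cheap.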
Minimizing the Input/Output Bottleneck, 1992
"... this paper, we assume that all graphs are undirected, an assumption that may not hold for certain applications such as hypertext and objectoriented databases. One important assumption of our model is that data may be multiply represented in blocks. This is a stronger assumption than that used, for ..."
Abstract
In this paper, we assume that all graphs are undirected, an assumption that may not hold for certain applications such as hypertext and object-oriented databases. One important assumption of our model is that data may be multiply represented in blocks. This is a stronger assumption than that used, for example, by external ...
Word-based RAM Priority Queues, 2001
"... This report is aimed at the wordbased RAM priority queues. Suppose we have a wbit RAM and want to design priority queues that only handle integer keys represented by w bits. We denote the universe as U = f1; 2; : : : ; ug where u = 2 . A priority queue S that we consider supports the followi ..."
Abstract
This report is concerned with word-based RAM priority queues. Suppose we have a w-bit RAM and want to design priority queues that handle only integer keys represented by w bits. We denote the universe as U = {1, 2, …, u} where u = 2^w. A priority queue S that we consider supports the following operations: Min(S): return the element of S with the smallest key
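For the degenerate case u ≤ w, the whole set S fits in a single machine word, and Min(S) reduces exactly to the least-significant-set-bit computation discussed earlier in these results. A minimal sketch (illustrative only; the report's structures handle the general u = 2^w case):

```python
# Word-level priority queue for a universe no larger than the word length:
# S is one bitmask, so insert, delete, and Min are each O(1) word operations.

class TinyPQ:
    def __init__(self):
        self.mask = 0                      # bit k set  <=>  key k in S

    def insert(self, key):
        self.mask |= 1 << key

    def delete(self, key):
        self.mask &= ~(1 << key)

    def min(self):
        if self.mask == 0:
            return None
        low = self.mask & -self.mask       # isolate the lowest set bit
        return low.bit_length() - 1        # its index = the minimum key

pq = TinyPQ()
pq.insert(12); pq.insert(5); pq.insert(9)
pq.delete(5)
```

After these operations `pq.min()` is 9. Larger universes are handled by recursing on this idea, which is precisely the van Emde Boas construction mentioned in the abstracts above.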