Results 21–30 of 112
Enumerating Solutions to P(a) + Q(b) = R(c) + S(d)
, 1999
Abstract

Cited by 9 (0 self)
Let p, q, r, s be polynomials with integer coefficients. This paper presents a fast method, using very little temporary storage, to find all small integers (a, b, c, d) satisfying p(a)+q(b) = r(c)+s(d). Numerical results include all small solutions to a ; all small solutions to a ; and the smallest positive integer that can be written in 5 ways as a sum of two coprime cubes.
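The low-memory idea can be illustrated for the special case p = q = r = s = x³: a heap streams the sums a³ + b³ in nondecreasing order while holding only one pending entry per active value of a. The sketch below (function names are illustrative, not the paper's) uses this to find the smallest number with two coprime-cube representations:

```python
import heapq
from math import gcd

def sums_of_two_cubes():
    """Yield (a**3 + b**3, a, b) with a <= b, in nondecreasing order of the sum.

    The heap holds one pending entry per active a, so temporary storage stays
    small -- the essence of the low-memory enumeration, specialized to cubes.
    """
    heap = [(2, 1, 1)]                      # 1**3 + 1**3
    while True:
        val, a, b = heapq.heappop(heap)
        if a == b:                          # activate the next value of a
            heapq.heappush(heap, (2 * (a + 1) ** 3, a + 1, a + 1))
        heapq.heappush(heap, (a ** 3 + (b + 1) ** 3, a, b + 1))
        yield val, a, b

def smallest_with_k_coprime_ways(k):
    """Smallest n with at least k representations n = a**3 + b**3, gcd(a, b) = 1."""
    prev_val, reps = None, []
    for val, a, b in sums_of_two_cubes():
        if val != prev_val:
            if sum(1 for x, y in reps if gcd(x, y) == 1) >= k:
                return prev_val, reps
            prev_val, reps = val, []
        reps.append((a, b))

print(smallest_with_k_coprime_ways(2))      # → (1729, [(1, 12), (9, 10)])
```

For k = 2 this recovers the Ramanujan number 1729 = 1³ + 12³ = 9³ + 10³; the 5-way coprime case reported in the paper is far beyond what this toy sketch can reach.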
A framework for speeding up priority-queue operations
, 2004
Abstract

Cited by 8 (8 self)
Abstract. We introduce a framework for reducing the number of element comparisons performed in priority-queue operations. In particular, we give a priority queue which guarantees the worst-case cost of O(1) per minimum finding and insertion, and the worst-case cost of O(log n) with at most log n + O(1) element comparisons per minimum deletion and deletion, improving the bound of 2 log n + O(1) on the number of element comparisons known for binomial queues. Here, n denotes the number of elements stored in the data structure prior to the operation in question, and log n equals max{1, log_2 n}. We also give a priority queue that provides, in addition to the above-mentioned methods, the priority-decrease (or decrease-key) method. This priority queue achieves the worst-case cost of O(1) per minimum finding, insertion, and priority decrease; and the worst-case cost of O(log n) with at most log n + O(log log n) element comparisons per minimum deletion and deletion. CR Classification. E.1 [Data Structures]: Lists, stacks, and queues; E.2 [Data
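To make the comparison counts concrete, here is a small, hypothetical experiment against Python's binary heap (`heapq`), whose minimum deletion can cost roughly 2 log_2 n element comparisons per level pair, which is the kind of bound the paper improves to log n + O(1):

```python
import heapq

class Counted:
    """Int wrapper that counts every element comparison the heap performs."""
    comparisons = 0
    def __init__(self, key):
        self.key = key
    def __lt__(self, other):
        Counted.comparisons += 1
        return self.key < other.key

heap = [Counted(k) for k in range(1023, 0, -1)]
heapq.heapify(heap)                 # O(n) comparisons in total

Counted.comparisons = 0
heapq.heappop(heap)                 # one minimum deletion
# A binary heap of 1023 elements needs up to about 2 * log2(n) comparisons
# here (one per level to pick the smaller child, plus the sift back up).
print(Counted.comparisons)
```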
The complexity of constructing evolutionary trees using experiments
, 2001
Abstract

Cited by 8 (1 self)
We present tight upper and lower bounds for the problem of constructing evolutionary trees in the experiment model. We describe an algorithm which constructs an evolutionary tree of n species in time O(nd log_d n) using at most n⌈d/2⌉(log_{2⌈d/2⌉−1} n + O(1)) experiments for d > 2, and at most n(log n + O(1)) experiments for d = 2, where d is the degree of the tree. This improves the previous best upper bound by a factor Θ(log d). For d = 2, the previously best algorithm with running time O(n log n) had a bound of 4n log n on the number of experiments. By an explicit adversary argument, we show an Ω(nd log_d n) lower bound, matching our upper bounds and improving the previous best lower bound by a factor Θ(log_d n). Central to our algorithm is the construction and maintenance of separator trees of small height, which may be of independent interest.
Graphs for Metric Space Searching
, 2008
Abstract

Cited by 8 (6 self)
The problem of Similarity Searching consists in finding the elements from a set which are similar to a given query under some criterion. If the similarity is expressed by means of a metric, the problem is called Metric Space Searching. In this thesis we present new methodologies to solve this problem using graphs G(V, E) to represent the metric database. In G, the set V corresponds to the objects from the metric space and E to a small subset of edges from V × V, whose weights are computed according to the metric of the space under consideration. In particular, we study k-nearest neighbor graphs (knngs). The knng is a weighted graph connecting each element of V (equivalently, each object of the metric space) to its k nearest neighbors. We develop algorithms both to construct knngs in general metric spaces, and to use
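As a reference point for the definition (not the thesis's construction algorithms), a brute-force knng can be computed with O(n²) distance evaluations:

```python
def knng(points, k, dist):
    """Brute-force k-nearest-neighbor graph for a general metric space.

    Returns, for each object, the indices of its k nearest neighbors.
    This O(n^2) baseline only fixes the definition; the point of the
    thesis is to build the graph with far fewer distance evaluations.
    """
    n = len(points)
    graph = []
    for i in range(n):
        others = sorted((j for j in range(n) if j != i),
                        key=lambda j: dist(points[i], points[j]))
        graph.append(others[:k])
    return graph

# toy metric space: integers under absolute difference
pts = [0, 3, 4, 10]
print(knng(pts, 2, lambda x, y: abs(x - y)))  # → [[1, 2], [2, 0], [1, 0], [2, 1]]
```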
Line-segment intersection made in-place
, 2007
Abstract

Cited by 8 (2 self)
We present a space-efficient algorithm for reporting all k intersections induced by a set of n line segments in the plane. Our algorithm is an in-place variant of Balaban’s algorithm and, in the worst case, runs in O(n log^2 n + k) time using O(1) extra words of memory in addition to the space used for the input to the algorithm.
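The reporting problem itself admits a short quadratic-time baseline that, like the paper's algorithm, needs only O(1) extra words beyond input and output (an orientation-based test; collinear overlaps are not handled in this sketch):

```python
def orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p)."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def segments_intersect(s, t):
    """Closed-segment intersection test; collinear overlaps not handled."""
    a, b = s
    c, d = t
    return (orient(a, b, c) != orient(a, b, d)
            and orient(c, d, a) != orient(c, d, b))

def report_intersections(segs):
    # O(n^2) time; extra storage is just the loop indices (the reported
    # pairs are output, which does not count against working memory).
    out = []
    for i in range(len(segs)):
        for j in range(i + 1, len(segs)):
            if segments_intersect(segs[i], segs[j]):
                out.append((i, j))
    return out

segs = [((0, 0), (2, 2)), ((0, 2), (2, 0)), ((3, 3), (4, 3))]
print(report_intersections(segs))  # → [(0, 1)]
```

Balaban's algorithm reaches O(n log² n + k) with a sweep over staircase structures; the in-place contribution of the paper is doing that without the auxiliary arrays.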
Optimal incremental sorting
 In Proc. 8th Workshop on Algorithm Engineering and Experiments (ALENEX)
, 2006
Abstract

Cited by 7 (5 self)
Let A be a set of size m. Obtaining the first k ≤ m elements of A in ascending order can be done in optimal O(m + k log k) time. We present an algorithm (online on k) which incrementally gives the next smallest element of the set, so that the first k elements are obtained in optimal time for any k. We also give a practical algorithm with the same complexity on average, which improves in practice on the existing online algorithm. As a direct application, we use our technique to implement Kruskal’s Minimum Spanning Tree algorithm, where our solution is competitive with the best current implementations. We finally show that our technique can be applied to several other problems, such as obtaining an interval of the sorted sequence and implementing heaps.
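A simple stand-in for the idea (not the paper's quickselect-based algorithm): heapify A in O(m) and pop lazily, giving the first k elements online in O(m + k log m):

```python
import heapq

class IncrementalSorter:
    """Incremental sorting, online in k: building the heap costs O(m) and
    each next_smallest() costs O(log m), so the first k elements cost
    O(m + k log m) -- close to, but not matching, the optimal
    O(m + k log k) bound discussed in the abstract."""
    def __init__(self, items):
        self.heap = list(items)
        heapq.heapify(self.heap)         # O(m)
    def next_smallest(self):
        return heapq.heappop(self.heap)  # O(log m)

inc = IncrementalSorter([5, 1, 4, 2, 3])
print([inc.next_smallest() for _ in range(3)])  # → [1, 2, 3]
```

The same interface drops straight into Kruskal's algorithm: pop edges in weight order and stop as soon as the spanning tree is complete, never sorting the rest.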
Optimal cache-oblivious implicit dictionaries
 In Proceedings of the 30th International Colloquium on Automata, Languages and Programming
, 2003
Abstract

Cited by 7 (2 self)
Abstract. We consider the issues of implicitness and cache-obliviousness in the classical dictionary problem for n distinct keys over an unbounded and ordered universe. One finding of this paper closes the long-standing open problem on the existence of an optimal implicit dictionary over an unbounded universe. Another finding is motivated by the antithetic features of implicit and cache-oblivious models in data structures. We show how to blend their best qualities, achieving O(log n) time and O(log_B n) block transfers for searching and for amortized updating, while using just n memory cells like sorted arrays and heaps. As a result, we avoid wasted space and provide fast data access at any level of the memory hierarchy.
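The "implicit" constraint can be illustrated with the classical baseline the paper competes against: a plain sorted array uses exactly n cells and supports O(log n) search, but pays O(n) per update (a sketch of the baseline, not the paper's structure):

```python
import bisect

class ImplicitDictionary:
    """Baseline implicit dictionary: the n keys themselves are the whole
    data structure (one sorted array, no pointers). Search costs O(log n)
    comparisons; insert costs O(n) cell moves -- the update cost the
    paper's cache-oblivious construction avoids."""
    def __init__(self):
        self.cells = []
    def search(self, key):
        i = bisect.bisect_left(self.cells, key)
        return i < len(self.cells) and self.cells[i] == key
    def insert(self, key):
        if not self.search(key):
            bisect.insort(self.cells, key)

d = ImplicitDictionary()
for k in (7, 3, 9):
    d.insert(k)
print(d.search(3), d.search(5))  # → True False
```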
Lower and Upper Bounds on Obtaining History Independence
, 2003
Abstract

Cited by 7 (0 self)
History independent data structures, presented by Micciancio, are data structures that possess a strong security property: even if an intruder manages to get a copy of the data structure, the memory layout of the structure yields no additional information on the data structure beyond its content. In particular, the history of operations applied on the structure is not visible in its memory layout. Naor and Teague proposed a stronger notion of history independence in which the intruder may break into the system several times without being noticed and still obtain no additional information from reading the memory layout of the data structure.
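The Micciancio-style notion is easy to illustrate: if the memory layout is a canonical function of the content, two different operation histories ending in the same content leave identical memory. A minimal sketch:

```python
import bisect

class HistoryIndependentSet:
    """Canonical-layout set: the memory layout (a sorted list) is a
    function of the content only, so the layout reveals nothing about
    the order in which elements were inserted."""
    def __init__(self):
        self.layout = []
    def insert(self, key):
        i = bisect.bisect_left(self.layout, key)
        if i == len(self.layout) or self.layout[i] != key:
            self.layout.insert(i, key)

a, b = HistoryIndependentSet(), HistoryIndependentSet()
for k in (1, 9, 4):
    a.insert(k)
for k in (4, 1, 9):
    b.insert(k)
print(a.layout == b.layout)  # → True: same content, same layout
```

The stronger Naor–Teague notion additionally constrains what intermediate memory states reveal across repeated break-ins, which a sorted list alone does not address.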
Amortization Results for Chromatic Search Trees, with an Application to Priority Queues
, 1997
Abstract

Cited by 7 (0 self)
In this paper, we prove that only an amortized constant amount of rebalancing is necessary after an update in a chromatic search tree. We also prove that the amount of rebalancing done at any particular level decreases exponentially, going from the leaves toward the root. These results imply that, in principle, a linear number of processes can access the tree simultaneously. We have included one interesting application of chromatic trees: based on these trees, a priority queue allowing a greater degree of parallelism than previous proposals can be implemented. © 1997 Academic Press
Algorithm Selection for Sorting and Probabilistic Inference: A Machine Learning-Based Approach
 KANSAS STATE UNIVERSITY
, 2003
Abstract

Cited by 7 (0 self)
The algorithm selection problem aims at selecting the best algorithm for a given computational problem instance according to some characteristics of the instance. In this dissertation, we first introduce some results from a theoretical investigation of the algorithm selection problem. We show, by Rice's theorem, the nonexistence of an automatic algorithm selection program based only on the description of the input instance and the competing algorithms. We also describe an abstract theoretical framework of instance hardness and algorithm performance, based on Kolmogorov complexity, to show that algorithm selection for search is also incomputable. Driven by the theoretical results, we propose a machine learning-based inductive approach, using experimental algorithmic methods and machine learning techniques, to solve the algorithm selection problem. Experimentally, we have
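The inductive approach can be caricatured with a hand-written selection rule over a cheap instance feature, standing in for the learned model of the dissertation (the feature and threshold here are purely illustrative):

```python
def sortedness(xs):
    """Instance feature: fraction of adjacent pairs already in order."""
    if len(xs) < 2:
        return 1.0
    return sum(xs[i] <= xs[i + 1] for i in range(len(xs) - 1)) / (len(xs) - 1)

def insertion_sort(xs):
    """Adaptive candidate: fast on nearly sorted input, O(n^2) otherwise."""
    out = list(xs)
    for i in range(1, len(out)):
        j, v = i, out[i]
        while j > 0 and out[j - 1] > v:
            out[j] = out[j - 1]
            j -= 1
        out[j] = v
    return out

def select_and_sort(xs):
    """Pick a sorting algorithm from the instance feature: a rule-based
    stand-in for the machine-learned selector."""
    if sortedness(xs) > 0.9:                # nearly sorted: adaptive wins
        return "insertion", insertion_sort(xs)
    return "timsort", sorted(xs)            # general-purpose fallback

print(select_and_sort(list(range(100)) + [0])[0])   # → insertion
print(select_and_sort(list(range(100, 0, -1)))[0])  # → timsort
```

A learned selector replaces the threshold with a model trained on (feature, best-algorithm) pairs gathered experimentally, which is the empirical half of the dissertation.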