Results 11–20 of 52
A framework for speeding up priority-queue operations
, 2004
"... Abstract. We introduce a framework for reducing the number of element comparisons performed in priorityqueue operations. In particular, we give a priority queue which guarantees the worstcase cost of O(1) per minimum finding and insertion, and the worstcase cost of O(log n) with at most log n + O ..."
Abstract

Cited by 8 (8 self)
 Add to MetaCart
Abstract. We introduce a framework for reducing the number of element comparisons performed in priority-queue operations. In particular, we give a priority queue which guarantees the worst-case cost of O(1) per minimum finding and insertion, and the worst-case cost of O(log n) with at most log n + O(1) element comparisons per minimum deletion and deletion, improving the bound of 2 log n + O(1) on the number of element comparisons known for binomial queues. Here, n denotes the number of elements stored in the data structure prior to the operation in question, and log n equals max{1, log₂ n}. We also give a priority queue that provides, in addition to the above-mentioned methods, the priority-decrease (or decrease-key) method. This priority queue achieves the worst-case cost of O(1) per minimum finding, insertion, and priority decrease; and the worst-case cost of O(log n) with at most log n + O(log log n) element comparisons per minimum deletion and deletion. CR Classification. E.1 [Data Structures]: Lists, stacks, and queues; E.2 [Data
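To make the baseline concrete, here is a minimal sketch (plain Python, not the paper's data structure): an ordinary binary min-heap instrumented to count element comparisons, illustrating the roughly 2 log n comparisons per minimum deletion that the framework above improves on.

    import random

    class CountingHeap:
        """Plain binary min-heap that counts element comparisons."""
        def __init__(self):
            self.a = []
            self.comparisons = 0

        def _less(self, i, j):
            self.comparisons += 1
            return self.a[i] < self.a[j]

        def insert(self, x):
            self.a.append(x)
            i = len(self.a) - 1
            while i > 0 and self._less(i, (i - 1) // 2):  # sift up
                self.a[i], self.a[(i - 1) // 2] = self.a[(i - 1) // 2], self.a[i]
                i = (i - 1) // 2

        def delete_min(self):
            a = self.a
            a[0], a[-1] = a[-1], a[0]
            smallest = a.pop()
            i = 0
            # Sift down: up to two comparisons per level, so about
            # 2 * log2(n) comparisons in total -- the bound the paper beats.
            while 2 * i + 1 < len(a):
                c = 2 * i + 1
                if c + 1 < len(a) and self._less(c + 1, c):
                    c += 1
                if not self._less(c, i):
                    break
                a[i], a[c] = a[c], a[i]
                i = c
            return smallest

    h = CountingHeap()
    for x in random.sample(range(1024), 1024):
        h.insert(x)
    h.comparisons = 0
    h.delete_min()
    print(h.comparisons)  # typically close to 2 * log2(1024) = 20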
On the adaptiveness of quicksort
 In: Workshop on Algorithm Engineering & Experiments, SIAM
, 2005
"... Quicksort was first introduced in 1961 by Hoare. Many variants have been developed, the best of which are among the fastest generic sorting algorithms available, as testified by the choice of Quicksort as the default sorting algorithm in most programming libraries. Some sorting algorithms are adapti ..."
Abstract

Cited by 8 (1 self)
 Add to MetaCart
Quicksort was first introduced in 1961 by Hoare. Many variants have been developed, the best of which are among the fastest generic sorting algorithms available, as testified by the choice of Quicksort as the default sorting algorithm in most programming libraries. Some sorting algorithms are adaptive, i.e., they have a complexity analysis which is better for inputs that are nearly sorted, according to some specified measure of presortedness. Quicksort is not among these, as it uses Ω(n log n) comparisons even when the input is already sorted. However, in this paper we demonstrate empirically that the actual running time of Quicksort is adaptive with respect to the presortedness measure Inv. Differences close to a factor of two are observed between instances with low and high Inv value. We then show that for the randomized version of Quicksort, the number of element swaps performed is provably adaptive with respect to the measure Inv. More precisely, we prove that randomized Quicksort performs expected O(n(1 + log(1 + Inv/n))) element swaps, where Inv denotes the number of inversions in the input sequence. This result provides a theoretical explanation for the observed behavior, and gives new insights on the behavior of the Quicksort algorithm. We also give some empirical results on the adaptive behavior of Heapsort and Mergesort.
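The swap-count claim is easy to probe experimentally. The following is a hedged sketch (hypothetical code, not the authors' benchmark): a randomized Quicksort with a Lomuto-style partition that counts real element exchanges, run on a nearly sorted input and on a random permutation.

    import random

    def quicksort_swaps(a, lo=0, hi=None):
        """Randomized Quicksort (Lomuto partition); returns the number of swaps."""
        if hi is None:
            hi = len(a) - 1
        if lo >= hi:
            return 0
        swaps = 0
        p = random.randint(lo, hi)              # random pivot
        if p != hi:
            a[p], a[hi] = a[hi], a[p]; swaps += 1
        i = lo
        for j in range(lo, hi):
            if a[j] < a[hi]:
                if i != j:                      # count only real exchanges
                    a[i], a[j] = a[j], a[i]; swaps += 1
                i += 1
        if i != hi:
            a[i], a[hi] = a[hi], a[i]; swaps += 1
        return swaps + quicksort_swaps(a, lo, i - 1) + quicksort_swaps(a, i + 1, hi)

    n = 10000
    nearly_sorted = list(range(n))
    for _ in range(n // 100):                   # plant a few inversions: low Inv
        i, j = random.sample(range(n), 2)
        nearly_sorted[i], nearly_sorted[j] = nearly_sorted[j], nearly_sorted[i]
    shuffled = random.sample(range(n), n)       # about n^2/4 inversions: high Inv

    print(quicksort_swaps(nearly_sorted))       # few swaps: O(n(1 + log(1 + Inv/n)))
    print(quicksort_swaps(shuffled))            # roughly Theta(n log n) swaps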
Algorithm Selection for Sorting and Probabilistic Inference: A Machine Learning-Based Approach
 Kansas State University
, 2003
"... The algorithm selection problem aims at selecting the best algorithm for a given computational problem instance according to some characteristics of the instance. In this dissertation, we first introduce some results from theoretical investigation of the algorithm selection problem. We show, by Rice ..."
Abstract

Cited by 7 (0 self)
 Add to MetaCart
The algorithm selection problem aims at selecting the best algorithm for a given computational problem instance according to some characteristics of the instance. In this dissertation, we first introduce some results from a theoretical investigation of the algorithm selection problem. We show, by Rice's theorem, the nonexistence of an automatic algorithm selection program based only on the description of the input instance and the competing algorithms. We also describe an abstract theoretical framework of instance hardness and algorithm performance based on Kolmogorov complexity to show that algorithm selection for search is also incomputable. Driven by the theoretical results, we propose a machine learning-based inductive approach using experimental algorithmic methods and machine learning techniques to solve the algorithm selection problem. Experimentally, we have
A meticulous analysis of mergesort programs
 in Proceedings of the 3rd Italian Conference on Algorithms and Complexity, Lecture Notes in Computer Science 1203, Springer-Verlag
, 1997
"... Abstract. The efficiency of mergesort programs is analysed under a simple unitcost model. In our analysis the time performance of the sorting programs includes the costs of key comparisons, element moves and address calculations. The goal is to establish the best possible timebound relative to th ..."
Abstract

Cited by 6 (1 self)
 Add to MetaCart
Abstract. The efficiency of mergesort programs is analysed under a simple unit-cost model. In our analysis the time performance of the sorting programs includes the costs of key comparisons, element moves and address calculations. The goal is to establish the best possible time bound relative to the model when sorting n integers. By the well-known information-theoretic argument, n log₂ n − O(n) is a lower bound for the integer-sorting problem in our framework. New implementations for two-way and four-way bottom-up mergesort are given, the worst-case complexities of which are shown to be bounded by 5.5 n log₂ n + O(n) and 3.25 n log₂ n + O(n), respectively. The theoretical findings are backed up with a series of experiments which show the practical relevance of our analysis when implementing library routines for internal-memory computations.
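For reference, a minimal sketch of the classic two-way bottom-up mergesort that such analyses start from (plain Python, not the paper's unit-cost-tuned implementation): merge runs of doubling width until a single sorted run remains.

    def bottom_up_mergesort(a):
        """Two-way bottom-up mergesort; returns a new sorted list."""
        n = len(a)
        src, dst = list(a), [None] * n
        width = 1
        while width < n:
            for lo in range(0, n, 2 * width):    # merge src[lo:mid) with src[mid:hi)
                mid = min(lo + width, n)
                hi = min(lo + 2 * width, n)
                i, j = lo, mid
                for k in range(lo, hi):
                    if j >= hi or (i < mid and src[i] <= src[j]):
                        dst[k] = src[i]; i += 1
                    else:
                        dst[k] = src[j]; j += 1
            src, dst = dst, src                  # merged runs become the new source
            width *= 2
        return src

    print(bottom_up_mergesort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]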
Affordable Fault Tolerance through Adaptation
 Parallel and Distributed Processing, Lecture Notes in Computer Science 1388
, 1998
"... . Faulttolerant programs are typically not only difficult to implement but also incur extra costs in terms of performance or resource consumption. Failures are typically relatively rare but the faulttolerance overhead must be paid regardless if any failures occur during the program execution. This ..."
Abstract

Cited by 6 (3 self)
 Add to MetaCart
Fault-tolerant programs are typically not only difficult to implement but also incur extra costs in terms of performance or resource consumption. Failures are typically relatively rare, but the fault-tolerance overhead must be paid regardless of whether any failures occur during the program execution. This paper presents an approach that reduces the cost of fault tolerance, namely, adaptation to a change in failure model. In particular, a program that assumes no failures (or only benign failures) is combined with a component that is responsible for detecting if failures occur and then switching to a fault-tolerant algorithm. Provided that the detection and adaptation mechanisms are not too expensive, this approach results in a program with smaller fault-tolerance overhead and thus better performance than a traditional fault-tolerant program. Thus, the high cost of fault tolerance is only paid when failures actually occur.
An Efficient Reference-based Approach to Outlier Detection in Large Datasets
"... A bottleneck to detecting distance and density based outliers is that a nearestneighbor search is required for each of the data points, resulting in a quadratic number of pairwise distance evaluations. In this paper, we propose a new method that uses the relative degree of density with respect to a ..."
Abstract

Cited by 6 (0 self)
 Add to MetaCart
A bottleneck in detecting distance- and density-based outliers is that a nearest-neighbor search is required for each of the data points, resulting in a quadratic number of pairwise distance evaluations. In this paper, we propose a new method that uses the relative degree of density with respect to a fixed set of reference points to approximate the degree of density defined in terms of the nearest neighbors of a data point. The running time of our algorithm based on this approximation is O(Rn log n), where n is the size of the dataset and R is the number of reference points. Candidate outliers are ranked based on the outlier score assigned to each data point. Theoretical analysis and empirical studies show that our method is effective, efficient, and highly scalable to very large datasets.
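To illustrate the idea, here is a much-simplified, hypothetical scoring (not the authors' exact algorithm): ranking all points by their distance to each reference point costs O(n log n) per reference, and gaps in that one-dimensional ranking stand in for nearest-neighbor density.

    import math

    def reference_outlier_scores(points, references, k=3):
        """Hypothetical reference-based scores; larger means more outlying."""
        n = len(points)
        scores = [math.inf] * n
        for r in references:                             # R reference points
            d = [math.dist(p, r) for p in points]        # n distance evaluations
            order = sorted(range(n), key=d.__getitem__)  # O(n log n) per reference
            for pos, idx in enumerate(order):
                # Average distance gap to the k nearest ranks approximates
                # the inverse of the local density around points[idx].
                nbrs = order[max(0, pos - k):pos] + order[pos + 1:pos + 1 + k]
                gap = sum(abs(d[idx] - d[j]) for j in nbrs) / len(nbrs)
                scores[idx] = min(scores[idx], gap)      # dense w.r.t. any reference
        return scores

    pts = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0)]
    print(reference_outlier_scores(pts, references=[(0.0, 0.0), (1.0, 1.0)], k=2))
    # the isolated point (5.0, 5.0) receives by far the largest score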
Numerical Representations as Higher-Order Nested Datatypes
, 1998
"... Number systems serve admirably as templates for container types: a container object of size n is modelled after the representation of the number n and operations on container objects are modelled after their numbertheoretic counterparts. Binomial queues are probably the first data structure that wa ..."
Abstract

Cited by 5 (2 self)
 Add to MetaCart
Number systems serve admirably as templates for container types: a container object of size n is modelled after the representation of the number n, and operations on container objects are modelled after their number-theoretic counterparts. Binomial queues are probably the first data structure that was designed with this analogy in mind. In this paper we show how to express these so-called numerical representations as higher-order nested datatypes. A nested datatype makes it possible to capture the structural invariants of a numerical representation, so that the violation of an invariant can be detected at compile time. We develop a programming method which allows algorithms to be adapted to the new representation in a mostly straightforward manner. The framework is employed to implement three different container types: binary random-access lists, binomial queues, and 2-3 finger search trees. The latter data structure, which is treated in some depth, can be seen as the main innovation from a datastruct...
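A minimal sketch of the number-system analogy only, for the first of those containers (hypothetical Python; Python cannot check the structural invariants at compile time, which is precisely what the paper's nested datatypes add in a typed language): a binary random-access list stores n elements as complete trees whose sizes mirror the binary representation of n, so cons behaves like binary increment.

    def cons(x, digits):
        """Prepend x; mirrors binary increment, with tree linking as the carry."""
        carry = (x, None, None)                # a leaf tree holding x
        out = []
        for t in digits:
            if carry is None:
                out.append(t)
            elif t is None:                    # digit 0: deposit the carry
                out.append(carry); carry = None
            else:                              # digit 1: link two equal-size trees
                out.append(None)
                carry = (None, carry, t)       # internal node; values sit at leaves
        if carry is not None:
            out.append(carry)
        return out

    def lookup(i, digits):
        """Index in O(log n): pick the right tree by size, then descend."""
        size = 1                               # the tree at digit d has size 2^d
        for t in digits:
            if t is not None:
                if i < size:
                    return _tree_lookup(t, i, size)
                i -= size
            size *= 2
        raise IndexError(i)

    def _tree_lookup(t, i, size):
        value, left, right = t
        if size == 1:
            return value
        half = size // 2
        return _tree_lookup(left, i, half) if i < half else _tree_lookup(right, i - half, half)

    xs = []
    for v in [3, 2, 1]:
        xs = cons(v, xs)
    print([lookup(i, xs) for i in range(3)])   # [1, 2, 3]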
Optimal adaptive algorithms for finding the nearest and farthest point on a parametric black-box curve
 In Proceedings of the 20th Annual ACM Symposium on Computational Geometry
, 2004
"... We consider a general model for representing and manipulating parametric curves, in which a curve is specified by a black box mapping a parameter value between 0 and 1 to a point in Euclidean dspace. In this model, we consider the nearestpointoncurve and farthestpointoncurve problems: given a ..."
Abstract

Cited by 5 (2 self)
 Add to MetaCart
We consider a general model for representing and manipulating parametric curves, in which a curve is specified by a black box mapping a parameter value between 0 and 1 to a point in Euclidean d-space. In this model, we consider the nearest-point-on-curve and farthest-point-on-curve problems: given a curve C and a point p, find a point on C nearest to p or farthest from p. In the general black-box model, no algorithm can solve these problems. Assuming a known bound on the speed of the curve (a Lipschitz condition), the answer can be estimated up to an additive error of ε using O(1/ε) samples, and this bound is tight in the worst case. However, many instances can be solved with substantially fewer samples, and we give algorithms that adapt to the inherent difficulty of the particular instance, up to a logarithmic factor. More precisely, if OPT(C, p, ε) is the minimum number of samples of C that every correct algorithm must perform to achieve tolerance ε, then our algorithm performs O(OPT(C, p, ε) log(ε⁻¹/OPT(C, p, ε))) samples. Furthermore, any algorithm requires Ω(k log(ε⁻¹/k)) samples for some instance C′ with OPT(C′, p, ε) = k; except that, for the nearest-point-on-curve problem when the distance between C and p is less than ε, OPT is 1 but the upper and lower bounds on the number of samples are both Θ(1/ε). When bounds on relative error are desired, we give algorithms that perform O(OPT · log(2 + (1 + ε⁻¹) · m⁻¹/OPT)) samples (where m is the exact minimum or maximum distance from p to C) and prove that Ω(OPT · log(1/ε)) samples are necessary on some problem instances.
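The non-adaptive baseline is simple to state in code. A minimal sketch (hypothetical, covering only the uniform-sampling baseline, not the paper's adaptive algorithm): if the curve is L-Lipschitz, sampling the parameter at spacing ε/L puts some sample within ε of every curve point, so the best sampled distance is within an additive ε of the true minimum, using O(L/ε) samples.

    import math

    def nearest_point_estimate(curve, p, lipschitz, eps):
        """Uniform sampling: O(L/eps) samples, additive error at most eps."""
        steps = max(1, math.ceil(lipschitz / eps))   # parameter spacing <= eps/L
        best_t, best_d = 0.0, math.dist(curve(0.0), p)
        for k in range(1, steps + 1):
            t = k / steps
            d = math.dist(curve(t), p)
            if d < best_d:
                best_t, best_d = t, d
        return best_t, best_d

    # Example: a quarter circle of radius 1; its speed is pi/2, so L = pi/2.
    curve = lambda t: (math.cos(t * math.pi / 2), math.sin(t * math.pi / 2))
    print(nearest_point_estimate(curve, (2.0, 2.0), lipschitz=math.pi / 2, eps=0.01))
    # true optimum: t = 0.5, distance 2*sqrt(2) - 1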
Finger Search Trees
, 2005
"... One of the most studied problems in computer science is the problem of maintaining a sorted sequence of elements to facilitate efficient searches. The prominent solution to the problem is to organize the sorted sequence as a balanced search tree, enabling insertions, deletions and searches in logari ..."
Abstract

Cited by 5 (0 self)
 Add to MetaCart
One of the most studied problems in computer science is the problem of maintaining a sorted sequence of elements to facilitate efficient searches. The prominent solution to the problem is to organize the sorted sequence as a balanced search tree, enabling insertions, deletions and searches in logarithmic time. Many different search trees have been developed and studied intensively in the literature. A discussion of balanced binary search trees can be found, e.g., in [4]. This chapter is devoted to finger search trees, which are search trees supporting fingers, i.e., pointers, to elements in the search trees and supporting efficient updates and searches in the vicinity of the fingers. If the sorted sequence is a static set of n elements, then a simple and space-efficient representation is a sorted array. Searches can be performed by binary search using 1 + ⌊log n⌋ comparisons (throughout this chapter we let log x denote log₂ max{2, x}). A finger search starting at a particular element of the array can be performed by an exponential search, inspecting elements at distance 2^i − 1 from the finger for increasing i, followed by a binary search in a range of 2^⌊log d⌋ − 1 elements, where d is the rank difference in the sequence between the finger and the search element. Figure 11.1 shows an exponential search for the element 42 starting at 5; in that example, d = 20. An exponential search requires
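A direct sketch of the exponential search just described (hypothetical helper names; assumes the target lies at or after the finger): probe positions at distances 2^i − 1 from the finger until one overshoots, then binary-search the bracketed range, for a total cost of O(log d).

    from bisect import bisect_left

    def finger_search(a, finger, x):
        """Rank of x in sorted list a, searching from index finger (x >= a[finger])."""
        i, lo = 0, finger
        while finger + 2 ** i - 1 < len(a) and a[finger + 2 ** i - 1] < x:
            lo = finger + 2 ** i - 1       # last probe known to be < x
            i += 1
        hi = min(finger + 2 ** i, len(a))  # first probe >= x, or the end
        return bisect_left(a, x, lo, hi)   # binary search the bracketed range

    a = list(range(0, 100, 5))               # 0, 5, 10, ..., 95
    print(finger_search(a, a.index(5), 42))  # insertion rank of 42: 9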
Deterministic algorithm for the t-threshold set problem
 Lecture Notes in Computer Science
, 2003
"... Abstract. Given k sorted arrays, the tThreshold problem, which is motivated by indexed search engines, consists of finding the elements which are present in at least t of the arrays. We present a new deterministic algorithm for it and prove that, asymptotically in the sizes of the arrays, it is opt ..."
Abstract

Cited by 4 (4 self)
 Add to MetaCart
Abstract. Given k sorted arrays, the t-Threshold problem, which is motivated by indexed search engines, consists of finding the elements which are present in at least t of the arrays. We present a new deterministic algorithm for it and prove that, asymptotically in the sizes of the arrays, it is optimal in the alternation model used to study adaptive algorithms. We define the Opt-Threshold problem as finding the smallest non-empty t-threshold set, which is equivalent to finding the largest t such that the t-threshold set is non-empty, and propose a naive algorithm to solve it.
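For comparison, a non-adaptive baseline is easy to write down (a hedged sketch, not the paper's algorithm; it assumes the elements within each array are distinct): k-way merge the sorted arrays and report every element that appears at least t times.

    import heapq

    def t_threshold(arrays, t):
        """Elements present in at least t of the k sorted arrays."""
        result, prev, count = [], object(), 0
        for x in heapq.merge(*arrays):     # lazy k-way merge of the sorted inputs
            if x == prev:
                count += 1
            else:
                prev, count = x, 1
            if count == t:                 # report each qualifying element once
                result.append(x)
        return result

    print(t_threshold([[1, 3, 5], [1, 2, 3], [3, 4, 5]], t=2))  # [1, 3, 5]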