Results 11–20 of 21
Adaptive (Analysis of) Algorithms for Convex Hulls and Related Problems
2008
Cited by 2 (0 self)
Abstract:
Adaptive analysis is a well-known technique in computational geometry, which refines the traditional worst-case analysis over all instances of fixed input size by taking into account some other parameters, such as the size of the output in the case of output-sensitive analysis. We present two adaptive techniques for the computation of the convex hull in two and three dimensions and related problems. The first analysis technique is based on the input order and yields results on the computation of convex hulls in two and three dimensions, and the first adaptive algorithm for Voronoi and Delaunay diagrams, through the entropy of a partition of the input into easier instances. The second analysis technique is based on the structural entropy of the instance, and yields results on the computational complexity of the planar convex hull and of multiset sorting, through a generalization of output sensitivity and a more precise analysis of the complexity of Kirkpatrick and Seidel's algorithm. Our approach yields adaptive algorithms which perform faster on many classes of instances, while performing asymptotically no worse in the worst case over all instances of fixed size.
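The output-sensitive analysis invoked above is easy to illustrate with a textbook example (not the paper's adaptive algorithm): the Jarvis march (gift wrapping) computes the planar convex hull in O(nh) time, where h is the size of the output.

```python
def jarvis_march(pts):
    """Textbook output-sensitive convex hull (gift wrapping).
    Runs in O(n*h) time, where h is the number of hull vertices,
    so it is fast exactly when the output is small."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def dist2(u, v):
        return (u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2

    if len(pts) < 3:
        return sorted(set(pts))
    hull, start = [], min(pts)        # lowest-leftmost point is on the hull
    p = start
    while True:
        hull.append(p)
        q = pts[0] if pts[0] != p else pts[1]
        for r in pts:
            c = cross(p, q, r)
            # take r if it lies to the right of p->q, or is collinear but farther
            if c < 0 or (c == 0 and dist2(p, r) > dist2(p, q)):
                q = r
        p = q
        if p == start:
            return hull
```

Each of the h hull vertices costs one O(n) wrapping step, hence O(nh) overall.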
An Adaptive Generic Sorting Algorithm that Uses Variable Partitioning
In preparation, 1992
Cited by 1 (1 self)
Abstract:
A sorting algorithm is adaptive if its run time for inputs of the same size n varies smoothly from O(n) to O(n log n) as the disorder of the input varies. It is well accepted that files that are already sorted are often sorted again and that many files occur naturally in a nearly sorted state. Recently, researchers have focused their attention on sorting algorithms that are optimally adaptive with respect to several measures of disorder (since the type of disorder in the input is unknown), illustrating a need to develop tools for constructing adaptive algorithms for large classes of measures. We present a generic sorting algorithm that uses divide-and-conquer in which the number of subproblems depends on the disorder of the input and for which we can establish adaptivity with respect to an abstract measure. We present applications of this generic algorithm obtaining optimal adaptivity for several specific measures of disorder. Moreover, we define a randomized version of our generic ...
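A minimal sketch of this kind of adaptivity (our own illustration, not the paper's generic algorithm): a natural mergesort splits the input into maximal ascending runs and merges them pairwise, costing O(n) on already-sorted input and O(n log k) for k runs.

```python
def merge(x, y):
    """Standard linear merge of two ascending runs."""
    out, i, j = [], 0, 0
    while i < len(x) and j < len(y):
        if x[i] <= y[j]:
            out.append(x[i]); i += 1
        else:
            out.append(y[j]); j += 1
    return out + x[i:] + y[j:]

def natural_mergesort(a):
    """Natural mergesort: O(n) comparisons on sorted input (one run),
    O(n log k) when the input decomposes into k maximal ascending runs."""
    runs, i = [], 0
    while i < len(a):                  # split into maximal ascending runs
        j = i + 1
        while j < len(a) and a[j - 1] <= a[j]:
            j += 1
        runs.append(list(a[i:j]))
        i = j
    while len(runs) > 1:               # merge runs pairwise
        runs = [merge(runs[k], runs[k + 1]) if k + 1 < len(runs) else runs[k]
                for k in range(0, len(runs), 2)]
    return runs[0] if runs else []
```

The number of runs is one of the measures of disorder the abstract alludes to.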
Partial Solution and Entropy
In MFCS 2009, LNCS 5734
Cited by 1 (1 self)
Abstract:
Abstract. If the given problem instance is partially solved, we want to minimize our effort to solve the problem using that information. In this paper we introduce the measure of entropy H(S) for uncertainty in partially solved input data S(X) = (X1, ..., Xk), where X is the entire data set and each Xi is already solved. We use the entropy measure to analyze three example problems: sorting, shortest paths and minimum spanning trees. For sorting, Xi is an ascending run, and for shortest paths, Xi is an acyclic part in the given graph. For minimum spanning trees, Xi is interpreted as a partially obtained minimum spanning tree for a subgraph. The entropy measure H(S) is defined by regarding p_i = |Xi|/|X| as a probability measure, that is, H(S) = -n Σ_{i=1}^{k} p_i log p_i, where n = Σ_{i=1}^{k} |Xi|. Then we show that we can sort the input data S(X) in O(H(S)) time, and solve the shortest path problem in O(m + H(S)) time, where m is the number of edges of the graph. Finally we show that the minimum spanning tree is computed in O(m + H(S)) time. Keywords: entropy, complexity, adaptive sort, minimal mergesort, ascending runs, shortest paths, nearly acyclic graphs, minimum spanning trees
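The definition of H(S) translates directly into code (the helper name is ours):

```python
import math

def partition_entropy(part_sizes):
    """H(S) = -n * sum(p_i * log p_i) with p_i = |Xi| / |X| and
    n = sum of the |Xi|, as defined in the abstract (log base 2)."""
    n = sum(part_sizes)
    return -n * sum((s / n) * math.log2(s / n) for s in part_sizes)
```

For one part (k = 1) the entropy is 0, and for k equal parts it is n log k, matching the intuition that merging k sorted runs of equal length costs Θ(n log k).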
Adaptive techniques to find optimal planar boxes
In CCCG, 2012
Cited by 1 (1 self)
Abstract:
Given a set P of n planar points, two axes and a real-valued score function f() on subsets of P, the Optimal Planar Box problem consists in finding a box (i.e. an axis-aligned rectangle) H maximizing f(H ∩ P). We consider the case where f() is monotone decomposable, i.e. there exists a composition function g() monotone in its two arguments such that f(A) = g(f(A1), f(A2)) for every subset A ⊆ P and every partition {A1, A2} of A. In this context we propose a solution for the Optimal Planar Box problem which performs in the worst case O(n^2 lg n) score compositions and coordinate comparisons, and much less on other classes of instances defined by various measures of difficulty. A side result of its own interest is a fully dynamic MCS Splay tree data structure supporting insertions and deletions with the dynamic finger property, improving upon previous results [Cortés et al., J. Alg. 2009].
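A toy instance of a monotone decomposable score function (the concrete choice here is ours, for illustration): counting the points that fall in the box, with g(a, b) = a + b as the composition function.

```python
def f(points):
    """Toy score of a point set: simply its size."""
    return len(points)

def g(a, b):
    """Composition function, monotone in both arguments."""
    return a + b

# Decomposability: for any partition {A1, A2} of A, f(A) = g(f(A1), f(A2)).
A = [(0, 0), (1, 2), (3, 1), (2, 2)]
A1, A2 = A[:2], A[2:]
assert f(A) == g(f(A1), f(A2))
```

Other decomposable scores fit the same mold, e.g. the maximum weight of a point with g = max.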
From Time to Space: Fast Algorithms that yield Small and Fast Data Structures
Cited by 1 (1 self)
Abstract:
Abstract. In many cases, the relation between encoding space and execution time translates into combinatorial lower bounds on the computational complexity of algorithms in the comparison or external memory models. We describe a few cases which illustrate this relation in a distinct direction, where fast algorithms inspire compressed encodings or data structures. In particular, we describe the relation between searching in an ordered array and encoding integers; merging sets and encoding a sequence of symbols; and sorting and compressing permutations.
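One concrete pairing behind this time/space correspondence (our illustration): doubling search finds a key at position p of a sorted array in O(log p) comparisons, and the Elias gamma code spends a matching 2⌊log2 p⌋ + 1 bits to encode the integer p.

```python
import bisect

def elias_gamma(p):
    """Gamma code of a positive integer p: floor(log2 p) zeros
    followed by the binary expansion of p."""
    b = bin(p)[2:]                     # binary expansion, no '0b' prefix
    return "0" * (len(b) - 1) + b

def doubling_search(a, key):
    """Insertion point of key in sorted list a, found with
    O(log p) comparisons where p is the returned position."""
    bound = 1
    while bound < len(a) and a[bound - 1] < key:
        bound *= 2                     # galloping phase
    return bisect.bisect_left(a, key, bound // 2, min(bound, len(a)))
```

Both procedures spend effort proportional to the logarithm of the answer, which is the kind of relation between searching and integer encoding the abstract describes.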
External Sorting and Nearly Sortedness
Abstract:
The availability of large main memories and the new technologies for disk drives have modified the models for external sorting and have renewed interest in their study. Little is known about the performance of traditional and more recent sorting methods on nearly sorted files, although such files are common in practice.
• We confirm mathematically that the lengths of the runs created by replacement selection during the first phase of external sorting increase as the order in the input file increases. Previous work has concentrated on the expected length of initial runs when all input files are equally likely to occur. It has long been accepted that when an input file has little disorder, the lengths of the generated runs will be long. We establish such results for two measures of disorder, namely, the number of ascending runs and the maximal distance between inversions.
• We demonstrate that, during the merging phase, the floating-buffers technique not only reduces the sorting ti...
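The run-formation phase under discussion can be sketched as follows (a simplification of replacement selection with an in-memory heap; on random input the expected run length is about twice the memory size, while on nearly sorted input the runs grow much longer):

```python
import heapq
from itertools import islice

def replacement_selection(stream, memory):
    """Sketch of phase-one run formation: a heap of at most `memory`
    records, keyed by (run number, key), emits each record into the
    lowest-numbered run it can still extend."""
    it = iter(stream)
    heap = [(0, x) for x in islice(it, memory)]
    heapq.heapify(heap)
    runs = []
    while heap:
        run, x = heapq.heappop(heap)
        if run == len(runs):
            runs.append([])            # open a new run
        runs[run].append(x)
        y = next(it, None)             # refill the freed memory slot
        if y is not None:
            # y extends the current run if it can follow x, else the next run
            heapq.heappush(heap, (run if y >= x else run + 1, y))
    return runs
```

An already-sorted input produces a single run, consistent with the result stated above that more order in the input yields longer runs.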
B.Tech. Project Report Part II
2002
Abstract:
In this paper we present two schemes to reduce the disorder of given elements and thus improve the performance of adaptive merge sorting. Adaptive sorting algorithms utilize the presortedness present in a given sequence. In the first scheme, the amount of presortedness present in a sequence is probabilistically increased by using a swapping technique that requires little computation. In the second scheme, alternate ascending and descending sequences present in the input are merged to decrease the disorder. In both cases the analysis depends on a beautiful result about the average behaviour of permutations, which is stated and proved in the paper.
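A hedged sketch in the spirit of the second scheme (the report's exact merging procedure may differ): reversing each maximal descending segment turns it into an ascending run, leaving the sequence with fewer, longer runs for an adaptive mergesort to finish.

```python
def count_runs(a):
    """Number of maximal ascending runs (the 'Runs' measure of disorder)."""
    return 1 + sum(a[i] > a[i + 1] for i in range(len(a) - 1)) if a else 0

def reverse_descending_segments(a):
    """Reverse every maximal descending segment, so that alternating
    ascending/descending stretches become fewer ascending runs."""
    a = list(a)
    i = 0
    while i < len(a) - 1:
        if a[i] > a[i + 1]:
            j = i + 1
            while j + 1 < len(a) and a[j] > a[j + 1]:
                j += 1
            a[i:j + 1] = reversed(a[i:j + 1])   # descending -> ascending
            i = j
        i += 1
    return a
```

The transformation preserves the multiset of elements while strictly lowering the run count whenever a descending segment of length ≥ 2 exists.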
Using Learning and Difficulty of Prediction to Decrease Computation: A Fast Sort and Priority Queue on Entropy Bounded Inputs ∗
Abstract:
There has recently been an upsurge of interest in the theoretical computer science community in Markov models and, more generally, stationary ergodic stochastic distributions (e.g. see [Vitter, Krishnan, FOCS91], [Karlin, Philips, Raghavan, FOCS92], [Raghavan92]) for the use of Markov models in online algorithms (e.g., caching and prefetching). Their results used the fact that compressible sources are predictable (and vice versa), and show that online algorithms can improve their performance by prediction. Actual page access sequences are in fact somewhat compressible, so their predictive methods can be of benefit. This paper investigates the interesting idea of decreasing computation by using learning in the opposite way, namely to determine the difficulty of prediction. That is, we approximately learn the input distribution, and then improve the performance of the computation when the input is not too predictable, rather than the reverse. To our knowledge, this is the first case of a computational problem where we do not assume any particular fixed input distribution and yet computation is decreased when the input is less predictable, rather than the reverse. We concentrate our investigation on a basic computational problem, sorting, and a basic data structure problem, maintaining a priority queue. We present the first known sorting and priority queue algorithms whose complexity depends on the binary entropy H ≤ 1 of the input keys, where we assume that the input keys are generated from an unknown but arbitrary stationary ergodic source. That is, we assume that each of the input keys can be arbitrarily long, but has entropy H. Note that H
Entropy as Computational Complexity
Abstract:
Abstract. If the given problem instance is partially solved, we want to minimize our effort to solve the problem using that information. In this paper we introduce the measure of entropy H(S) for uncertainty in partially solved input data S(X) = (X1, ..., Xk), where X is the entire data set and each Xi is already solved. We propose a generic algorithm that merges the Xi's repeatedly, and finishes when k becomes 1. We use the entropy measure to analyze three example problems: sorting, shortest paths and minimum spanning trees. For sorting, Xi is an ascending run, and for minimum spanning trees, Xi is interpreted as a partially obtained minimum spanning tree for a subgraph. For shortest paths, Xi is an acyclic part in the given graph. When k is small, the graph can be regarded as nearly acyclic. The entropy measure H(S) is defined by regarding p_i = |Xi|/|X| as a probability measure, that is, H(S) = -n Σ_{i=1}^{k} p_i log p_i, where n = Σ_{i=1}^{k} |Xi|. We show that we can sort the input data S(X) in O(H(S)) time, and that we can complete the minimum cost spanning tree in O(m + H(S)) time, where m is the number of edges. Then we solve the shortest path problem in O(m + H(S)) time. Finally we define a dual entropy on the partitioning process, whereby we give time bounds on a generic quicksort and on the shortest path problem for another kind of nearly acyclic graphs.
Check Sort: A New Improved 'Intelligent' Version of Circular Sort Algorithm
Abstract:
A new sorting algorithm, called here Check Sort, is described. It is a new improved, intelligent version of the Circular Sort algorithm. It is of interest for the following reasons: it retains all the favourable features of Circular Sort and at the same time removes its main unfavourable feature, namely use of extra space, by carrying out the sorting "in situ"; it is an intelligent sorting algorithm which first finds whether the input data is in roughly increasing, decreasing or random order and then applies a suitable sorting strategy; it is a comparison based general sorting algorithm; it does not put any restriction on the type of keys; it provides an example of an interesting application of a circular list. The Check Sort algorithm is compared with well known algorithms such as Insertion Sort, Heap Sort, Quick Sort; and also with some newer algorithms such as DeqSort, MinMaxSort, SublistMergeSort and Circular Sort in terms of times required to sort various input lists. Input lists ...