Results 1–10 of 14
Discrete Loops and Worst Case Performance
Computer Languages, 1994
Abstract

Cited by 15 (7 self)
In this paper so-called discrete loops are introduced which narrow the gap between general loops (e.g. while- or repeat-loops) and for-loops. Although discrete loops can be used for applications that would otherwise require general loops, discrete loops are known to terminate in any case. Furthermore it is possible to determine the number of iterations of a discrete loop, whereas this is trivial to do for for-loops and extremely difficult for general loops. Thus discrete loops form an ideal framework for determining the worst-case timing behavior of a program, and they are especially useful in implementing real-time systems and proving such systems correct.
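The central idea of this abstract, that a discrete loop terminates and its iteration count can be determined before the body runs because the control variable advances monotonically, can be sketched in a few lines. This is an illustrative model only; the function names and the doubling example are assumptions, not taken from the paper:

```python
def discrete_loop(start, bound, successor):
    """Yield loop-variable values start, successor(start), ...
    while they stay below bound. `successor` is assumed to be
    strictly increasing (successor(x) > x), which guarantees
    termination -- the property separating discrete loops from
    general while-loops."""
    x = start
    while x < bound:
        yield x
        nxt = successor(x)
        assert nxt > x, "successor must strictly increase"
        x = nxt

def iteration_count(start, bound, successor):
    """Number of iterations, computable without the loop body."""
    return sum(1 for _ in discrete_loop(start, bound, successor))

# Example: a doubling loop, as in binary search or heap traversal.
values = list(discrete_loop(1, 100, lambda x: 2 * x))
# values == [1, 2, 4, 8, 16, 32, 64]
```

For a worst-case execution time analysis, `iteration_count` is exactly the bound a WCET tool needs; a general while-loop offers no such computable bound.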
Data-Flow Frameworks for Worst-Case Execution Time Analysis
Real-Time Systems, 2000
Abstract

Cited by 12 (8 self)
The purpose of this paper is to introduce frameworks based on data-flow equations which provide for estimating the worst-case execution time (WCET) of (real-time) programs. These frameworks support several different WCET analysis techniques, ranging from naïve approaches to exact analysis, provided exact knowledge of the program behaviour is available. However, data-flow frameworks can also be used for symbolic analysis based on information derived automatically from the source code of the program. As a by-product we show that slightly modified elimination methods can be employed for solving WCET data-flow equations, while iteration algorithms cannot be used for this purpose.
Practical In-Place Mergesort
1996
Abstract

Cited by 10 (3 self)
Two in-place variants of the classical mergesort algorithm are analysed in detail. The first, straightforward variant performs at most N log₂ N + O(N) comparisons and 3N log₂ N + O(N) moves to sort N elements. The second, more advanced variant requires at most N log₂ N + O(N) comparisons and εN log₂ N moves, for any fixed ε > 0 and any N > N(ε). In theory, the second one is superior to advanced versions of heapsort. In practice, due to the overhead of the index manipulation, our fastest in-place mergesort is still about 50 per cent slower than bottom-up heapsort. However, our implementations are practical compared to mergesort algorithms based on in-place merging. Key words: sorting, mergesort, in-place algorithms. CR Classification: F.2.2
An In-Place Sorting with O(n log n) Comparisons and O(n) Moves
In Proc. 44th Annual IEEE Symposium on Foundations of Computer Science, 2003
Abstract

Cited by 9 (0 self)
Abstract. We present the first in-place algorithm for sorting an array of size n that performs, in the worst case, at most O(n log n) element comparisons and O(n) element transports. This solves a long-standing open problem, stated explicitly, e.g., in [J. I. Munro and V. Raman, Sorting with minimum data movement, J. Algorithms, 13, 374–393, 1992], of whether there exists a sorting algorithm that matches the asymptotic lower bounds on all computational resources simultaneously.
On the Performance of WEAK-HEAPSORT
2000
Abstract

Cited by 6 (2 self)
Dutton (1993) presents a further HEAPSORT variant called WEAK-HEAPSORT, which also contains a new data structure for priority queues. The sorting algorithm and the underlying data structure are analyzed, showing that WEAK-HEAPSORT is the best HEAPSORT variant and that it has a lot of nice properties. It is shown that the worst-case number of comparisons is n⌈log n⌉ − 2^⌈log n⌉ + n − ⌈log n⌉ ≤ n log n + 0.1n, and that weak heaps can be generated with n − 1 comparisons. A double-ended priority queue based on weak heaps can be generated in n + ⌈n/2⌉ − 2 comparisons. Moreover, examples for the worst and the best case of WEAK-HEAPSORT are presented, the number of weak heaps on {1, ..., n} is determined, and experiments on the average case are reported.
On the Number of Heaps and the Cost of Heap Construction
2001
Abstract

Cited by 6 (2 self)
Heaps constitute a well-known data structure allowing the implementation of an efficient O(n log n) sorting algorithm as well as the design of fast priority queues. Although heaps have long been known, their combinatorial properties are still only partially worked out: exact summation formulae have been stated, but most of the asymptotic behaviors are still unknown. In this paper, we present a number of general (not restricted to special subsequences) asymptotic results that give insight into the difficulties encountered in the asymptotic study of the number of heaps of a given size and of the cost of heap construction. In particular we exhibit the influence of arithmetic functions in the apparently chaotic behavior of these quantities. It is also shown that the distribution function of the cost of heap construction using Floyd's algorithm and other variants is asymptotically normal.
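Floyd's heap-construction algorithm analyzed in this abstract (sift down every internal node, from the last one up to the root, for O(n) total cost) can be sketched as follows. This is a textbook rendition for a binary max-heap, not code from the paper:

```python
def sift_down(a, i, n):
    """Move a[i] down until the max-heap property holds in a[0:n]."""
    while True:
        left, right = 2 * i + 1, 2 * i + 2
        largest = i
        if left < n and a[left] > a[largest]:
            largest = left
        if right < n and a[right] > a[largest]:
            largest = right
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest

def build_heap(a):
    """Floyd's bottom-up construction: sift down each internal
    node, last to first. Most nodes sit near the leaves and sink
    only a few levels, which is why the total cost is O(n)."""
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):
        sift_down(a, i, n)
    return a
```

The cost the paper studies is the (input-dependent) number of comparisons this construction performs, whose distribution is shown to be asymptotically normal.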
The Ultimate Heapsort
In Proceedings of Computing: the 4th Australasian Theory Symposium, Australian Computer Science Communications, 1998
Abstract

Cited by 4 (0 self)
A variant of Heapsort, named Ultimate Heapsort, is presented that sorts n elements in-place in Θ(n log₂(n+1)) worst-case time by performing at most n log₂ n + Θ(n) key comparisons and n log₂ n + Θ(n) element moves. The secret behind Ultimate Heapsort is that it occasionally transforms the heap it operates with into a two-layer heap which keeps small elements at the leaves. Basically, Ultimate Heapsort is like Bottom-Up Heapsort but, due to the two-layer heap property, an element taken from a leaf has to be moved towards the root only O(1) levels on average. Let a[1..n] be an array of n elements, each consisting of a key and some information associated with this key. This array is a (maximum) heap if, for all i ∈ {2, ..., n}, the key of element a[⌊i/2⌋] is larger than or equal to that of element a[i]. That is, a heap is a pointer-free representation of a left-complete binary tree, where the elements stored are partially ordered according to their keys. Ele...
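The 1-based heap definition quoted above (a[⌊i/2⌋] ≥ a[i] for all i in {2, ..., n}) translates directly into a small checker. This is an illustrative sketch on 0-based Python lists, not code from the paper:

```python
def is_max_heap(a):
    """Check the abstract's definition: in a 1-based array a[1..n],
    every element a[i] (i >= 2) is at most its parent a[i // 2].
    `a` is a normal 0-based Python list, so the element at 1-based
    index i lives at a[i - 1]."""
    n = len(a)
    return all(a[i // 2 - 1] >= a[i - 1] for i in range(2, n + 1))

# [9, 5, 8, 1, 4] satisfies the property; [1, 9, 8] violates it
# already at i = 2, since the root 1 is smaller than its child 9.
```

The "left-complete binary tree" phrasing means exactly this: the array layout encodes the tree shape, so no pointers are stored and the parent of 1-based index i is always index ⌊i/2⌋.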
Parallel Pointer-Based Join Algorithms in Memory-Mapped Environments
In Proc. of the 12th IEEE Int. Conf. on Data Engineering, 1996
Abstract

Cited by 3 (0 self)
Three pointer-based parallel join algorithms are presented and analyzed for environments in which secondary storage is made transparent to the programmer through memory mapping. Buhr, Goel, and Wai [11] have shown that data structures such as B-Trees, R-Trees and graph data structures can be implemented as efficiently and effectively in this environment as in a traditional environment using explicit I/O. Here we show how higher-order algorithms, in particular parallel join algorithms, behave in a memory-mapped environment. A quantitative analytical model has been developed to conduct performance analysis of the parallel join algorithms. The model has been validated by experiments. 1 Introduction. Programmers working with complex and possibly large persistent data structures are faced with the problem that there are two, mostly incompatible, views of structured data, namely data in primary and secondary storage. In primary storage, pointers are used to construct complex relationships a...
3 is a More Promising Algorithmic Parameter Than 2
Comput. Math. Appl., 1998
Abstract

Cited by 1 (0 self)
In this paper we have observed and shown that ternary systems are more promising than the more traditional binary systems used in computers. In particular, the ternary number system, heaps on ternary trees, and quicksort with 3 partitions do indicate some theoretical advantages over the more established binary systems. The magic Napierian constant e plays the crucial role in establishing the results. The experimental data, supporting the analysis, have also been presented. Keywords: Analysis of algorithms; Performance evaluation; Quicksort; Heaps; Divide and conquer technique. 1 Introduction. With the invention of computers, 2-parametric algebra, number systems and graphs, among other systems, started to flourish with accelerated speed. Boolean algebra got its important applications in computer technology, the binary number system has occupied the core of computer arithmetic, and binary trees have become inseparable in mathematical analysis...
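The binary-versus-ternary heap comparison this abstract alludes to can be made concrete with a d-ary sift-down, where d = 2 gives the classical binary heap and d = 3 the ternary one. This sketch is illustrative only and is not taken from the paper:

```python
def sift_down(a, i, n, d=3):
    """Sift-down for a d-ary max-heap stored in a[0:n]; the
    children of node i are at indices d*i + 1 .. d*i + d."""
    while True:
        first = d * i + 1
        largest = i
        for c in range(first, min(first + d, n)):
            if a[c] > a[largest]:
                largest = c
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest

def heapsort(a, d=3):
    """Heapsort on a d-ary heap. A ternary tree is shallower
    (depth about log3 n instead of log2 n), trading tree depth
    for more comparisons per level -- the kind of 2-vs-3
    trade-off the paper analyzes."""
    n = len(a)
    for i in range((n - 2) // d, -1, -1):   # build: last internal node down to root
        sift_down(a, i, n, d)
    for end in range(n - 1, 0, -1):         # repeatedly extract the maximum
        a[0], a[end] = a[end], a[0]
        sift_down(a, 0, end, d)
    return a
```

Running `heapsort(data, d=2)` and `heapsort(data, d=3)` sorts identically; only the comparison and move counts differ, which is where the constant factors involving e enter the analysis.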
Analysis of Algorithms (AofA): Part I: 1993–1998 ("Dagstuhl Period")
Abstract
This is the first installment of the Algorithmics Column dedicated to Analysis of Algorithms (AofA), which sometimes goes under the name Average-Case Analysis of Algorithms or Mathematical Analysis of Algorithms. The area of analysis of algorithms (at least, the way we understand it here) was born on July 27, 1963, when D. E. Knuth wrote his "Notes on Open Addressing". Since 1963 the field has been undergoing substantial changes. We report here how it has evolved since then. For a long time this area of research did not have a real "home". But in 1993 the first seminar entirely devoted to analysis of algorithms took place in Dagstuhl, Germany. Since then seven seminars have been organized, and in this column we briefly summarize the first three meetings held in Schloss Dagstuhl (thus the "Dagstuhl Period") and discuss various scientific activities that took place, describing some research problems, solutions, and open problems discussed during these meetings. In addition, we describe three special issues dedicated to these meetings.