Results 11–20 of 27
A Performance and Scalability Analysis Framework for Parallel Discrete Event Simulators
J. Cryptology, 1992
Abstract

Cited by 1 (0 self)
The development of efficient parallel discrete event simulators is hampered by the large number of interrelated factors affecting performance. This problem is made more difficult by the lack of scalable representative models that can be used to analyze optimizations and isolate bottlenecks. This paper proposes a performance and scalability analysis framework (PSAF) for parallel discrete event simulators. PSAF is built on a platform-independent workload specification language (WSL). WSL represents simulation models using a set of fundamental performance-critical parameters. For each simulator under study, a WSL translator generates synthetic platform-specific simulation models that conform to the performance and scalability characteristics specified by the WSL description. Moreover, sets of portable simulation models that explore the effects of the different parameters, individually or collectively, on execution performance can easily be constructed using the synthetic workload generator (SWG). The SWG automatically generates simulation workloads with different performance properties. In addition, PSAF supports the seamless integration of real simulation models into the workload specification. Thus, a benchmark with both real and synthetically generated models can be built, allowing for realistic and thorough exploration of the performance space. The utility of PSAF in determining the boundaries of performance and scalability of simulation environments and models is demonstrated.
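The synthetic-workload idea can be sketched in a few lines. The parameter names below (num_lps, fanout, lookahead, and so on) are invented for illustration; the abstract does not show WSL's actual syntax or parameter set.

```python
import random

# Hypothetical WSL-style parameter set; names are illustrative only,
# not the paper's actual specification language.
wsl_spec = {
    "num_lps": 8,          # logical processes in the simulated model
    "fanout": 3,           # communication partners per LP
    "lookahead": 1.5,      # scale of the minimum timestamp increment
}

def synth_workload(spec, n_events, seed=0):
    """Generate a synthetic event trace conforming to the spec:
    (timestamp, source LP, destination LP) tuples in time order."""
    rng = random.Random(seed)
    events, t = [], 0.0
    for _ in range(n_events):
        t += spec["lookahead"] * rng.expovariate(1.0)
        src = rng.randrange(spec["num_lps"])
        dst = (src + rng.randrange(1, spec["fanout"] + 1)) % spec["num_lps"]
        events.append((t, src, dst))
    return events
```

Varying the parameter dictionary and regenerating the trace is the kind of controlled performance-space exploration the abstract describes.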
Strict Fibonacci Heaps
STOC, 2012
Abstract
We present the first pointerbased heap implementation with time bounds matching those of Fibonacci heaps in the worst case. We support makeheap, insert, findmin, meld and decreasekey in worstcase O(1) time, and delete and deletemin in worstcase O(lgn) time, where n is the size of the heap. The data structure uses linear space. A previous, very complicated, solution achieving the same time bounds in the RAM model made essential use of arrays and extensive use of redundant counter schemes to maintain balance. Our solution uses neither. Our key simplification is to discard the structure of the smaller heap when doing a meld. We use the pigeonhole principle in place of the redundant counter mechanism.
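The meldable-heap interface listed above can be illustrated with a much simpler structure. The sketch below is a skew heap, not the paper's data structure: it supports make-heap, insert, find-min, meld and delete-min (decrease-key omitted), but only with amortized, not worst-case, bounds.

```python
class Node:
    """One node of a skew heap: a key and two child pointers."""
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def meld(a, b):
    # Merge two skew heaps; swapping children on the merge path
    # is what gives the amortized O(log n) bound.
    if a is None: return b
    if b is None: return a
    if b.key < a.key:
        a, b = b, a
    a.left, a.right = meld(a.right, b), a.left
    return a

def insert(h, key):                  # insert = meld with a singleton
    return meld(h, Node(key))

def find_min(h):                     # minimum is always at the root
    return h.key

def delete_min(h):                   # remove root, meld its children
    return h.key, meld(h.left, h.right)
```

The paper's contribution is achieving the same interface with worst-case rather than amortized bounds, plus O(1) decrease-key.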
The Asymptotic Number of Leftist Trees
2000
Abstract
It is shown that the number of leftist trees of size n in a simply generated family of trees is asymptotically given by c^n n^{-3/2}, up to a positive constant factor, where c > 1. Furthermore, it is proved that the number of leaves in leftist trees with n nodes satisfies a central limit theorem.

1 Introduction

Let T denote a rooted ordered tree, i.e. its branches, subbranches, etc. are ordered from left to right. The left branch length LBL(T) of T is defined as the distance d(r, a) from the root r to the leftmost leaf a of T. (The distance d(v, w) of two nodes v, w ∈ V(T) denotes the number of edges of the unique path containing v and w.) Furthermore, for every vertex v ∈ V(T), let T_v denote the subtree of T rooted at v. Finally, L(T) denotes the set of leaves of T. A rooted ordered tree T is called a leftist tree if for all v ∈ V(T)

LBL(T_v) = min_{w ∈ L(T_v)} d(v, w).

Binary leftist trees have been introduced by Crane [4]; see also Knuth [13, pp. 150–152, 157–158]. La...
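The leftist condition (at every vertex, the leftmost leaf is a closest leaf) is easy to check directly. A minimal sketch, with the assumption that an ordered tree is represented as a nested Python list of its children (a leaf is the empty list):

```python
def lbl(t):
    # Left branch length: depth of the leftmost leaf, following
    # first children until a leaf (empty list) is reached.
    d = 0
    while t:
        t = t[0]
        d += 1
    return d

def min_leaf_depth(t):
    # Distance from the root of t to its closest leaf.
    if not t:
        return 0
    return 1 + min(min_leaf_depth(c) for c in t)

def is_leftist(t):
    # Leftist: LBL(T_v) equals the minimum leaf distance at every v.
    if not t:
        return True
    return lbl(t) == min_leaf_depth(t) and all(is_leftist(c) for c in t)
```

For example, a root whose first child is a leaf is leftist, while a root whose first subtree is strictly deeper than a later leaf child is not.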
RESEARCH CONTRIBUTIONS Algorithms and Data Structures Pairing Heaps:
Abstract
ABSTRACT: The pairing heap has recently been introduced as a new data structure for priority queues. Pairing heaps are extremely simple to implement and seem to be very efficient in practice, but they are difficult to analyze theoretically, and open problems remain. It has been conjectured that they achieve the same amortized time bounds as Fibonacci heaps, namely, O(log n) time for delete and delete-min and O(1) for all other operations, where n is the size of the priority queue at the time of the operation. We provide empirical evidence that supports this conjecture. The most promising algorithm in our simulations is a new variant of the two-pass method, called auxiliary two-pass. We prove that, assuming no decrease-key operations are performed, it achieves the same amortized time bounds as Fibonacci heaps.
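The two-pass method referred to above is short enough to sketch. A minimal pairing heap, with the assumption that a node is a [key, children] list (decrease-key and delete are omitted):

```python
def meld(a, b):
    # Link two pairing heaps: the larger root becomes the first-listed
    # root's child.  None represents the empty heap.
    if a is None: return b
    if b is None: return a
    if b[0] < a[0]:
        a, b = b, a
    a[1].append(b)
    return a

def insert(h, key):
    return meld(h, [key, []])

def delete_min(h):
    key, kids = h
    # Pass 1: meld the children pairwise, left to right.
    paired = [meld(kids[i], kids[i + 1] if i + 1 < len(kids) else None)
              for i in range(0, len(kids), 2)]
    # Pass 2: combine the resulting trees right to left.
    new = None
    for t in reversed(paired):
        new = meld(new, t)
    return key, new
```

Repeated delete-min yields the keys in sorted order, which makes the structure easy to sanity-check despite its open analysis.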
General Terms Algorithms
Abstract
Red-black trees and leftist heaps are classic data structures that are commonly taught in Data Structures (CS2) and/or Algorithms (CS7) courses. This paper describes alternatives to these two data structures that may offer pedagogical advantages for typical students.
Correspondence Based Data Structures for Double Ended Priority Queues
Abstract
this paper is to demonstrate the generality of two techniques used in [6] to develop an MDEPQ representation from an MPQ representation based on height-biased leftist trees. These methods, total correspondence and leaf correspondence, may be used to arrive at efficient DEPQ and MDEPQ data structures from PQ and MPQ data structures such as the pairing heap [8, 18], Binomial and Fibonacci heaps [9], and Brodal's FMPQ [2], which also provide efficient support for the operation Delete(Q, p): delete and return the element located at p. We begin, in Section 2, by reviewing a rather straightforward way, dual priority queues, to obtain a (M)DEPQ structure from a (M)PQ structure. This method [2, 6] simply puts each element into both a min-PQ and a max-PQ. In Section 3, we describe the total correspondence method, and in Section 4, we describe leaf correspondence. Both sections provide examples of PQs and MPQs and the resulting DEPQs and MDEPQs. Section 5 gives complexity results. In Section 6, we provide the results of experiments that compare the performance of the MDEPQs based on height-biased leftist trees [7], pairing heaps [8, 18], and FMPQs [2]. For reference purposes, we also provide run times for the splay tree data structure [16]. Although splay trees were not specifically designed to represent DEPQs, it is easy to use them for this purpose. (Fig. 1: the dual heap structure, a min heap paired with a max heap.) Note that splay trees do not provide efficient support for the Meld operation
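The dual priority queue idea (each element placed in both a min-PQ and a max-PQ) can be sketched with binary heaps. The lazy-deletion bookkeeping below is an assumption of this sketch for keeping the two heaps consistent, not one of the paper's correspondence methods:

```python
import heapq
from collections import Counter

class DualPQ:
    """Dual-structure DEPQ sketch: every element is stored in both a
    min-heap and a (negated) max-heap; a removal on one side is
    recorded and applied lazily when it surfaces on the other side."""

    def __init__(self):
        self.lo, self.hi = [], []            # min-heap; max-heap (negated)
        self.dead_lo, self.dead_hi = Counter(), Counter()

    def insert(self, x):
        heapq.heappush(self.lo, x)
        heapq.heappush(self.hi, -x)

    @staticmethod
    def _purge(heap, dead):
        # Drop entries already deleted via the other heap.
        while heap and dead[heap[0]] > 0:
            dead[heap[0]] -= 1
            heapq.heappop(heap)

    def delete_min(self):
        self._purge(self.lo, self.dead_lo)
        x = heapq.heappop(self.lo)
        self.dead_hi[-x] += 1                # x is now dead in the max-heap
        return x

    def delete_max(self):
        self._purge(self.hi, self.dead_hi)
        x = -heapq.heappop(self.hi)
        self.dead_lo[x] += 1                 # x is now dead in the min-heap
        return x
```

This doubles the space and does every insert twice, which is exactly the overhead the total- and leaf-correspondence methods aim to reduce.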
Queries and Fault Tolerance
Abstract
The focus of this dissertation is on algorithms, in particular data structures that give provably efficient solutions for sequence analysis problems, range queries, and fault-tolerant computing. The work presented in this dissertation is divided into three parts. In Part I we consider algorithms for a range of sequence analysis problems that have arisen from applications in pattern matching, bioinformatics, and data mining. On a high level, each problem is defined by a function and some constraints, and the job at hand is to locate subsequences that score high with this function and are not invalidated by the constraints. Many variants and similar problems have been proposed, leading to several different approaches and algorithms. We consider problems where the function is the sum of the elements in the sequence and the constraints only bound the length of the subsequences considered. We give optimal algorithms for several variants of the problem based on a simple idea and classic algorithms and data structures. In Part II we consider range query data structures. This is a category of
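For the sum-scoring, length-constrained problem family described above, one classic approach combines prefix sums with a monotone deque. This is a generic sketch of that technique, not necessarily the dissertation's algorithm:

```python
from collections import deque

def max_sum_segment(a, lo, up):
    """Maximum sum of a contiguous segment of a whose length is
    between lo and up (inclusive), in O(n) time."""
    n = len(a)
    p = [0] * (n + 1)                    # prefix sums: p[j] = a[0]+...+a[j-1]
    for i, x in enumerate(a):
        p[i + 1] = p[i] + x
    best = float('-inf')
    q = deque()                          # candidate start indices i, p[i] increasing
    for j in range(lo, n + 1):
        i = j - lo                       # newest start index with length >= lo
        while q and p[q[-1]] >= p[i]:
            q.pop()                      # dominated starts can never win
        q.append(i)
        while q[0] < j - up:
            q.popleft()                  # starts that would exceed length up
        best = max(best, p[j] - p[q[0]])
    return best
```

For example, on [2, -1, 2, 3, -9, 4] with lengths between 1 and 3, the best segment is [2, 3] with sum 5.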
AND
1979
Abstract
This paper introduces a rather general technique for computing the average-case performance of dynamic data structures, subjected to arbitrary sequences of insert, delete, and search operations. The method allows us effectively to evaluate the integrated cost of various interesting data structure implementations, for stacks, dictionaries, symbol tables, priority queues, and linear lists; it can thus be used as a basis for measuring the efficiency of each proposed implementation. For each data type, a specific continued fraction and a family of orthogonal polynomials are associated with sequences of operations: Tchebycheff for stacks, Laguerre for dictionaries, Charlier for symbol tables, Hermite for priority queues, and Meixner for linear lists. Our main result is an explicit expression, for each of the above data types, of the generating function for integrated costs, as a linear integral transform of the generating functions for individual operation costs. We use the result to compute explicitly integrated costs of various implementations of dictionaries and priority queues.

1. INTRODUCTION

The purpose of this paper is to describe a rather general technique for computing the average cost of a sequence of operations, which is applicable to many of the interesting known implementations of data structures in computer science.