Results 11–20 of 105
Efficient Simulation of Multiple Cache Configurations using Binomial Trees
, 1991
"... Simulation time is often the bottleneck in the cache design process. In this paper, algorithms for the efficient simulation of direct mapped and set associative caches are presented. Two classes of direct mapped caches are considered: fixed line size caches and fixed size caches. A binomial tree rep ..."
Abstract

Cited by 19 (1 self)
 Add to MetaCart
(Show Context)
Simulation time is often the bottleneck in the cache design process. In this paper, algorithms for the efficient simulation of direct-mapped and set-associative caches are presented. Two classes of direct-mapped caches are considered: fixed-line-size caches and fixed-size caches. A binomial tree representation of the caches in each class is introduced. The fixed-line-size class is considered for set-associative caches. A generalization of the binomial tree data structure is introduced, and the fixed-line-size class of set-associative caches is represented using the generalized binomial tree. Algorithms are developed that use the data structures to determine miss ratios for the caches in each class. Analytical and empirical comparisons of the algorithms to previously published algorithms such as all-associativity and forest simulation are presented. Analytically, it is shown that the new algorithms always perform better than earlier algorithms. Empirically, the new algorithms are shown to...
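As a point of reference for what these algorithms improve on, the following is a minimal sketch of the naive baseline: re-simulating one address trace independently for each direct-mapped configuration. This is not the paper's binomial-tree algorithm; the function name and the toy trace are illustrative assumptions.

```python
# A naive baseline for comparing direct-mapped cache configurations:
# re-simulate the whole trace once per configuration. (Illustrative only;
# the paper's contribution is avoiding exactly this repeated work.)

def misses_direct_mapped(trace, num_lines, line_size):
    """Count misses for one direct-mapped cache over an address trace."""
    cache = {}  # set index -> tag currently resident in that line
    misses = 0
    for addr in trace:
        block = addr // line_size
        index = block % num_lines
        tag = block // num_lines
        if cache.get(index) != tag:   # miss: line empty or holds another tag
            misses += 1
            cache[index] = tag
    return misses

trace = [0, 64, 0, 128, 64, 0]        # toy trace of byte addresses
for lines in (2, 4, 8):               # fixed line size, varying cache size
    print(lines, misses_direct_mapped(trace, lines, 64))
```

With a fixed line size, larger caches in the class can only remove conflict misses, which is the kind of inclusion structure the paper's tree representation exploits.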
Purely Functional Random-Access Lists
 In Functional Programming Languages and Computer Architecture
, 1995
"... We present a new data structure, called a randomaccess list, that supports array lookup and update operations in O(log n) time, while simultaneously providing O(1) time list operations (cons, head, tail). A closer analysis of the array operations improves the bound to O(minfi; log ng) in the wor ..."
Abstract

Cited by 18 (2 self)
 Add to MetaCart
(Show Context)
We present a new data structure, called a random-access list, that supports array lookup and update operations in O(log n) time, while simultaneously providing O(1)-time list operations (cons, head, tail). A closer analysis of the array operations improves the bound to O(min{i, log n}) in the worst case and O(log i) in the expected case, where i is the index of the desired element. Empirical evidence suggests that this data structure should be quite efficient in practice.
1 Introduction
Lists are the primary data structure in every functional programmer's toolbox. They are simple, convenient, and usually quite efficient. The main drawback of lists is that accessing the ith element requires O(i) time. In such situations, functional programmers often find themselves longing for the efficient random access of arrays. Unfortunately, arrays can be quite awkward to implement in a functional setting, where previous versions of the array must be available even after an update. Since arra...
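The structure behind these bounds is a list of complete binary trees whose sizes follow the skew binary number system. A small sketch of the idea, written in Python with tuples standing in for the persistent nodes of an ML-family datatype (names are illustrative):

```python
# Skew binary random-access list: a list of (size, tree) pairs, where each
# tree is a complete binary tree ('leaf', x) or ('node', x, left, right).
# Sizes follow skew binary numbers, so cons touches at most two trees.

def cons(x, rl):
    if len(rl) >= 2 and rl[0][0] == rl[1][0]:    # two equal-size trees: combine
        w, t1 = rl[0]
        _, t2 = rl[1]
        return [(2 * w + 1, ('node', x, t1, t2))] + rl[2:]
    return [(1, ('leaf', x))] + rl

def head(rl):
    _, t = rl[0]
    return t[1]                                  # root element of first tree

def tail(rl):
    w, t = rl[0]
    if w == 1:
        return rl[1:]
    _, _x, left, right = t                       # split first tree back apart
    return [(w // 2, left), (w // 2, right)] + rl[1:]

def lookup(rl, i):
    for w, t in rl:                              # O(log n) trees to scan
        if i < w:
            return _tree_lookup(w, t, i)
        i -= w
    raise IndexError(i)

def _tree_lookup(w, t, i):
    if t[0] == 'leaf':
        return t[1]
    _, x, left, right = t
    if i == 0:
        return x
    half = w // 2
    if i <= half:
        return _tree_lookup(half, left, i - 1)
    return _tree_lookup(half, right, i - 1 - half)

xs = cons(1, cons(2, cons(3, [])))
print([lookup(xs, i) for i in range(3)])
```

Each `_tree_lookup` step halves the remaining tree, which is where the O(log n) lookup bound (and the sharper O(min{i, log n}) bound) comes from.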
Checking mergeable priority queues
 In Digest of the 24th Symposium on Fault-Tolerant Computing
, 1994
"... ..."
(Show Context)
Algorithms for Learning by Distances
 Information and Computation
, 2001
"... We consider the information complexity of learning in metric spaces. We discuss two models of such learning processes. The first one is the Learning By Distances (LBD) model of BenDavid et al [BIK]. In this model a concept is a point in a metric space, at each step of the learning process the st ..."
Abstract

Cited by 17 (0 self)
 Add to MetaCart
We consider the information complexity of learning in metric spaces. We discuss two models of such learning processes. The first one is the Learning By Distances (LBD) model of Ben-David et al. [BIK]. In this model, a concept is a point in a metric space; at each step of the learning process, the student offers a hypothesis and receives from the teacher an approximation of its distance to the target. We also present a new Relative Distances (RD) model. In this model, at each step, the student presents two points and receives a bit indicating which of them is closer to the target. We investigate the learning complexity in both models. We provide general lower and upper bounds on the complexity of learning concept classes in these models. We then analyze the complexity of several natural concept classes in two metric spaces: the space of Boolean formulas with the metric induced by the number of satisfying assignments, and spaces defined on graphs with the metric induced by the length of the shortest path between pairs of nodes.
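To make the RD model concrete, consider the simplest metric space: integer points on a line. There, a "which is closer?" query on two adjacent points acts as a comparison, so a target among n points can be learned with O(log n) queries by binary search. A hedged sketch (the line-metric setting and function names are illustrative, not taken from the paper):

```python
def learn_rd(closer, lo, hi):
    """Locate an unknown integer target in [lo, hi) using only
    relative-distance queries, as in the RD model: closer(a, b) is True
    iff a is at least as close to the hidden target as b is."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if closer(mid - 1, mid):     # target lies strictly below mid
            hi = mid
        else:                        # target is at mid or above
            lo = mid
    return lo

# Usage: the "teacher" answers queries about a hidden target.
target = 7
closer = lambda a, b: abs(a - target) <= abs(b - target)
print(learn_rd(closer, 0, 16))
```

Each query returns one bit, so this matches the information-theoretic lower bound of log n bits for this concept class.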
Theory of 2-3 Heaps
 In Computing and Combinatorics, volume 1627 of LNCS
, 1999
"... . As an alternative to the Fibonacci heap, we design a new data structure called a 23 heap, which supports m decreasekey, and n insert operations and deletemin operations in O(m + n log n) time. The merit of the 23 heap is that it is conceptually simpler and easier to implement. The new data ..."
Abstract

Cited by 17 (2 self)
 Add to MetaCart
(Show Context)
As an alternative to the Fibonacci heap, we design a new data structure called a 2-3 heap, which supports m decrease-key operations and n insert and delete-min operations in O(m + n log n) time. The merit of the 2-3 heap is that it is conceptually simpler and easier to implement. The new data structure will have a wide application in graph algorithms.
1 Introduction
Since Fredman and Tarjan [7] published Fibonacci heaps in 1987, there has not been an easy alternative that can support n insert and delete-min operations, and m decrease-key operations, in O(m + n log n) time. The relaxed heaps by Driscoll et al. [6] have the same overall complexity with decrease-key in O(1) worst-case time, but are difficult to implement. Logarithms here are base 2, unless otherwise specified. Two representative application areas for these operations are the single-source shortest path problem and the minimum cost spanning tree problem. Direct use of these operations in Dijkstra's [5] and Pri...
Optimal Purely Functional Priority Queues
 Journal of Functional Programming
, 1996
"... Brodal recently introduced the first implementation of imperative priority queues to support findMin, insert, and meld in O(1) worstcase time, and deleteMin in O(log n) worstcase time. These bounds are asymptotically optimal among all comparisonbased priority queues. In this paper, we adapt B ..."
Abstract

Cited by 17 (1 self)
 Add to MetaCart
(Show Context)
Brodal recently introduced the first implementation of imperative priority queues to support findMin, insert, and meld in O(1) worst-case time, and deleteMin in O(log n) worst-case time. These bounds are asymptotically optimal among all comparison-based priority queues. In this paper, we adapt Brodal's data structure to a purely functional setting. In doing so, we both simplify the data structure and clarify its relationship to the binomial queues of Vuillemin, which support all four operations in O(log n) time. Specifically, we derive our implementation from binomial queues in three steps: first, we reduce the running time of insert to O(1) by eliminating the possibility of cascading links; second, we reduce the running time of findMin to O(1) by adding a global root to hold the minimum element; and finally, we reduce the running time of meld to O(1) by allowing priority queues to contain other priority queues. Each of these steps is expressed using ML-style functors. The last transformation, known as data-structural bootstrapping, is an interesting application of higher-order functors and recursive structures.
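For readers who want the starting point of that derivation, here is a minimal sketch of Vuillemin's binomial queues, in which insert, meld, findMin, and deleteMin are all O(log n): the bounds that the paper's three transformations then improve. Written in Python with tuples as persistent trees; a functional-language version would use algebraic data types.

```python
# Vuillemin-style binomial queue: a heap is a list of binomial trees
# (rank, root, children), kept in increasing rank order; each tree's
# children are kept in decreasing rank order.

def link(t1, t2):
    """Combine two trees of equal rank, keeping the smaller root on top."""
    r, x, c1 = t1
    _, y, c2 = t2
    if x <= y:
        return (r + 1, x, [t2] + c1)
    return (r + 1, y, [t1] + c2)

def _ins_tree(t, ts):
    if not ts or t[0] < ts[0][0]:
        return [t] + ts
    return _ins_tree(link(t, ts[0]), ts[1:])     # carry, as in binary addition

def insert(x, ts):
    return _ins_tree((0, x, []), ts)

def meld(ts1, ts2):
    if not ts1:
        return ts2
    if not ts2:
        return ts1
    if ts1[0][0] < ts2[0][0]:
        return [ts1[0]] + meld(ts1[1:], ts2)
    if ts2[0][0] < ts1[0][0]:
        return [ts2[0]] + meld(ts1, ts2[1:])
    return _ins_tree(link(ts1[0], ts2[0]), meld(ts1[1:], ts2[1:]))

def find_min(ts):
    return min(root for _, root, _ in ts)        # scan O(log n) roots

def delete_min(ts):
    t = min(ts, key=lambda u: u[1])
    rest = [u for u in ts if u is not t]
    return meld(list(reversed(t[2])), rest)      # children back to increasing rank

h = []
for v in [5, 3, 8, 1, 9, 2]:
    h = insert(v, h)
print(find_min(h))
```

The cascading links in `_ins_tree` are precisely what the paper's first step eliminates to make insert O(1).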
A General Technique for Implementation of Efficient Priority Queues
 In Proc. 3rd Israel Symposium on Theory of Computing and Systems
, 1994
"... This paper presents a very general technique for the implementation of mergeable priority queues. The amortized running time is O(log n) for DeleteMin and Delete, and \Theta(1) for all other standard operations. In particular, the operation DecreaseKey runs in amortized constant time. The worstca ..."
Abstract

Cited by 15 (0 self)
 Add to MetaCart
This paper presents a very general technique for the implementation of mergeable priority queues. The amortized running time is O(log n) for DeleteMin and Delete, and Θ(1) for all other standard operations. In particular, the operation DecreaseKey runs in amortized constant time. The worst-case running time is O(log n) or better for all operations. Several examples of mergeable priority queues are given. The examples include priority queues that are particularly well suited for external storage. The space requirement is only two pointers and one information field per item. The technique is also used to implement mergeable, double-ended priority queues. For these queues, the worst-case time bound for insertion is Θ(1), which improves the best previously known bound. For the other operations, the time bounds are the same as the best previously known bounds, worst-case as well as amortized.
1 Introduction
A mergeable priority queue is one of the fundamental data types. It is used...
The Role of Lazy Evaluation in Amortized Data Structures
 In Proc. of the International Conference on Functional Programming
, 1996
"... Traditional techniques for designing and analyzing amortized data structures in an imperative setting are of limited use in a functional setting because they apply only to singlethreaded data structures, yet functional data structures can be nonsinglethreaded. In earlier work, we showed how lazy e ..."
Abstract

Cited by 14 (2 self)
 Add to MetaCart
(Show Context)
Traditional techniques for designing and analyzing amortized data structures in an imperative setting are of limited use in a functional setting because they apply only to single-threaded data structures, yet functional data structures can be non-single-threaded. In earlier work, we showed how lazy evaluation supports functional amortized data structures and described a technique (the banker's method) for analyzing such data structures. In this paper, we present a new analysis technique (the physicist's method) and show how one can sometimes derive a worst-case data structure from an amortized data structure by appropriately scheduling the premature execution of delayed components. We use these techniques to develop new implementations of FIFO queues and binomial queues.
1 Introduction
Functional programmers have long debated the relative merits of strict versus lazy evaluation. Although lazy evaluation has many benefits [11], strict evaluation is clearly superior in at least one area:...
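The FIFO queue in question is the classic two-list (front/rear) queue; the paper's contribution is analyzing its lazy variant and scheduling the reversal to obtain worst-case bounds. As background, a minimal sketch of the strict, amortized version, with Python tuples playing the role of persistent lists (the lazy suspensions themselves are not modeled here):

```python
# The classic two-list FIFO queue, strict version: a pair (front, rear) of
# immutable tuples, rear kept newest-first and reversed onto front when
# front runs empty. The paper's lazy and scheduled variants refine exactly
# this occasional O(n) reversal step.

def empty():
    return ((), ())

def is_empty(q):
    return q == ((), ())

def snoc(q, x):                      # enqueue at the rear, O(1)
    front, rear = q
    return _check((front, (x,) + rear))

def head(q):                         # oldest element, O(1)
    front, _ = q
    return front[0]

def tail(q):                         # dequeue, O(1) amortized
    front, rear = q
    return _check((front[1:], rear))

def _check(q):
    """Maintain the invariant: front is empty only if the queue is empty."""
    front, rear = q
    if front:
        return q
    return (tuple(reversed(rear)), ())

q = empty()
for x in (1, 2, 3):
    q = snoc(q, x)
print(head(q))
```

In a persistent (non-single-threaded) setting, the same pre-reversal queue can be forced to reverse many times, which is why the strict version's amortized analysis breaks down and lazy evaluation with memoized suspensions is needed.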
Fast Meldable Priority Queues
, 1995
"... We present priority queues that support the operations MakeQueue, FindMin, Insert and Meld in worst case time O(1) and Delete and DeleteMin in worst case time O(log n). They can be implemented on the pointer machine and require linear space. The time bounds are optimal for all implementations wh ..."
Abstract

Cited by 14 (2 self)
 Add to MetaCart
We present priority queues that support the operations MakeQueue, FindMin, Insert, and Meld in worst-case time O(1), and Delete and DeleteMin in worst-case time O(log n). They can be implemented on the pointer machine and require linear space. The time bounds are optimal for all implementations where Meld takes worst-case time o(n).