Results 1–10 of 52
Skip Lists: A Probabilistic Alternative to Balanced Trees
1990
Cited by 330 (1 self)
Abstract:
Skip lists are a data structure that can be used in place of balanced trees. Skip lists use probabilistic balancing rather than strictly enforced balancing and as a result the algorithms for insertion and deletion in skip lists are much simpler and significantly faster than equivalent algorithms for balanced trees.
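The probabilistic balancing the abstract describes can be sketched in a few lines: each inserted node gets a random number of levels, so no rebalancing is ever performed. This is an illustrative sketch, not the paper's implementation.

```python
import random

class SkipNode:
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * level  # one forward pointer per level

class SkipList:
    """Minimal skip list supporting search and insert."""
    MAX_LEVEL = 16
    P = 0.5  # probability of promoting a node one level up

    def __init__(self):
        self.head = SkipNode(None, self.MAX_LEVEL)
        self.level = 1

    def _random_level(self):
        lvl = 1
        while random.random() < self.P and lvl < self.MAX_LEVEL:
            lvl += 1
        return lvl

    def search(self, key):
        node = self.head
        for i in range(self.level - 1, -1, -1):
            # descend a level whenever the next node would overshoot
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
        node = node.forward[0]
        return node is not None and node.key == key

    def insert(self, key):
        update = [self.head] * self.MAX_LEVEL
        node = self.head
        for i in range(self.level - 1, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node  # last node visited on level i
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = SkipNode(key, lvl)
        for i in range(lvl):
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new
```

Because levels are chosen by coin flips at insertion time, expected search and update cost is logarithmic without any rotation or rebalancing code.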
Randomized Search Trees
Algorithmica, 1996
Cited by 139 (1 self)
Abstract:
We present a randomized strategy for maintaining balance in dynamically changing search trees that has optimal expected behavior. In particular, in the expected case a search or an update takes logarithmic time, with the update requiring fewer than two rotations. Moreover, the update time remains logarithmic, even if the cost of a rotation is taken to be proportional to the size of the rotated subtree. Finger searches and splits and joins can be performed in optimal expected time also. We show that these results continue to hold even if very little true randomness is available, i.e., if only a logarithmic number of truly random bits are available. Our approach generalizes naturally to weighted trees, where the expected time bounds for accesses and updates again match the worst-case time bounds of the best deterministic methods. We also discuss ways of implementing our randomized strategy so that no explicit balance information is maintained. Our balancing strategy and our alg...
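The randomized strategy the abstract describes is commonly known as a treap: each node carries a random priority, and rotations restore heap order on priorities after an ordinary BST insertion. The sketch below is a minimal illustration, not the paper's implementation.

```python
import random

class Node:
    def __init__(self, key):
        self.key = key
        self.priority = random.random()  # random heap priority balances the tree
        self.left = self.right = None

def insert(root, key):
    """BST insert, then rotate to restore the heap order on priorities."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
        if root.left.priority > root.priority:   # rotate right
            pivot, root.left = root.left, root.left.right
            pivot.right = root
            return pivot
    else:
        root.right = insert(root.right, key)
        if root.right.priority > root.priority:  # rotate left
            pivot, root.right = root.right, root.right.left
            pivot.left = root
            return pivot
    return root

def search(root, key):
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False
```

Since the priorities are independent of the keys, even a sorted insertion sequence yields an expected logarithmic depth.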
Adaptive Functional Programming
In Proceedings of the 29th Annual ACM Symposium on Principles of Programming Languages, 2001
Cited by 64 (23 self)
Abstract:
An adaptive computation maintains the relationship between its input and output as the input changes. Although various techniques for adaptive computing have been proposed, they remain limited in their scope of applicability. We propose a general mechanism for adaptive computing that enables one to make any purely functional program adaptive. We show
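The core idea, adaptive computations built from modifiable references, can be caricatured in a few lines. This toy sketch (class and method names are hypothetical, and it omits the paper's dependence graphs entirely) simply re-runs registered readers when a value is written:

```python
class Mod:
    """A toy 'modifiable' reference: readers re-run when the value changes."""
    def __init__(self, value):
        self.value = value
        self.readers = []  # callbacks to re-run on every write

    def read(self, reader):
        self.readers.append(reader)
        reader(self.value)  # run once now, re-run on each change

    def write(self, value):
        self.value = value
        for reader in list(self.readers):
            reader(value)

# An adaptive computation: 'total' tracks the sum of 'inp' as it changes.
inp = Mod([1, 2, 3])
total = Mod(0)
inp.read(lambda xs: total.write(sum(xs)))
```

After `inp.write([1, 2, 3, 4])`, `total.value` is updated automatically; the real mechanism propagates changes selectively along recorded dependences rather than re-running everything.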
Specializing Shaders
In Computer Graphics Proceedings, Annual Conference Series, 1995
Cited by 49 (0 self)
Abstract:
Brian Guenter, Todd B. Knoblock, Erik Ruf (Microsoft Research). We have developed a system for interactive manipulation of shading parameters for three-dimensional rendering. The system takes as input user-defined shaders, written in a subset of C, which are then specialized for interactive use. Since users typically experiment with different values of a single shader parameter while leaving the others constant, we can benefit by automatically generating a specialized shader that performs only those computations depending on the parameter being varied; all other values needed by the shader can be precomputed and cached. The specialized shaders are as much as 95 times faster than the origi...
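The specialization idea, precompute and cache everything that does not depend on the one parameter being varied, can be illustrated with a toy diffuse shader (the paper specializes C shaders automatically; this hand-written Python sketch only shows the effect):

```python
def shade(ambient, diffuse, light_dir, normal):
    """Full shader: recomputes everything on every call."""
    ndotl = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return ambient + diffuse * ndotl

def specialize_diffuse(ambient, light_dir, normals):
    """Specialized for a varying 'diffuse': the per-pixel n.l terms,
    which do not depend on 'diffuse', are computed once and cached."""
    cached = [max(0.0, sum(n * l for n, l in zip(nrm, light_dir)))
              for nrm in normals]
    def shade_fast(diffuse):
        return [ambient + diffuse * ndotl for ndotl in cached]
    return shade_fast
```

Each call to `shade_fast` does one multiply-add per pixel instead of a full dot product, which is the source of the reported speedups when only one parameter varies.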
Static caching for incremental computation
ACM Trans. Program. Lang. Syst., 1998
Cited by 47 (19 self)
Abstract:
A systematic approach is given for deriving incremental programs that exploit caching. The cache-and-prune method presented in the article consists of three stages: (I) the original program is extended to cache the results of all its intermediate subcomputations as well as the final result, (II) the extended program is incrementalized so that computation on a new input can use all intermediate results on an old input, and (III) unused results cached by the extended program and maintained by the incremental program are pruned away, leaving a pruned extended program that caches only useful intermediate results and a pruned incremental program that uses and maintains only the useful results. All three stages utilize static analyses and semantics-preserving transformations. Stages I and III are simple, clean, and fully automatable. The overall method has a kind of optimality with respect to the techniques used in Stage II. The method can be applied straightforwardly to provide a systematic approach to program improvement via caching.
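Stages I and II can be illustrated on a toy program (function names are mine; the paper derives such programs by transformation rather than by hand):

```python
def average(xs):
    """Original program."""
    return sum(xs) / len(xs)

def average_extended(xs):
    """Stage I: also return the cached intermediate results
    (here the running sum and length)."""
    s, n = sum(xs), len(xs)
    return s / n, (s, n)

def average_incremental(x, cache):
    """Stage II: compute average(xs + [x]) from the cache for xs,
    without revisiting xs."""
    s, n = cache
    s, n = s + x, n + 1
    return s / n, (s, n)
```

Stage III would then prune any cached results the incremental program never reads; in this tiny example both cached values are used, so nothing is pruned.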
Selective Memoization
"... We present a framework for applying memoization selectively. The framework provides programmer control over equality, space usage, and identification of precise dependences so that memoization can be applied according to the needs of an application. Two key properties of the framework are that it ..."
Abstract

Cited by 44 (19 self)
 Add to MetaCart
We present a framework for applying memoization selectively. The framework provides programmer control over equality, space usage, and identification of precise dependences so that memoization can be applied according to the needs of an application. Two key properties of the framework are that it is efficient and yields programs whose performance can be analyzed using standard techniques. We describe the framework in the context of a functional language and an implementation as an SML library. The language is based on a modal type system and allows the programmer to express programs that reveal their true data dependences when executed. The SML implementation cannot support this modal type system statically, but instead employs runtime checks to ensure correct usage of primitives.
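The flavor of programmer-controlled memoization, keying the memo table on exactly the dependences that matter, can be sketched with a decorator (the paper uses a modal type system in SML; this Python sketch with hypothetical names only mimics the selective-keying aspect):

```python
def memoize_on(key_fn):
    """Memoize a function, keying the table on key_fn(args):
    only the dependences the programmer names are compared."""
    def decorator(fn):
        table = {}
        def wrapped(*args):
            key = key_fn(*args)
            if key not in table:
                table[key] = fn(*args)
            return table[key]
        wrapped.table = table
        return wrapped
    return decorator

# The result depends only on the data and cfg['threshold'], so we
# memoize on those alone and ignore the rest of the configuration.
@memoize_on(lambda xs, cfg: (tuple(xs), cfg['threshold']))
def count_above(xs, cfg):
    return sum(1 for x in xs if x > cfg['threshold'])
```

Two calls whose configurations differ only in irrelevant fields hit the same cache entry, which is the space and precision control the abstract refers to.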
Deriving incremental programs
1993
Cited by 39 (21 self)
Abstract:
A systematic approach is given for deriving incremental programs from non-incremental programs written in a standard functional programming language. We exploit a number of program analysis and transformation techniques and domain-specific knowledge, centered around effective utilization of caching, in order to provide a degree of incrementality not otherwise achievable by a generic incremental evaluator.
An experimental analysis of selfadjusting computation
In Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), 2006
Cited by 35 (19 self)
Abstract:
Self-adjusting computation uses a combination of dynamic dependence graphs and memoization to efficiently update the output of a program as the input changes incrementally or dynamically over time. Related work showed various theoretical results, indicating that the approach can be effective for a reasonably broad range of applications. In this article, we describe algorithms and implementation techniques to realize self-adjusting computation and present an experimental evaluation of the proposed approach on a variety of applications, ranging from simple list primitives to more sophisticated computational geometry algorithms. The results of the experiments show that the approach is effective in practice, often offering orders of magnitude speedup over recomputing the output from scratch. We believe this is the first experimental evidence that incremental computation of any type is effective in practice for a reasonably broad set of applications.
SelfAdjusting Computation
In ACM SIGPLAN Workshop on ML, 2005
Cited by 35 (13 self)
Abstract:
From the algorithmic perspective, we describe novel data structures for tracking the dependences in a computation and a change-propagation algorithm for adjusting computations to changes. We show that the overhead of our dependence-tracking techniques is O(1). To determine the effectiveness of change propagation, we present an analysis technique, called trace stability, and apply it to a number of applications.
Imperative selfadjusting computation
In POPL '08: Proceedings of the 35th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, 2008
Cited by 27 (16 self)
Abstract:
Recent work on self-adjusting computation showed how to systematically write programs that respond efficiently to incremental changes in their inputs. The idea is to represent changeable data using modifiable references, i.e., a special data structure that keeps track of dependencies between read and write operations, and to let computations construct traces that later, after changes have occurred, can drive a change-propagation algorithm. The approach has been shown to be effective for a variety of algorithmic problems, including some for which ad hoc solutions had previously remained elusive. All previous work on self-adjusting computation, however, relied on a purely functional programming model. In this paper, we show that it is possible to remove this limitation and support modifiable references that can be written multiple times. We formalize this using a language AIL for which we define evaluation and change-propagation semantics. AIL closely resembles a traditional higher-order imperative programming language. For AIL we state and prove consistency, i.e., the property that although the semantics is inherently nondeterministic, different evaluation paths will still give observationally equivalent results. In the imperative setting where pointer graphs in the store can form cycles, our previous proof techniques do not apply. Instead, we make use of a novel form of a step-indexed logical relation that handles modifiable references. We show that AIL can be realized efficiently by describing implementation strategies whose overhead is provably constant-time per primitive. When the number of reads and writes per modifiable is bounded by a constant, we can show that change propagation becomes as efficient as it was in the pure case. The general case incurs a slowdown that is logarithmic in the maximum number of such operations. We use DFS and related algorithms on graphs as our running examples and prove that they respond to insertions and deletions of edges efficiently.