Results 21 - 25 of 25
Implementing Bags on a Shared Memory MIMD Machine
University of Aachen
, 1992
Abstract

Cited by 1 (0 self)
Multisets (also called bags) are an interesting data structure for parallelly implemented functional programming languages, since they do not force an unneeded restriction of the data flow and allow as much parallelism as possible to be exploited. Most operations on multisets can be understood as special cases of the so-called Gamma scheme [BL90]. In the present paper, we investigate efficient implementations of several instances of this Gamma scheme on MIMD machines with shared memory.

1 Introduction

The ubiquitous data structure in functional (and logic) programming languages is the list. For a parallel implementation of such a language, however, lists have the drawback that they force sequential access to their elements. Hence, it is only interesting to use lists if the computation related to every element is large compared to the length of the list. Several researchers have investigated other applicative data structures that are more adequate for parallel implementations, among them ...
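The Gamma scheme mentioned above can be sketched in a few lines: a multiset evolves by repeatedly replacing any pair of elements that satisfies a reaction condition with the products of an action, until no pair reacts. The following sequential Python sketch is illustrative only (the names `gamma`, `condition`, and `action` are our own, not from the paper); a parallel MIMD implementation would fire many non-overlapping reactions at once.

```python
def gamma(multiset, condition, action):
    """Sequential sketch of the Gamma scheme: repeatedly replace a pair
    of elements satisfying `condition` with the elements produced by
    `action`, until no reacting pair remains."""
    ms = list(multiset)
    changed = True
    while changed:
        changed = False
        for i in range(len(ms)):
            for j in range(len(ms)):
                if i != j and condition(ms[i], ms[j]):
                    products = action(ms[i], ms[j])
                    # Remove the reacting pair, add the reaction products.
                    ms = [ms[k] for k in range(len(ms)) if k not in (i, j)]
                    ms += list(products)
                    changed = True
                    break
            if changed:
                break
    return ms

# Example reaction: a larger element "eats" a smaller one,
# so the stable multiset contains only the maximum.
print(gamma([3, 1, 4, 1, 5], lambda x, y: x >= y, lambda x, y: [x]))  # [5]
```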
Discrete Pattern Matching Over Sequences And Interval Sets
, 1993
Abstract

Cited by 1 (0 self)
Finding matches, both exact and approximate, between a sequence of symbols A and a pattern P has long been an active area of research in algorithm design. Some of the better-known by-products of that research are the diff program and the grep family of programs. These problems form a subdomain of a larger area of problems called discrete pattern matching, which has been developed recently to characterise the wide range of pattern matching problems. This dissertation presents new algorithms for discrete pattern matching over sequences and develops a new subdomain of problems called discrete pattern matching over interval sets.
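The approximate-matching problems this dissertation builds on are classically solved by the dynamic-programming edit-distance recurrence, the same machinery underlying tools in the diff family. A minimal standard sketch (not code from the dissertation):

```python
def edit_distance(a, b):
    """Levenshtein distance by dynamic programming:
    dp[i][j] = minimum number of insertions, deletions, and
    substitutions turning a[:i] into b[:j]."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j          # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitute / match
    return dp[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```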
Confluently Persistent Tries for Efficient Version Control
Abstract

Cited by 1 (1 self)
Abstract. We consider a data-structural problem motivated by version control of a hierarchical directory structure in a system like Subversion. The model is that directories and files can be moved and copied between two arbitrary versions, in addition to being added or removed in an arbitrary version. Equivalently, we wish to maintain a confluently persistent trie (where internal nodes represent directories, leaves represent files, and edge labels represent path names), subject to copying a subtree between two arbitrary versions, adding a new child to an existing node, and deleting an existing subtree in an arbitrary version. Our first data structure represents an n-node, degree-∆ trie with O(1) “fingers” in each version while supporting finger movement (navigation) and modifications near the fingers (including subtree copy) in O(lg ∆) time and space per operation. This data structure is essentially a locality-sensitive version of the standard practice, path copying, which costs O(d lg ∆) time and space for modification of a node at depth d and is therefore expensive when performing many deep but nearby updates. Our second data structure supports finger movement in O(lg ∆) time and no space, while modifications take O(lg n) time and space. This data structure is substantially faster for deep updates, i.e., unbalanced tries. Both of these data structures are functional, which is a stronger property than confluent persistence. Without this stronger property, we show how both data structures can be sped up to support movement in O(lg lg ∆), which is essentially optimal. Along the way, we present a general technique for global rebuilding of fully persistent data structures, which is nontrivial because amortization and persistence do not usually mix. In particular, this technique improves the best previous result for fully persistent arrays and obtains the first efficient fully persistent hash table.
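The "standard practice" baseline the abstract refers to, path copying, is easy to sketch: modifying a node at depth d copies the d nodes on its root path and shares every untouched subtree with the old version. The sketch below (our own illustration, not the paper's data structure) shows the O(d) node copies and the structural sharing between versions.

```python
class Node:
    """Immutable trie node; `children` maps an edge label to a Node."""
    def __init__(self, children=None):
        self.children = children or {}

def insert(root, path):
    """Path copying: return a new root that shares every subtree not on
    `path` with the old version. Only the nodes along `path` are copied,
    so a modification at depth d allocates O(d) nodes (times the cost of
    copying a degree-Delta child map in this naive dict-based sketch)."""
    if not path:
        return root
    head, rest = path[0], path[1:]
    child = root.children.get(head, Node())
    new_children = dict(root.children)       # copy this node only
    new_children[head] = insert(child, rest)  # recurse down the path
    return Node(new_children)

v0 = Node()                               # empty version
v1 = insert(v0, ["usr", "bin", "ls"])     # new version; v0 unchanged
v2 = insert(v1, ["usr", "bin", "cp"])     # shares the "ls" leaf with v1
```

Every version remains a valid root, which is exactly the persistence the paper then makes cheaper for deep, nearby updates.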
Aarhus University
Abstract
The flexibility of dynamically typed languages such as JavaScript, Python, Ruby, and Scheme comes at the cost of run-time type checks. Some of these checks can be eliminated via control-flow analysis. However, traditional control-flow analysis (CFA) is not ideal for this task, as it ignores flow-sensitive information that can be gained from dynamic type predicates, such as JavaScript’s instanceof and Scheme’s pair?, and from type-restricted operators, such as Scheme’s car. Yet adding flow-sensitivity to a traditional CFA worsens the already significant compile-time cost of traditional CFA. This makes it unsuitable for use in just-in-time compilers. In response, we have developed a fast, flow-sensitive type-recovery algorithm based on the linear-time, flow-insensitive sub-0CFA. The algorithm has been implemented as an experimental optimization for the commercial Chez Scheme compiler, where it has proven to be effective, justifying the elimination of about 60% of run-time type checks in a large set of benchmarks. The algorithm processes on average over 100,000 lines of code per second and scales well asymptotically, running in only O(n log n) time. We achieve this compile-time performance and scalability through a novel combination of data structures and algorithms.
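The flow-sensitive information the abstract describes can be illustrated with the core move of any such analysis: at a dynamic type predicate like pair?, the abstract environment splits, with the true branch narrowed to the tested type and the false branch excluding it. This is a toy sketch of that one idea (the function `narrow` and the set-of-type-names representation are our own illustration, not the paper's algorithm).

```python
def narrow(env, var, test_type):
    """Split an abstract environment (variable -> set of possible type
    names) at a dynamic type predicate: the true branch keeps only the
    tested type, the false branch removes it."""
    true_env = dict(env)
    true_env[var] = env[var] & {test_type}
    false_env = dict(env)
    false_env[var] = env[var] - {test_type}
    return true_env, false_env

# Before the test, x may be a pair, a number, or a string.
env = {"x": {"pair", "number", "string"}}
t, f = narrow(env, "x", "pair")
# In the true branch of (pair? x), x can only be a pair, so the
# run-time check guarding (car x) can be eliminated there.
print(sorted(t["x"]), sorted(f["x"]))
```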
Scheme Evolution and the Relational Algebra
, 1988
Abstract
In this paper we discuss extensions to the conventional relational algebra to support both aspects of transaction time: evolution of a database's contents and evolution of a database's scheme. We define a relation's scheme to be the relation's temporal signature, a function mapping the relation's attribute names onto their value domains, and its class, indicating the extent of support for time. We also introduce commands to change a relation, now defined as a triple consisting of a sequence of classes, a sequence of signatures, and a sequence of states. A semantic type system is required to identify semantically incorrect expressions and to enforce consistency constraints among a relation's class, signature, and state following an update. We show that these extensions are applicable, without change, to historical algebras that support valid time, yielding an algebraic language for the query and update of temporal databases. The additions preserve the useful properties of the conventional algebra. A database scheme describes the structure of the database; the contents of the database must adhere to that structure [Date 1976, Ullman 1982]. Scheme evolution refers to changes to the scheme of a database over time. Conventional databases allow only one scheme to be in force
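The triple the abstract defines, a sequence of classes, a sequence of signatures, and a sequence of states, can be pictured with a toy model in which a scheme change appends a new signature rather than destroying the old one, so earlier versions stay queryable. Everything below (the dict layout and the function `change_scheme`) is our own illustration under that reading, not the paper's formalism.

```python
# A relation as a triple of parallel sequences, one entry per version:
# classes (extent of time support), signatures (attribute -> domain),
# and states (the tuples themselves).
relation = {
    "classes":    ["snapshot"],
    "signatures": [{"name": str}],
    "states":     [[{"name": "alice"}]],
}

def change_scheme(rel, new_signature):
    """Scheme evolution: append a new signature and carry the current
    class and state forward, preserving every earlier version of the
    scheme alongside the new one."""
    rel["classes"].append(rel["classes"][-1])
    rel["signatures"].append(new_signature)
    rel["states"].append(list(rel["states"][-1]))
    return rel

change_scheme(relation, {"name": str, "age": int})
print(len(relation["signatures"]))  # 2: the old scheme is still in force
```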