Results 11–20 of 36
Garbage Collection and Other Optimizations
, 1987
Abstract

Cited by 14 (0 self)
Existing techniques for garbage collection and machine code optimizations can interfere with each other. The inability to fully optimize code in a garbage-collected system is a hidden cost of garbage collection. One solution to this problem is proposed: an inexpensive protocol that permits most optimizations and garbage collection to coexist. A second approach to this problem, and a separate problem in its own right, is to reduce the need for garbage collection. This requires analysis of storage lifetime. Inferring storage lifetime is difficult in a language with nested and recursive data structures, but it is precisely these languages in which garbage collection is most useful. An improved analysis for "storage containment" is described. Containment information can be represented in a directed graph. The derivation of this graph falls into a monotone dataflow analysis framework; in addition, the derivation has the Church-Rosser property. The graphs produced in the analysis of a valuea...
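As a rough illustration of the containment idea (the names and structure below are ours, not the paper's algorithm): containment information can be kept as a directed graph whose edge x → y records that the storage named x may hold a reference to the storage named y, and transitive reachability then over-approximates which objects a root may keep live.

```python
# Minimal sketch of a "containment graph" (illustrative names, not the
# paper's analysis): an edge container -> contained records that the
# storage named `container` may hold a reference to `contained`.
# Reachability over-approximates what a root may keep live.
from collections import defaultdict

class ContainmentGraph:
    def __init__(self):
        self.edges = defaultdict(set)

    def add_containment(self, container, contained):
        """Record that `container` may hold a reference to `contained`."""
        self.edges[container].add(contained)

    def may_contain(self, container, target):
        """Conservatively decide, by depth-first reachability, whether
        `container` may transitively contain `target`."""
        seen, stack = set(), [container]
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            if node == target:
                return True
            stack.extend(self.edges[node])
        return False
```

For example, after recording that a pair contains its head and the head contains a leaf, `may_contain("pair", "leaf")` is true while the reverse query is false.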
Automatic accurate stack space and heap space analysis for high-level languages
, 2000
Abstract

Cited by 12 (7 self)
This paper describes a general approach for automatic and accurate space and space-bound analyses for high-level languages, considering stack space, heap allocation and live heap space usage of programs. The approach is based on program analysis and transformations and is fully automatic. The analyses produce accurate upper bounds in the presence of partially known input structures. The analyses have been implemented, and experimental results confirm the accuracy.
Real-Time Deques, Multihead Turing Machines, and Purely Functional Programming
 In Conference on Functional Programming Languages and Computer Architecture
, 1993
Abstract

Cited by 12 (1 self)
We answer the following question: Can a deque (double-ended queue) be implemented in a purely functional language such that each push or pop operation on either end of a queue is accomplished in O(1) time in the worst case? The answer is yes, thus solving a problem posed by Gajewska and Tarjan [14] and by Ponder, McGeer, and Ng [25], and refining results of Sarnak [26] and Hoogerwoord [18]. We term such a deque real-time, since its constant worst-case behavior might be useful in real-time programs (assuming real-time garbage collection [3], etc.). Furthermore, we show that no restriction of the functional language is necessary, and that push and pop operations on previous versions of a deque can also be achieved in constant time. We present a purely functional implementation of real-time deques and its complexity analysis. We then show that the implementation has some interesting implications, and can be used to give a real-time simulation of a multihead Turing machine in a purel...
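The flavor of a purely functional deque can be sketched with the classic two-list representation. Note this simplified variant gives only O(1) amortized bounds (rebalancing reverses one list into the other on demand), not the O(1) worst-case bounds the paper achieves with incremental rebalancing; the sketch is ours, not the paper's structure.

```python
# Purely functional deque as a pair (front list, back list), where a
# list is a persistent cons chain: None is empty, (x, rest) is a cell.
# O(1) amortized per operation; NOT the paper's O(1) worst-case design.

def cons(x, xs):
    return (x, xs)

def rev(xs, acc=None):
    """Reverse a cons chain (used when one end runs dry)."""
    while xs is not None:
        acc = (xs[0], acc)
        xs = xs[1]
    return acc

EMPTY = (None, None)

def push_front(d, x): return (cons(x, d[0]), d[1])
def push_back(d, x):  return (d[0], cons(x, d[1]))

def pop_front(d):
    front, back = d
    if front is None:                  # rebalance: reverse the back list
        front, back = rev(back), None
    if front is None:
        raise IndexError("pop from empty deque")
    return front[0], (front[1], back)

def pop_back(d):
    front, back = d
    if back is None:                   # rebalance: reverse the front list
        back, front = rev(front), None
    if back is None:
        raise IndexError("pop from empty deque")
    return back[0], (front, back[1])
```

Because every operation returns a fresh pair and never mutates a cons cell, pushes and pops on previous versions of a deque remain valid, matching the persistence property discussed in the abstract.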
The Naive Execution of Affine Recurrence Equations
 INTERNATIONAL CONFERENCE ON APPLICATION-SPECIFIC ARRAY PROCESSORS
, 1995
Abstract

Cited by 7 (4 self)
In recognition of the fundamental relation between regular arrays and systems of affine recurrence equations, the Alpha language was developed as the basis of a computer-aided design methodology for regular array architectures. Alpha is used to initially specify algorithms at a very high algorithmic level. Regular array architectures can then be derived from the algorithmic specification using a transformational approach supported by the Alpha environment. This design methodology guarantees the final design to be correct by construction, assuming the initial algorithm was correct. In this paper, we address the problem of validating an initial specification. We demonstrate a translation methodology which compiles Alpha into the imperative sequential language C. The C code may then be compiled and executed to test the specification. We show how an Alpha program can be naively implemented by viewing it as a set of monolithic arrays and their filling functions, implemented usin...
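The "monolithic arrays and their filling functions" view can be sketched with a stand-in affine recurrence (this is illustrative Python, not Alpha-generated C): each array cell is computed on demand by a memoized function encoding the recurrence's affine dependencies.

```python
# Demand-driven "filling function" for an affine recurrence (a stand-in
# example, not actual Alpha output).  The recurrence
#   C[i, j] = C[i-1, j-1] + C[i-1, j]
# has affine index dependencies; memoization fills the monolithic array
# cell by cell as cells are demanded.
from functools import lru_cache

@lru_cache(maxsize=None)
def C(i, j):
    if j == 0 or j == i:                    # boundary equations
        return 1
    return C(i - 1, j - 1) + C(i - 1, j)    # affine dependencies
```

Evaluating `C(5, 2)` demands only the cells it transitively depends on, which mirrors how a naive execution of a recurrence system tests the specification without first scheduling it.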
Staged Static Techniques to Efficiently Implement Array Copy Semantics in a MATLAB JIT Compiler
, 2010
Abstract

Cited by 6 (1 self)
Matlab has gained widespread acceptance among scientists. Several dynamic aspects of the language contribute to its appeal, but also provide many challenges. One such problem is caused by the copy semantics of Matlab. Existing Matlab systems rely on reference-counting schemes to create copies only when a shared array representation is updated. This reduces array copies, but requires runtime checks. We present a staged static analysis approach to determine when copies are not required. The first stage uses two simple, intraprocedural analyses, while the second stage combines a forward necessary copy analysis with a backward copy placement analysis. Our approach eliminates unneeded array copies without requiring reference counting or frequent runtime checks. We have implemented our approach in the McVM JIT. Our results demonstrate that, for our benchmark set, there are significant overheads for both existing reference-counted and naive copy-insertion approaches, and that our staged approach is effective in avoiding unnecessary copies.
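The reference-counted copy-on-write scheme that the paper's static analysis aims to avoid can be sketched as follows (illustrative Python, not McVM's implementation): arrays share a buffer on assignment and are physically copied only when a shared buffer is written, at the cost of a runtime check on every write.

```python
# Sketch of reference-counted copy-on-write semantics (illustrative,
# not McVM code).  Assignment shares the buffer; a write checks the
# shared count and copies only if the buffer is shared.  The paper's
# staged analysis decides statically when this check can be omitted.

class CowArray:
    def __init__(self, data, shared=None):
        self.data = data               # underlying buffer
        self.refs = shared or [1]      # shared reference-count cell

    def assign(self):
        """Matlab-style assignment b = a: share the buffer, bump refs."""
        self.refs[0] += 1
        return CowArray(self.data, self.refs)

    def write(self, i, v):
        """Write with the runtime check: copy first if shared."""
        if self.refs[0] > 1:           # shared -> copy before writing
            self.refs[0] -= 1
            self.data = list(self.data)
            self.refs = [1]
        self.data[i] = v
```

After `b = a.assign()` and `b.write(0, 99)`, `a` still sees the original contents; only the writer paid for a copy.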
An Integrated Framework For High-Level Synthesis Of Self-Timed Circuits
, 1992
Abstract

Cited by 5 (1 self)
Asynchronous/self-timed designs are beginning to attract attention as promising means of dealing with the complexity of modern VLSI technology. They are characterized by absence of global clocking and concurrency limited only by the data and control dependencies. They offer the advantages of simpler timing, the absence of clock-distribution-related problems, composability, opportunities for incremental improvement, and robustness. This dissertation addresses the issues underlying automating specification-driven design of self-timed circuits. An integrated collection of tools called the hopCP Design Environment is developed that facilitates the specification, simulation, analysis, and performance-directed synthesis of self-timed circuits. hopCP is a process-oriented concurrent HDL for the specification of asynchronous systems. It is equipped with constructs for specifying communication, synchronization and concurrency explicitly. It supports multicast communication, a restricted form ...
Fully Persistent Arrays for Efficient Incremental Updates and Voluminous Reads
 4th European Symposium on Programming
, 1992
Abstract

Cited by 5 (2 self)
The array update problem in a purely functional language is the following: once an array is updated, both the original array and the newly updated one must be preserved to maintain referential transparency. We devise a very simple, fully persistent data structure to tackle this problem such that
- each incremental update costs O(1) worst-case time,
- a voluminous sequence of r reads costs in total O(r) amortized time, and
- the data structure uses O(n + u) space, where n is the size of the array and u is the total number of updates.
A sequence of r reads is voluminous if r is Ω(n) and the sequence of arrays being read forms a path of length O(r) in the version tree. A voluminous sequence of reads may be mixed with updates without affecting either the performance of reads or updates. An immediate consequence of the above result is that if a functional program is single-threaded, then the data structure provides a simple and efficient implementation of funct...
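A much-simplified persistent array can be sketched by recording one (index, value) override per version (our illustration, not the paper's data structure, which additionally achieves the O(r) amortized bound for voluminous reads): updates are O(1), while reads in this naive version walk the override chain back to the base array.

```python
# Naive fully persistent array (illustrative only): each update is O(1)
# and yields a new version holding a single override plus a parent
# pointer; reads walk the chain of overrides.  The paper's structure is
# more refined, supporting O(r) amortized voluminous reads.

class PArray:
    def __init__(self, data, parent=None, index=None, value=None):
        self._data = data
        self._parent, self._index, self._value = parent, index, value

    @classmethod
    def from_list(cls, xs):
        return cls(list(xs))

    def update(self, i, v):
        """O(1): record the override, never mutate the base array."""
        return PArray(self._data, self, i, v)

    def read(self, i):
        node = self
        while node._parent is not None:
            if node._index == i:
                return node._value
            node = node._parent
        return node._data[i]
```

Every version remains readable after later updates, which is exactly the referential-transparency requirement stated above.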
Ypnos: Declarative, Parallel Structured Grid Programming
 In Proc. of the 5th ACM SIGPLAN workshop on Declarative Aspects of Multicore Programming
, 2010
Abstract

Cited by 5 (4 self)
A fully automatic, compiler-driven approach to parallelisation can result in unpredictable time and space costs for compiled code. On the other hand, a fully manual approach to parallelisation can be long, tedious, prone to errors, hard to debug, and often architecture-specific. We present a declarative domain-specific language, Ypnos, for expressing structured grid computations which encourages manual specification of causally sequential operations but then allows a simple, predictable, static analysis to generate optimised, parallel implementations. We introduce the language and provide some discussion on the theoretical aspects of the language semantics, particularly the structuring of computations around the category-theoretic notion of a comonad.
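The comonadic structuring of grid computations can be sketched in miniature (illustrative Python, not Ypnos syntax): a stencil is a function of a grid together with a cursor position, and an `extend`-style operator runs the stencil at every point to produce a new grid.

```python
# Tiny structured-grid sketch in the spirit of the comonadic view
# (our illustration, not Ypnos).  A stencil reads a neighbourhood
# around a cursor; `extend` applies it at every interior point,
# copying the boundary cells unchanged.

def extend(stencil, grid):
    """Run `stencil` at each interior index of a 1-D grid."""
    n = len(grid)
    return [grid[i] if i in (0, n - 1) else stencil(grid, i)
            for i in range(n)]

def average(grid, i):
    """A three-point smoothing stencil."""
    return (grid[i - 1] + grid[i] + grid[i + 1]) / 3.0
```

Because `extend` builds a fresh grid and the stencil only reads its neighbourhood, each application point is independent, which is what makes the static parallelisation predictable.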
Program development through proof transformation
 CONTEMPORARY MATHEMATICS
, 1990
Abstract

Cited by 4 (1 self)
We present a methodology for deriving verified programs that combines theorem proving and proof transformation steps. It extends the paradigm employed in systems like NuPrl, where a program is developed and verified through the proof of the specification in a constructive type theory. We illustrate our methodology through an extended example: a derivation of Warshall's algorithm for graph reachability. We also outline how our framework supports the definition, implementation, and use of abstract data types.
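For reference, the running example of the derivation, Warshall's algorithm for graph reachability, can be transcribed directly (our transcription, not the program extracted from the NuPrl-style proof):

```python
# Warshall's algorithm: transitive closure of a boolean adjacency
# matrix.  After processing intermediate vertex k, r[i][j] is true iff
# j is reachable from i using intermediates drawn from {0, ..., k}.

def warshall(adj):
    n = len(adj)
    r = [row[:] for row in adj]        # work on a copy
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r
```

The loop invariant stated in the comment is the kind of property a constructive proof of the reachability specification makes explicit.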
Efficient Type Inference Using Monads
, 1991
Abstract

Cited by 3 (0 self)
Efficient type inference algorithms are based on graph-rewriting techniques. Consequently, at first sight they seem unsuitable for functional language implementation. In fact, most compilers written in functional languages use substitution-based algorithms, at a considerable cost in performance. In this paper, we show how monads may be used to transform a substitution-based inference algorithm into one using a graph representation. The resulting algorithm is faster than the corresponding substitution-based algorithm. Monads are also used to develop a parallel substitution inference algorithm. Practical considerations limit the development of a parallel graph-rewriting algorithm in a functional language.

1 Introduction

Type inference accounts for much of the cost of compiling a functional program. Typically, around half of the total space and time requirements of a compilation in the Glasgow prototype Haskell compiler or the Chalmers LML compiler can be attributed directly to the costs o...
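The advantage of the graph representation over substitution can be sketched as follows (illustrative Python with mutable nodes, not the paper's monadic Haskell): representing type variables as union-find-style nodes turns unification into O(1) pointer updates, where a substitution-based algorithm would rebuild whole type terms on each binding.

```python
# Graph-based unification sketch (our illustration): type variables
# are nodes with a forwarding pointer; constructed types are tuples
# like ("arrow", dom, cod); base types are strings.  Binding a variable
# is a single pointer write instead of applying a substitution.

class TVar:
    def __init__(self, name):
        self.name = name
        self.link = None                  # forwarding pointer when bound

def find(t):
    """Chase forwarding pointers to the representative type."""
    while isinstance(t, TVar) and t.link is not None:
        t = t.link
    return t

def unify(a, b):
    a, b = find(a), find(b)
    if a is b:
        return
    if isinstance(a, TVar):
        a.link = b                        # O(1) update, no substitution pass
    elif isinstance(b, TVar):
        b.link = a
    elif (isinstance(a, tuple) and isinstance(b, tuple)
          and a[0] == b[0] and len(a) == len(b)):
        for x, y in zip(a[1:], b[1:]):
            unify(x, y)
    elif a != b:
        raise TypeError(f"cannot unify {a!r} with {b!r}")
```

Unifying `("arrow", x, "int")` with `("arrow", "bool", y)` binds `x` to `"bool"` and `y` to `"int"` with two pointer writes; the mutation is precisely what a state monad would encapsulate in a pure setting.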