Results 1 – 8 of 8
A rational deconstruction of Landin’s SECD machine
 Implementation and Application of Functional Languages, 16th International Workshop, IFL’04, number 3474 in Lecture Notes in Computer Science
, 2004
Abstract

Cited by 33 (20 self)
Abstract. Landin’s SECD machine was the first abstract machine for applicative expressions, i.e., functional programs. Landin’s J operator was the first control operator for functional languages, and was specified by an extension of the SECD machine. We present a family of evaluation functions corresponding to this extension of the SECD machine, using a series of elementary transformations (transformation into continuation-passing style (CPS) and defunctionalization, chiefly) and their left inverses (transformation into direct style and refunctionalization). To this end, we modernize the SECD machine into a bisimilar one that operates in lockstep with the original one but that (1) does not use a data stack and (2) uses the caller-save rather than the callee-save convention for environments. We also identify that the dump component of the SECD machine is managed in a callee-save way. The caller-save counterpart of the modernized SECD machine precisely corresponds to Thielecke’s double-barrelled continuations and to Felleisen’s encoding of J in terms of call/cc. We then variously characterize the J operator in terms of CPS and in terms of delimited-control operators in the CPS hierarchy. As a byproduct, we also present several reduction semantics for applicative expressions.
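The two transformations named in this abstract can be sketched on a toy example. The following is a hedged illustration (not taken from the paper): a CPS transformation of a small arithmetic evaluator, followed by its defunctionalization into a first-order abstract machine. All names are illustrative.

```python
# Toy expression language: integer literals and addition.
from dataclasses import dataclass

@dataclass
class Lit:
    n: int

@dataclass
class Add:
    e1: object
    e2: object

# Direct-style evaluator.
def ev(e):
    if isinstance(e, Lit):
        return e.n
    return ev(e.e1) + ev(e.e2)

# 1. CPS transformation: evaluation order is made explicit by threading a
#    continuation k through the evaluator.
def ev_cps(e, k):
    if isinstance(e, Lit):
        return k(e.n)
    return ev_cps(e.e1, lambda v1: ev_cps(e.e2, lambda v2: k(v1 + v2)))

# 2. Defunctionalization: each continuation closure becomes a first-order
#    data constructor, and apply_k dispatches on them. The resulting pair of
#    mutually recursive functions is a small abstract machine, in the spirit
#    of the deconstruction described in the abstract.
@dataclass
class Halt:
    pass

@dataclass
class Arg:        # "evaluate e2 next, then continue with k"
    e2: object
    k: object

@dataclass
class Acc:        # "add the saved v1 to the incoming value, continue with k"
    v1: int
    k: object

def ev_def(e, k):
    if isinstance(e, Lit):
        return apply_k(k, e.n)
    return ev_def(e.e1, Arg(e.e2, k))

def apply_k(k, v):
    if isinstance(k, Halt):
        return v
    if isinstance(k, Arg):
        return ev_def(k.e2, Acc(v, k.k))
    return apply_k(k.k, k.v1 + v)
```

All three evaluators agree on every expression; the defunctionalized version makes the machine's control stack (the chain of `Arg`/`Acc` frames) explicit as data.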
Parallelism in sequential functional languages
 PROC. OF THE INT. CONF. ON
, 1995
Abstract

Cited by 22 (11 self)
This paper formally studies the question of how much parallelism is available in call-by-value functional languages with no parallel extensions (i.e., the functional subsets of ML or Scheme). In particular we are interested in placing bounds on how much parallelism is available for various problems. To do this we introduce a complexity model, the PAL, based on the call-by-value λ-calculus. The model is defined in terms of a profiling semantics and measures complexity in terms of the total work and the parallel depth of a computation. We describe a simulation of the APAL (the PAL extended with arithmetic operations) on various parallel machine models, including the butterfly, hypercube, and PRAM models and prove simulation bounds. In particular the simulations are work-efficient (the processor-time product on the machines is within a constant factor of the work on the APAL), and for p processors the slowdown (time on the machines divided by depth on the APAL) is proportional to at most O(log p). We also prove bounds for simulating the PRAM on the APAL. Based on the model, we describe and analyze tree-based versions of quicksort and merge sort. We show that for an input of size n these algorithms run on the APAL model with O(n log n) work and O(log² n) depth (expected case for quicksort).
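The work/depth measures used by this model can be illustrated on a balanced tree sum. The sketch below is a hedged approximation, not the paper's profiling semantics: work counts every addition performed, while depth counts only the longest chain of dependent additions, since independent subtrees may be evaluated in parallel.

```python
# Cost of summing n leaves with a balanced divide-and-conquer.
# Returns (work, depth); both counts are in units of one addition.
def tree_sum_cost(n):
    if n <= 1:
        return (0, 0)
    w1, d1 = tree_sum_cost(n // 2)
    w2, d2 = tree_sum_cost(n - n // 2)
    # The two halves run in parallel: work adds, depth takes the max,
    # and the final combining addition contributes 1 to each.
    return (w1 + w2 + 1, max(d1, d2) + 1)
```

For n = 8 leaves this gives 7 units of work but only depth 3 (= log₂ 8), which is the O(n) work / O(log n) depth pattern the model's algorithm analyses rely on.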
Implementing CCS, the LCS experiment
, 1989
Abstract

Cited by 6 (0 self)
machine is implemented with 8 registers and the stack. These registers are E (environment, a linked list), C (the cell packing the code), PC (program counter), R (resumption register, a stack height), P (process positions, encoded as pairs of values), L (process lines, encoded as linked lists), G (positions accumulator, encoded as P) and Rp (ready processes, encoded as a queue of processes); the stack provided by the Format layer encodes both the S and D abstract registers. There are about 40 basic instructions, plus a number of derived instructions representing frequent sequences of the former, and about 70 primitives (logic, arithmetic, input/output, etc.).
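The register layout described above can be rendered as a record, purely for illustration. This is a hedged sketch: the field types are guesses, and the actual LCS implementation uses machine registers and its own encodings rather than Python containers.

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class LCSMachine:
    e: list = field(default_factory=list)      # E: environment, a linked list
    c: list = field(default_factory=list)      # C: the cell packing the code
    pc: int = 0                                # PC: program counter
    r: int = 0                                 # R: resumption, a stack height
    p: list = field(default_factory=list)      # P: process positions, as pairs
    li: list = field(default_factory=list)     # L: process lines, linked lists
    g: list = field(default_factory=list)      # G: positions accumulator (like P)
    rp: deque = field(default_factory=deque)   # Rp: ready processes, a queue
    stack: list = field(default_factory=list)  # encodes the abstract S and D registers
```

Note how the single concrete stack stands in for two abstract registers (S and D), matching the encoding the abstract attributes to the Format layer.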
A Parallel Complexity Model for Functional Languages
 IN: PROC. ACM CONF. ON FUNCTIONAL PROGRAMMING LANGUAGES AND COMPUTER ARCHITECTURE
, 1994
Abstract

Cited by 5 (2 self)
A complexity model based on the λ-calculus with an appropriate operational semantics is presented and related to various parallel machine models, including the PRAM and hypercube models. The model is used to study parallel algorithms in the context of "sequential" functional languages, and to relate these results to algorithms designed directly for parallel machine models. For example, the paper shows that equally good upper bounds can be achieved for merging two sorted sequences in the pure λ-calculus with some arithmetic constants as in the EREW PRAM, when they are both mapped onto a more realistic machine such as a hypercube or butterfly network. In particular for n keys and p processors, they both result in an O(n/p + log² p) time algorithm. These results argue that it is possible to get good parallelism in functional languages without adding explicitly parallel constructs. In fact, the lack of random access seems to be a bigger problem than the lack of parallelism. This research...
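The O(n/p + log² p) merging bound quoted above can be instantiated numerically. The sketch below is purely illustrative: it takes the hidden constant as 1 and uses an integer log₂, just to show how the two terms trade off as p grows.

```python
def ilog2(x):
    # Floor of log base 2, computed over the integers.
    b = 0
    while x > 1:
        x //= 2
        b += 1
    return b

def merge_time(n, p):
    # Idealized step count for merging n keys on p processors,
    # with the constant factor taken as 1 for illustration only.
    return n // p + ilog2(p) ** 2
```

For example, 1024 keys on 16 processors gives 1024/16 + 4² = 80 idealized steps; with 256 processors the log² term (64) dominates the n/p term (4), showing why the bound flattens out once p is large relative to n.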
The Peter Landin prize
 HIGHER-ORDER SYMB COMPUT
, 2010
Abstract
The Peter Landin prize honours the best paper presented at each year’s International Symposium on the Implementation and Application of Functional Languages (IFL). It has been awarded every year since 2003, and covers a range of topics including functional operating systems, static analysis for cost information of functional programs, techniques to improve array processing for data locality and parallelism, explicit parallel coordination, supercompilation, and a rational deconstruction of Landin’s SECD machine itself. This article describes the history of the prize, explains why Peter Landin was chosen as nominee, and describes each of the articles that have been awarded the prize to date.
Accession For
, 1994
Abstract
A complexity model based on the λ-calculus with an appropriate operational semantics is presented and related to various parallel machine models, including the PRAM and hypercube models. The model is used to study parallel algorithms in the context of "sequential" functional languages, and to relate these results to algorithms designed directly for parallel machine models. For example, the paper shows that equally good upper bounds can be achieved for merging two sorted sequences in the pure λ-calculus with some arithmetic constants as in the EREW PRAM, when they are both mapped onto a more realistic machine such as a hypercube or butterfly network. In particular for n keys and p processors, they both result in an O(n/p + log² p) time algorithm. These results argue that it is possible to get good parallelism in functional languages without adding explicitly parallel constructs. In fact, the lack of random access seems to be a bigger problem than the lack of parallelism.