Results 1 - 7 of 7
Freeze After Writing Quasi-Deterministic Parallel Programming with LVars
Abstract (Cited by 7, 3 self)
Deterministic-by-construction parallel programming models offer programmers the promise of freedom from subtle, hard-to-reproduce nondeterministic bugs in parallel code. A principled approach to deterministic-by-construction parallel programming with shared state is offered by LVars: shared memory locations whose semantics are defined in terms of a user-specified lattice. Writes to an LVar take the least upper bound of the old and new values with respect to the lattice, while reads from an LVar can observe only that its contents have crossed a specified threshold in the lattice. Although it guarantees determinism, this interface is quite limited. We extend LVars in two ways. First, we add the ability to “freeze” and then read the contents of an LVar directly. Second, we add the ability to attach callback functions to an LVar, allowing events to be triggered by writes to it. Together, callbacks and freezing enable an expressive and useful style of parallel programming. We prove that in a language where communication takes place through freezable LVars, programs are at worst quasi-deterministic: on every run, they either produce the same answer or raise an error. We demonstrate the viability of our approach by implementing a library for Haskell supporting a variety of LVar-based data structures, together with two case studies that illustrate the programming model and yield promising parallel speedup.
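The lattice semantics described in the abstract can be made concrete with a small sketch. This is not the paper's implementation (the authors' library is for Haskell); it is a minimal, illustrative Java analogue over the lattice of sets ordered by inclusion, where the least upper bound is set union. The class and method names (`SetLVar`, `put`, `getThreshold`, `freeze`) are hypothetical stand-ins for the paper's API.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch only: an LVar over the lattice of sets ordered by
// inclusion, where the least upper bound of two states is their union.
class SetLVar<T> {
    private final AtomicReference<Set<T>> state =
        new AtomicReference<>(new HashSet<>());
    private volatile boolean frozen = false;

    // put: join the new element into the current state (lub = set union).
    // Writing after a freeze raises an error, as in the quasi-deterministic
    // semantics the abstract describes.
    void put(T x) {
        if (frozen) throw new IllegalStateException("write after freeze");
        state.updateAndGet(old -> {
            Set<T> next = new HashSet<>(old);
            next.add(x);
            return next;
        });
    }

    // Threshold read: block (here, spin) until the contents have crossed the
    // threshold "contains x"; it reveals only that fact, not the full set.
    void getThreshold(T x) {
        while (!state.get().contains(x)) Thread.onSpinWait();
    }

    // Freeze: the paper's first extension -- after freezing, the exact
    // contents may be read directly.
    Set<T> freeze() {
        frozen = true;
        return state.get();
    }
}
```

Because `put` only ever moves the state up the lattice and `getThreshold` only observes a monotone property, concurrent interleavings cannot change what a threshold read eventually sees; only the freeze/write race can surface, as an error.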
RAY: Integrating Rx and Async for Direct-Style Reactive Streams
Abstract
Languages like F#, C#, and recently also Scala provide “async” extensions which aim to make asynchronous programming easier by avoiding the inversion of control that is inherent in traditional callback-based programming models (for the purposes of this paper called the “Async” model). This paper outlines a novel approach to integrating the Async model with the observable streams of the Reactive Extensions model, which is best known from the .NET platform and of which popular implementations exist for Java, Ruby, and other widespread languages. We outline the translation of “Reactive Async” programs to efficient state machines, in a way that generalizes the state machine translation of regular Async programs. Finally, we sketch a formalization of the Reactive Async model in terms of a small-step operational semantics.
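The inversion of control mentioned above can be illustrated with a minimal sketch. This is a hedged Java analogue, not the paper's Scala translation: `fetchCallback`, `fetchDirect`, and the doubling body are hypothetical stand-ins for some asynchronous operation, and `CompletableFuture` stands in for the async/await machinery.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.IntConsumer;

// Illustrative sketch only: the inversion of control that "async" extensions
// remove. Callback style threads the program's logic through nested
// continuations; the future-based style composes the same steps directly.
class AsyncStyles {
    // Callback style: the caller supplies a continuation k.
    static void fetchCallback(int x, IntConsumer k) { k.accept(x * 2); }

    static void sumCallback(IntConsumer k) {
        fetchCallback(1, a ->                 // logic nests inside callbacks
            fetchCallback(2, b -> k.accept(a + b)));
    }

    // Direct-ish style: composition without visible continuations, the
    // effect that async extensions (and this paper's Reactive Async model)
    // aim to give the programmer.
    static CompletableFuture<Integer> fetchDirect(int x) {
        return CompletableFuture.supplyAsync(() -> x * 2);
    }

    static CompletableFuture<Integer> sumDirect() {
        return fetchDirect(1).thenCombine(fetchDirect(2), Integer::sum);
    }
}
```

The paper goes further than this sketch: it compiles such direct-style code over whole observable streams into state machines rather than chained futures.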
Freeze After Writing Quasi-Deterministic Parallel Programming with LVars and Handlers
Abstract
Deterministic-by-construction parallel programming models offer programmers the promise of freedom from subtle, hard-to-reproduce nondeterministic bugs in parallel code. A principled approach to deterministic-by-construction parallel programming with shared state is offered by LVars: shared memory locations whose semantics are defined in terms of a user-specified lattice. Writes to an LVar take the least upper bound of the old and new values with respect to the lattice, while reads from an LVar can observe only that its contents have crossed a specified threshold in the lattice. Although it guarantees determinism, this interface is quite limited. We extend LVars in two ways. First, we add the ability to “freeze” and then read the contents of an LVar directly. Second, we add the ability to attach callback functions to an LVar, allowing events to be triggered by writes to it. Together, callbacks and freezing enable an expressive and useful style of parallel programming. We prove that in a language where communication takes place through freezable LVars, programs are at worst quasi-deterministic: on every run, they either produce the same answer or raise an error. We demonstrate the viability of our approach by implementing a library for Haskell supporting a variety of LVar-based data structures, together with two case studies that illustrate the programming model and yield promising parallel speedup.
Containers and Aggregates, Mutators and Isolates for Reactive Programming
Abstract
Many programs have an inherently reactive nature imposed by the functional dependencies between their data and external events. Classically, these dependencies are dealt with using callbacks. Reactive programming with first-class reactive values is a paradigm that aims to encode callback logic in declarative statements. Reactive values concisely define dependencies between singular data elements, but cannot efficiently express dependencies in larger datasets. Orthogonally, embedding reactive values in a shared-memory concurrency model convolutes their semantics and requires synchronization. This paper presents a generic framework for reactive programming that extends first-class reactive values with the concept of lazy reactive containers, backed by several concrete implementations. Our framework addresses concurrency by introducing reactive isolates. We show through examples that our programming model is efficient and convenient to use.
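The container idea above can be sketched minimally. This is a Java illustration, not the paper's Scala framework: observers subscribe to a whole container rather than to individual values, and elements already present are replayed so a late subscriber misses nothing. The names `ReactiveSet` and `onInsert` are hypothetical, not the paper's API, and concurrency (the paper's isolates) is deliberately left out.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Consumer;

// Illustrative sketch only: a single-threaded reactive set container.
// Dependencies on the whole dataset are one subscription, not one
// callback per element.
class ReactiveSet<T> {
    private final Set<T> elems = new LinkedHashSet<>();
    private final List<Consumer<T>> subs = new ArrayList<>();

    void add(T x) {
        if (elems.add(x))                   // notify only on real insertion
            subs.forEach(f -> f.accept(x));
    }

    void onInsert(Consumer<T> f) {
        subs.add(f);
        elems.forEach(f);                   // replay elements already present
    }
}
```

The replay step is what makes the container "lazy-friendly": subscription order and insertion order cannot cause an observer to miss data.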
SnapQueue: Lock-Free Queue with Constant Time Snapshots
Abstract
We introduce SnapQueues: concurrent, lock-free queues with a linearizable, lock-free global-state transition operation. This transition operation can atomically switch between arbitrary SnapQueue states, and is used by the enqueue, dequeue, snapshot, and concatenation operations. We show that implementing these operations efficiently depends on the persistent data structure at the core of the SnapQueue. This immutable support data structure is an interchangeable kernel of the SnapQueue, and drives its performance characteristics. The design allows reasoning about concurrent operation running time in a functional way, absent concurrency considerations. We present a support data structure that enables O(1) queue operations, O(1) snapshot, and O(log n) atomic concurrent concatenation. We show that the SnapQueue enqueue operation achieves up to 25% higher performance, while the dequeue operation has performance identical to standard lock-free concurrent queues.
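The role of the immutable support structure can be sketched as follows. This is not the SnapQueue algorithm; it is a deliberately simplified Java illustration (a stack rather than a queue, and one CAS-swapped reference rather than the paper's interchangeable kernel) of why a persistent core makes snapshots O(1): every state transition installs a complete immutable state, so a snapshot is a single read.

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch only: snapshotting via an immutable support structure.
// The entire state is a persistent cons list held in one AtomicReference;
// operations CAS from one complete state to the next.
class SnapshotStack<T> {
    // Persistent node: never mutated, so it is freely shared between states.
    record Node<T>(T head, Node<T> tail) {}

    private final AtomicReference<Node<T>> state = new AtomicReference<>(null);

    void push(T x) {
        state.updateAndGet(old -> new Node<>(x, old));
    }

    T pop() {
        Node<T> old = state.getAndUpdate(n -> n == null ? null : n.tail());
        return old == null ? null : old.head();
    }

    // O(1): the current state is already an immutable value; later pushes
    // and pops build new states and never disturb this one.
    Node<T> snapshot() {
        return state.get();
    }
}
```

This is the "reasoning in a functional way" the abstract mentions: the cost of each operation is the cost of building the next persistent state, independent of concurrent readers.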
Spores: A Type-Based Foundation for Closures in the Age of Concurrency and Distribution
Abstract
Functional programming (FP) is regularly touted as the way forward for bringing parallel, concurrent, and distributed programming to the mainstream. The popularity of the rationale behind this viewpoint (immutable data transformed by function application) has even led to a number of object-oriented (OO) programming languages adopting functional features such as lambdas (functions) and thereby function closures. However, despite this established viewpoint of FP as an enabler, reliably distributing function closures over a network, or using them in concurrent environments, nonetheless remains a challenge across FP and OO languages. This paper takes a step towards more principled distributed and concurrent programming by introducing a new closure-like abstraction and type system, called spores, that can guarantee closures to be serializable, thread-safe, or even to have general, custom user-defined properties. Crucially, our system is based on the principle of encoding type information corresponding to captured variables in the type of a spore. We prove our type system sound, implement our approach for Scala, evaluate its practicality through a small empirical study, and show the power of these guarantees through a case analysis of real-world distributed and concurrent frameworks that this safe foundation for migratable closures facilitates.
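The idea of surfacing a closure's captured environment in its type can be sketched in a few lines. This Java illustration is not the Scala spores implementation, and unlike real spores it cannot stop the lambda body from capturing extra state; it only shows the shape of the abstraction, with all names hypothetical. Because the environment type E is explicit, properties like serializability can be demanded of it with an ordinary type bound.

```java
import java.io.Serializable;
import java.util.function.BiFunction;

// Illustrative sketch only: a "spore" separates the captured environment E
// from the body, so constraints on what a closure carries can be expressed
// as constraints on E rather than on an opaque closure object.
class Spores {
    // E must be serializable -- the kind of guarantee spores encode in types.
    // (Real spores also check that the body captures nothing outside E;
    // plain Java cannot enforce that part.)
    record Spore<E extends Serializable, A, B>(E captured,
                                               BiFunction<E, A, B> body) {
        B apply(A a) { return body.apply(captured, a); }
    }

    static int demo() {
        // Captures only the serializable Integer 41, stated in the type.
        Spore<Integer, Integer, Integer> addN =
            new Spore<>(41, (n, x) -> n + x);
        return addN.apply(1);
    }
}
```

Shipping such a value to another machine means serializing `captured` plus a reference to the (stateless) body, which is exactly the migratable-closure use case the abstract targets.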