Results 1 - 10 of 15
The Soot framework for Java program analysis: a retrospective
"... Abstract—Soot is a successful framework for experimenting with compiler and software engineering techniques for Java programs. Researchers from around the world have implemented a wide range of research tools which build on Soot, and Soot has been widely used by students for both courses and thesis ..."
Abstract
-
Cited by 35 (10 self)
- Add to MetaCart
Abstract—Soot is a successful framework for experimenting with compiler and software engineering techniques for Java programs. Researchers from around the world have implemented a wide range of research tools which build on Soot, and Soot has been widely used by students for both courses and thesis research. In this paper, we describe relevant features of Soot, summarize its development process, and discuss useful features for future program analysis frameworks.
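As a rough illustration of how research tools typically build on Soot (a minimal sketch based on Soot's documented extension points, not an example from the paper), a tool can register a BodyTransformer on the Jimple transformation pack and then hand control to Soot's driver:

```java
import java.util.Map;

import soot.Body;
import soot.BodyTransformer;
import soot.PackManager;
import soot.Transform;

// Minimal sketch of a Soot-based analysis tool: it registers a body
// transformer that reports the number of Jimple statements per method.
// Assumes Soot is on the classpath; names follow Soot's public API.
public class StatementCounter extends BodyTransformer {

    @Override
    protected void internalTransform(Body body, String phase, Map<String, String> options) {
        int count = body.getUnits().size();   // Jimple statements in this method body
        System.out.println(body.getMethod().getSignature() + ": " + count + " units");
    }

    public static void main(String[] args) {
        // Register the transformer in the Jimple transformation pack ("jtp"),
        // then delegate to Soot's command-line driver for the remaining options.
        PackManager.v().getPack("jtp")
                .add(new Transform("jtp.stmtcount", new StatementCounter()));
        soot.Main.main(args);
    }
}
```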
Eventually consistent transactions
In ESOP, 2012
"... Abstract. When distributed clients query or update shared data, eventual consistency can provide better availability than strong consistency models. However, programming and implementing such systems can be difficult unless we establish a reasonable consistency model, i.e. some minimal guarantees th ..."
Abstract
-
Cited by 24 (6 self)
- Add to MetaCart
Abstract. When distributed clients query or update shared data, eventual consistency can provide better availability than strong consistency models. However, programming and implementing such systems can be difficult unless we establish a reasonable consistency model, i.e. some minimal guarantees that programmers can understand and systems can provide effectively. To this end, we propose a novel consistency model based on eventually consistent transactions. Unlike serializable transactions, eventually consistent transactions are ordered by two order relations (visibility and arbitration) rather than a single order relation. To demonstrate that eventually consistent transactions can be effectively implemented, we establish a handful of simple operational rules for managing replicas, versions and updates, based on graphs called revision diagrams. We prove that these rules are sufficient to guarantee correct implementation of eventually consistent transactions. Finally, we present two operational models (single server and server pool) of systems that provide eventually consistent transactions.
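A toy sketch of the two order relations the paper builds on: visibility (which updates a replica has seen) and arbitration (how conflicting updates are ordered). The fork/join replica and timestamp arbitration below are illustrative simplifications, not the paper's formal revision-diagram model.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Toy sketch (not the paper's formal model) of the two orders behind
// eventually consistent transactions: visibility decides which updates a
// replica has seen, arbitration (here: a timestamp) decides how conflicting
// updates are resolved when replicas are joined, revision-diagram style.
class Replica {
    // Each update carries an arbitration key (timestamp) and a value.
    record Update(long timestamp, String value) {}

    private final List<Update> visible = new ArrayList<>(); // visibility: local log

    void write(long timestamp, String value) {
        visible.add(new Update(timestamp, value));
    }

    String read() {
        // Arbitration: among all visible updates, the largest timestamp wins.
        return visible.stream()
                .max(Comparator.comparingLong(Update::timestamp))
                .map(Update::value)
                .orElse(null);
    }

    Replica fork() {                      // fork: child starts with the same visible set
        Replica child = new Replica();
        child.visible.addAll(this.visible);
        return child;
    }

    void join(Replica child) {            // join: the child's updates become visible here
        for (Update u : child.visible) {
            if (!visible.contains(u)) visible.add(u);
        }
    }

    public static void main(String[] args) {
        Replica main = new Replica();
        main.write(1, "a");
        Replica branch = main.fork();     // concurrent transaction on another replica
        branch.write(3, "c");
        main.write(2, "b");
        main.join(branch);                // after join, arbitration picks timestamp 3
        System.out.println(main.read());  // prints "c"
    }
}
```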
Parallel Schedule Synthesis for Attribute Grammars
In PPoPP 2013. PLDI+ECOOP Student Research Competition, 2nd Place
"... We examine how to synthesize a parallel schedule of structured traversals over trees. In our system, programs are declaratively specified as attribute grammars. Our synthesizer automatically, correctly, and quickly schedules the attribute grammar as a composition of parallel tree traversals. Our dow ..."
Abstract
-
Cited by 7 (2 self)
- Add to MetaCart
(Show Context)
We examine how to synthesize a parallel schedule of structured traversals over trees. In our system, programs are declaratively specified as attribute grammars. Our synthesizer automatically, correctly, and quickly schedules the attribute grammar as a composition of parallel tree traversals. Our downstream compiler optimizes for GPUs and multicore CPUs. We provide support for designing efficient schedules. First, we introduce a declarative language of schedules where programmers may constrain any part of the schedule and the synthesizer will complete and autotune the rest. Furthermore, the synthesizer answers debugging queries about how schedules may be completed. We evaluate our approach with two case studies. First, we created the first parallel schedule for a large fragment of CSS and report a 3X multicore speedup. Second, we created an interactive GPU-accelerated animation of over 100,000 nodes.
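A minimal sketch of what a schedule composed of parallel tree traversals can look like for a toy layout-style attribute grammar: one parallel bottom-up pass synthesizes widths, one parallel top-down pass inherits x positions. All names are hypothetical; this is not the paper's synthesizer or schedule language.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

// Hypothetical sketch of a schedule made of two parallel tree traversals for a
// toy layout attribute grammar: a bottom-up pass synthesizes each node's width,
// a top-down pass inherits each node's x position.
class LayoutNode {
    final List<LayoutNode> children = new ArrayList<>();
    int width;   // synthesized attribute (bottom-up)
    int x;       // inherited attribute (top-down)

    static final int LEAF_WIDTH = 10;

    // Traversal 1: bottom-up; children are evaluated in parallel before their parent.
    static class WidthPass extends RecursiveAction {
        final LayoutNode n;
        WidthPass(LayoutNode n) { this.n = n; }
        @Override protected void compute() {
            invokeAll(n.children.stream().map(WidthPass::new).toList());
            n.width = n.children.isEmpty()
                    ? LEAF_WIDTH
                    : n.children.stream().mapToInt(c -> c.width).sum();
        }
    }

    // Traversal 2: top-down; a parent fixes each child's x before the children recurse.
    static class PositionPass extends RecursiveAction {
        final LayoutNode n;
        PositionPass(LayoutNode n) { this.n = n; }
        @Override protected void compute() {
            int childX = n.x;
            List<PositionPass> tasks = new ArrayList<>();
            for (LayoutNode c : n.children) {
                c.x = childX;
                childX += c.width;
                tasks.add(new PositionPass(c));
            }
            invokeAll(tasks);
        }
    }

    public static void main(String[] args) {
        LayoutNode root = new LayoutNode();
        LayoutNode left = new LayoutNode(), right = new LayoutNode();
        root.children.add(left);
        root.children.add(right);
        right.children.add(new LayoutNode());
        right.children.add(new LayoutNode());

        ForkJoinPool pool = ForkJoinPool.commonPool();
        pool.invoke(new WidthPass(root));     // pass 1: parallel, bottom-up
        pool.invoke(new PositionPass(root));  // pass 2: parallel, top-down
        System.out.println("root width = " + root.width + ", right.x = " + right.x);
    }
}
```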
Type-Directed Automatic Incrementalization
"... Application data often changes slowly or incrementally over time. Since incremental changes to input often result in only small changes in output, it is often feasible to respond to such changes asymptotically more efficiently than by re-running the whole computation. Traditionally, realizing such a ..."
Abstract
-
Cited by 6 (3 self)
- Add to MetaCart
(Show Context)
Application data often changes slowly or incrementally over time. Since incremental changes to input often result in only small changes in output, it is often feasible to respond to such changes asymptotically more efficiently than by re-running the whole computation. Traditionally, realizing such asymptotic efficiency improvements requires designing problem-specific algorithms known as dynamic or incremental algorithms, which are often significantly more complicated than conventional algorithms to design, analyze, implement, and use. A long-standing open problem is to develop techniques that automatically transform conventional programs so that they correctly and efficiently respond to incremental changes. In this paper, we describe a significant step towards solving the problem of automatic incrementalization: a programming language and a compiler that can, given a few type annotations describing …
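A hand-written illustration of the incrementalization idea the paper automates (the names and the trivial sum example are assumptions, not the paper's language): keep enough intermediate state that a small input change updates the output by its delta instead of re-running the whole computation.

```java
import java.util.HashMap;
import java.util.Map;

// Hand-written illustration of incrementalization (the paper automates this
// via type annotations and a compiler): keep per-element partial results so
// that a small input change only recomputes the affected pieces.
class IncrementalSum {
    private final Map<Integer, Long> values = new HashMap<>(); // index -> input value
    private long total = 0;

    // From-scratch run: O(n).
    void initialize(long[] input) {
        values.clear();
        total = 0;
        for (int i = 0; i < input.length; i++) {
            values.put(i, input[i]);
            total += input[i];
        }
    }

    // Incremental run after a single-element change: O(1) instead of O(n).
    void update(int index, long newValue) {
        long old = values.getOrDefault(index, 0L);
        total += newValue - old;   // adjust the output by the delta only
        values.put(index, newValue);
    }

    long result() { return total; }

    public static void main(String[] args) {
        IncrementalSum sum = new IncrementalSum();
        sum.initialize(new long[] {1, 2, 3, 4});
        System.out.println(sum.result());   // 10
        sum.update(2, 30);                  // change one element
        System.out.println(sum.result());   // 37, without re-summing everything
    }
}
```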
The Design and Implementation of Clocked Variables in X10
"... This paper investigates the addition of Clocked Variables to the X10 Programming Language. Clocked Variables work well for primitives and objects with only primitive fields, but incur substantial performance penalties for more complex objects. We discuss ways to deal with these issues. 1. ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
This paper investigates the addition of Clocked Variables to the X10 Programming Language. Clocked Variables work well for primitives and objects with only primitive fields, but incur substantial performance penalties for more complex objects. We discuss ways to deal with these issues.
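A rough Java analogue of a clocked variable (hypothetical; not the paper's X10 implementation): reads see a stable current value within a phase, writes are staged into a next value, and the clock commits the swap when all activities advance. A Phaser stands in for the X10 clock.

```java
import java.util.concurrent.Phaser;

// Rough analogue of a clocked variable: readers see the stable "current" value
// during a phase, writers stage into "next", and the values swap when the
// clock advances. All names are illustrative.
class ClockedInt {
    private volatile int current;
    private volatile int next;

    ClockedInt(int initial) { current = initial; next = initial; }

    int read()            { return current; }   // phase-stable read
    void write(int value) { next = value; }     // staged write
    void advance()        { current = next; }   // committed once per clock tick

    public static void main(String[] args) throws InterruptedException {
        ClockedInt x = new ClockedInt(0);
        int workers = 2, phases = 3;

        // The Phaser plays the role of the clock; its onAdvance hook commits
        // the staged value exactly once per phase.
        Phaser clock = new Phaser(workers) {
            @Override protected boolean onAdvance(int phase, int parties) {
                x.advance();
                return phase + 1 >= phases;      // terminate after the last phase
            }
        };

        Runnable worker = () -> {
            for (int p = 0; p < phases; p++) {
                x.write(x.read() + 1);           // all workers see the same current value
                clock.arriveAndAwaitAdvance();   // wait for the clock to advance
            }
        };
        Thread a = new Thread(worker), b = new Thread(worker);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(x.read());   // 3: one increment is committed per phase
    }
}
```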
Functional Programming for Dynamic and Large Data with Self-Adjusting Computation
"... Combining type theory, language design, and empirical work, we present techniques for computing with large and dynamically changing datasets. Based on lambda calculus, our techniques are suitable for expressing a diverse set of algorithms on large datasets and, via self-adjusting computation, enable ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
Combining type theory, language design, and empirical work, we present techniques for computing with large and dynamically changing datasets. Based on lambda calculus, our techniques are suitable for expressing a diverse set of algorithms on large datasets and, via self-adjusting computation, enable computations to respond automatically to changes in their data. Compared to prior work, this work overcomes the main challenge of reducing the space usage of self-adjusting computation without disproportionately decreasing performance. To this end, we present a type system for precise dependency tracking that minimizes the time and space for storing dependency metadata. The type system eliminates an important assumption of prior work that can lead to recording of spurious dependencies. We give a new type-directed translation algorithm that generates correct self-adjusting programs without relying on this assumption. We then show a probabilistic chunking technique to further decrease space usage by controlling the fundamental space-time tradeoff in self-adjusting computation. We implement and evaluate these techniques, showing very promising results on challenging benchmarks and large graphs.
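A toy sketch of the dependency metadata self-adjusting computation records, greatly simplified (the paper's contribution is tracking these dependencies precisely via a type system and keeping the metadata compact): a modifiable cell remembers its readers so a write re-runs only the computations that depend on it.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Toy sketch of dependency tracking in self-adjusting computation. A Mod<T>
// records which derived cells read it, so writing it re-runs only those readers.
class Mod<T> {
    private T value;
    private final List<Derived<?>> readers = new ArrayList<>(); // dependency metadata

    Mod(T initial) { this.value = initial; }

    T read(Derived<?> reader) {          // a read registers a dependency edge
        if (!readers.contains(reader)) readers.add(reader);
        return value;
    }

    void write(T newValue) {             // a write propagates to dependents only
        this.value = newValue;
        for (Derived<?> r : readers) r.recompute();
    }
}

class Derived<T> {
    private final Function<Derived<T>, T> body;  // the computation to re-run on change
    private T cached;

    Derived(Function<Derived<T>, T> body) { this.body = body; recompute(); }

    void recompute() { cached = body.apply(this); }
    T get()          { return cached; }
}

class Demo {
    public static void main(String[] args) {
        Mod<Integer> a = new Mod<>(2);
        Mod<Integer> b = new Mod<>(3);
        // The derived cell reads a and b, registering itself as a reader of each.
        Derived<Integer> sum = new Derived<>(self -> a.read(self) + b.read(self));
        System.out.println(sum.get());   // 5
        a.write(10);                     // change propagation re-runs only readers of a
        System.out.println(sum.get());   // 13
    }
}
```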
Converting Data-Parallelism to Task-Parallelism by Rewrites -- Purely Functional Programs across Multiple GPUs
2015
"... High-level domain-specific languages for array processing on the GPU are increasingly common, but they typically only run on a single GPU. As computational power is distributed across more devices, languages must target multiple devices simultaneously. To this end, we present a compositional transla ..."
Abstract
- Add to MetaCart
High-level domain-specific languages for array processing on the GPU are increasingly common, but they typically only run on a single GPU. As computational power is distributed across more devices, languages must target multiple devices simultaneously. To this end, we present a compositional translation that fissions data-parallel programs in the Accelerate language, allowing subsequent compiler and runtime stages to map computations onto multiple devices for improved performance—even programs that begin as a single data-parallel kernel.
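A rough analogue of the fission idea in plain Java rather than Accelerate's Haskell EDSL (executor names and the chunking scheme are assumptions): one logical data-parallel map is split into chunks, each chunk runs on its own stand-in "device", and the partial results are reassembled.

```java
import java.util.Arrays;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.IntUnaryOperator;

// Rough analogue of fissioning one data-parallel map into per-device pieces:
// the input is split into chunks, each chunk is mapped on its own "device"
// (here an executor standing in for a GPU), and the results are reassembled.
class FissionedMap {
    static int[] mapAcrossDevices(int[] input, IntUnaryOperator f,
                                  ExecutorService[] devices) throws Exception {
        int n = input.length, d = devices.length;
        int[] output = new int[n];
        Future<?>[] pieces = new Future<?>[d];
        for (int i = 0; i < d; i++) {
            final int lo = i * n / d, hi = (i + 1) * n / d;   // this device's chunk
            pieces[i] = devices[i].submit(() -> {
                for (int j = lo; j < hi; j++) output[j] = f.applyAsInt(input[j]);
            });
        }
        for (Future<?> piece : pieces) piece.get();           // wait for all devices
        return output;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService[] devices = {                         // two stand-in "GPUs"
                Executors.newSingleThreadExecutor(),
                Executors.newSingleThreadExecutor()
        };
        int[] xs = {1, 2, 3, 4, 5, 6, 7, 8};
        int[] ys = mapAcrossDevices(xs, x -> x * x, devices); // one logical map, two devices
        System.out.println(Arrays.toString(ys));
        for (ExecutorService d : devices) d.shutdown();
    }
}
```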
Incremental Runtime-generation of Optimisation Problems using RAG-controlled Rewriting
"... Abstract-In the era of Internet of Things, software systems need to interact with many physical entities and cope with new requirements at runtime. Self-adaptive systems aim to tackle those challenges, often representing their context with a runtime model enabling better reasoning capabilities. How ..."
Abstract
- Add to MetaCart
(Show Context)
Abstract. In the era of the Internet of Things, software systems need to interact with many physical entities and cope with new requirements at runtime. Self-adaptive systems aim to tackle those challenges, often representing their context with a runtime model that enables better reasoning capabilities. However, those models quickly grow in size and need to be updated frequently with small changes, because a large number of physical entities change constantly. This situation threatens the efficacy of analyses on such models, as they lack efficient management of those changes, leading to unnecessary computation overhead. We propose applying scalable, incremental change management of runtime models in the presence of a complex model-to-text transformation. In this paper, we present and evaluate an example of code generation of integer linear programs. In our case study using synthesized models, we saved 35-83% of processing time compared to a non-incremental approach. Using our approach, future self-adaptive systems can handle and analyze large-scale runtime models, even if they change frequently.
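A sketch of the incremental model-to-text idea using hypothetical names (the paper's mechanism is reference attribute grammars with RAG-controlled rewriting, not shown here): each model element owns one generated ILP fragment, so a small model change regenerates only that fragment while the cached remainder is reused.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of incremental model-to-text generation. Each model element owns one
// generated ILP constraint line; when an element changes, only its fragment is
// regenerated and the cached rest is reused. Names and the constraint format
// are illustrative.
class IncrementalIlpGenerator {
    // modelElementId -> generated constraint text
    private final Map<String, String> fragments = new LinkedHashMap<>();

    // (Re)generate the fragment for a single element, e.g. a capacity constraint.
    void elementChanged(String elementId, int demand, int capacity) {
        fragments.put(elementId,
                "c_" + elementId + ": " + demand + " x_" + elementId + " <= " + capacity + ";");
    }

    void elementRemoved(String elementId) {
        fragments.remove(elementId);
    }

    // Assemble the full ILP text from cached fragments; unchanged ones are reused.
    String emit() {
        StringBuilder sb = new StringBuilder("min: 0;\n");
        for (String fragment : fragments.values()) sb.append(fragment).append('\n');
        return sb.toString();
    }

    public static void main(String[] args) {
        IncrementalIlpGenerator gen = new IncrementalIlpGenerator();
        gen.elementChanged("sensor1", 2, 10);
        gen.elementChanged("sensor2", 3, 12);
        System.out.println(gen.emit());
        gen.elementChanged("sensor2", 4, 12);   // small model change: one fragment redone
        System.out.println(gen.emit());
    }
}
```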
iThreads: A Threading Library for Parallel Incremental Computation
"... Abstract Incremental computation strives for efficient successive runs of applications by re-executing only those parts of the computation that are affected by a given input change instead of recomputing everything from scratch. To realize these benefits automatically, we describe iThreads, a threa ..."
Abstract
- Add to MetaCart
(Show Context)
Abstract. Incremental computation strives for efficient successive runs of applications by re-executing only those parts of the computation that are affected by a given input change instead of recomputing everything from scratch. To realize these benefits automatically, we describe iThreads, a threading library for parallel incremental computation. iThreads supports unmodified shared-memory multithreaded programs: it can be used as a replacement for pthreads by a simple exchange of dynamically linked libraries, without even recompiling the application code. To enable such an interface, we designed algorithms and an implementation to operate at the compiled binary code level by leveraging MMU-assisted memory access tracking and process-based thread isolation. Our evaluation on a multicore platform using applications from the PARSEC and Phoenix benchmarks and two case studies shows significant performance gains.
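A simplified sketch of the bookkeeping such a library needs, with the tracking written out by hand (iThreads itself does this transparently for unmodified binaries via MMU-assisted page tracking; the names below are hypothetical): each sub-computation records what it read, and after an input change only the affected ones re-execute.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.HashSet;
import java.util.Set;
import java.util.function.Function;

// Simplified sketch of incremental re-execution: a task remembers its read set
// and cached result, so after an input change only tasks whose reads overlap
// the changed locations run again. Tracking here is manual; iThreads does it
// transparently at the binary level.
class IncrementalTask {
    private final String name;
    private final Set<String> readSet = new HashSet<>(); // locations read last run
    private Integer cachedResult;

    IncrementalTask(String name) { this.name = name; }

    int run(Map<String, Integer> memory, Set<String> dirty,
            Function<Map<String, Integer>, Integer> body) {
        boolean affected = cachedResult == null || readSet.stream().anyMatch(dirty::contains);
        if (!affected) {
            System.out.println(name + ": reused cached result");
            return cachedResult;
        }
        readSet.clear();
        // Wrap the memory so reads are recorded, then re-execute the task body.
        Map<String, Integer> tracked = new HashMap<>(memory) {
            @Override public Integer get(Object key) {
                readSet.add((String) key);
                return super.get(key);
            }
        };
        cachedResult = body.apply(tracked);
        System.out.println(name + ": re-executed");
        return cachedResult;
    }

    public static void main(String[] args) {
        Map<String, Integer> memory = new HashMap<>(Map.of("a", 1, "b", 2, "c", 3));
        IncrementalTask t1 = new IncrementalTask("sum(a,b)");
        IncrementalTask t2 = new IncrementalTask("double(c)");
        Function<Map<String, Integer>, Integer> sumAB = m -> m.get("a") + m.get("b");
        Function<Map<String, Integer>, Integer> doubleC = m -> 2 * m.get("c");

        Set<String> dirty = Set.of("a", "b", "c");          // first run: everything is new
        t1.run(memory, dirty, sumAB);
        t2.run(memory, dirty, doubleC);

        memory.put("c", 30);                                // small input change
        dirty = Set.of("c");
        t1.run(memory, dirty, sumAB);                        // reused, not re-executed
        System.out.println(t2.run(memory, dirty, doubleC));  // re-executed: 60
    }
}
```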
Categories and Subject Descriptors D.3.3 [Programming Languages]: Language
"... Many big data computations involve processing data that changes incrementallyordynamicallyovertime.Usingexistingtechniques, suchcomputationsquicklybecomeimpractical.Forexample,computing the frequency of words in the first ten thousand paragraphs of a publicly available Wikipedia data set in a stream ..."
Abstract
- Add to MetaCart
Many big data computations involve processing data that changes incrementally or dynamically over time. Using existing techniques, such computations quickly become impractical. For example, computing the frequency of words in the first ten thousand paragraphs of a publicly available Wikipedia data set in a streaming fashion using MapReduce can take as much as a full day. In this paper, we propose an approach based on self-adjusting computation that can dramatically improve the efficiency of such computations. As an example, we can perform the aforementioned streaming computation in just a couple of minutes.
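The word-frequency example from the abstract, done incrementally in a plain Java sketch (this illustrates the general idea, not the paper's self-adjusting MapReduce implementation): counts are folded in per arriving paragraph, so processing new input costs time proportional to the change rather than to all data seen so far.

```java
import java.util.HashMap;
import java.util.Map;

// Plain sketch of incremental word counting: counts are updated by the delta
// of each newly arriving (or retracted) paragraph instead of recomputing the
// frequencies over the entire data set.
class IncrementalWordCount {
    private final Map<String, Long> counts = new HashMap<>();

    void addParagraph(String paragraph) {
        for (String word : paragraph.toLowerCase().split("\\W+")) {
            if (!word.isEmpty()) counts.merge(word, 1L, Long::sum);
        }
    }

    void removeParagraph(String paragraph) {   // a retraction is the same delta, negated
        for (String word : paragraph.toLowerCase().split("\\W+")) {
            if (!word.isEmpty()) counts.merge(word, -1L, Long::sum);
        }
    }

    long count(String word) { return counts.getOrDefault(word.toLowerCase(), 0L); }

    public static void main(String[] args) {
        IncrementalWordCount wc = new IncrementalWordCount();
        wc.addParagraph("the quick brown fox");
        wc.addParagraph("the lazy dog and the fox");
        System.out.println(wc.count("the"));    // 3
        wc.removeParagraph("the quick brown fox");
        System.out.println(wc.count("the"));    // 2: only the delta was processed
    }
}
```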