Results 1 – 6 of 6
EnerJ: Approximate Data Types for Safe and General Low-Power Computation
"... Energy is increasingly a firstorder concern in computer systems. Exploiting energyaccuracy tradeoffs is an attractive choice in applications that can tolerate inaccuracies. Recent work has explored exposing this tradeoff in programming models. A key challenge, though, is how to isolate parts of ..."
Abstract

Cited by 26 (8 self)
 Add to MetaCart
Energy is increasingly a first-order concern in computer systems. Exploiting energy-accuracy tradeoffs is an attractive choice in applications that can tolerate inaccuracies. Recent work has explored exposing this tradeoff in programming models. A key challenge, though, is how to isolate parts of the program that must be precise from those that can be approximated so that a program functions correctly even as quality of service degrades. We propose using type qualifiers to declare data that may be subject to approximate computation. Using these types, the system automatically maps approximate variables to low-power storage, uses low-power operations, and even applies more energy-efficient algorithms provided by the programmer. In addition, the system can statically guarantee isolation of the precise program component from the approximate component. This allows a programmer to control explicitly how information flows from approximate data to precise data. Importantly, employing static analysis eliminates the need for dynamic checks, further improving energy savings. As a proof of concept, we develop EnerJ, an extension to Java that adds approximate data types. We also propose a hardware architecture that offers explicit approximate storage and computation. We port several applications to EnerJ and show that our extensions are expressive and effective; a small number of annotations lead to significant potential energy savings (10%–50%) at very little accuracy cost.
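EnerJ's qualifiers are enforced by an extended Java type checker, so the sketch below is not EnerJ code: it is plain Java in which a locally defined @Approx annotation and an identity endorse() method stand in for the checked qualifier and endorsement construct, purely to illustrate the discipline the abstract describes.

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    public class ApproxDemo {
        // Stand-in for EnerJ's @Approx type qualifier (unchecked here).
        @Retention(RetentionPolicy.SOURCE)
        @Target({ElementType.FIELD, ElementType.LOCAL_VARIABLE, ElementType.PARAMETER})
        @interface Approx {}

        public static void main(String[] args) {
            @Approx float brightness = 0.8f;  // could live in low-power storage
            float threshold = 0.5f;           // precise by default

            // Precise-to-approximate flow is always allowed:
            @Approx float scaled = brightness * threshold;

            // Approximate-to-precise flow must be an explicit, deliberate act;
            // EnerJ rejects it at compile time without an endorsement.
            boolean bright = endorse(scaled) > threshold;
            System.out.println("bright = " + bright);
        }

        // Identity method standing in for EnerJ's endorse() operation.
        static float endorse(@Approx float v) { return v; }
    }

In real EnerJ the checker statically rejects any unendorsed flow from approximate to precise data; the plain-Java stand-ins above enforce nothing and only document intent.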
Managing performance vs. accuracy tradeoffs with loop perforation
 In SIGSOFT FSE
, 2011
"... Many modern computations (such as video and audio encoders, Monte Carlo simulations, and machine learning algorithms) are designed to trade off accuracy in return for increased performance. To date, such computations typically use adhoc, domainspecific techniques developed specifically for the com ..."
Abstract

Cited by 23 (13 self)
 Add to MetaCart
Many modern computations (such as video and audio encoders, Monte Carlo simulations, and machine learning algorithms) are designed to trade off accuracy in return for increased performance. To date, such computations typically use ad hoc, domain-specific techniques developed specifically for the computation at hand. Loop perforation provides a general technique to trade accuracy for performance by transforming loops to execute a subset of their iterations. A criticality testing phase filters out critical loops (whose perforation produces unacceptable behavior) to identify tunable loops (whose perforation produces more efficient and still acceptably accurate computations). A perforation space exploration algorithm perforates combinations of tunable loops to find Pareto-optimal perforation policies. Our results indicate that, for a range of applications, this approach typically delivers performance increases of over a factor of two (and up to a factor of seven) while changing the result that the application produces by less than 10%.
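A minimal sketch of the transformation on a mean computation, assuming the loop's iterations are statistically interchangeable; the perforation rate k = 4 and all names are illustrative, not taken from the paper's implementation.

    import java.util.Random;

    public class PerforationDemo {
        // Exact mean over all n samples.
        static double exactMean(double[] x) {
            double sum = 0;
            for (int i = 0; i < x.length; i++) sum += x[i];
            return sum / x.length;
        }

        // Perforated mean: execute only every k-th iteration and average
        // over the iterations actually executed.
        static double perforatedMean(double[] x, int k) {
            double sum = 0;
            int executed = 0;
            for (int i = 0; i < x.length; i += k) {  // skips k-1 of every k iterations
                sum += x[i];
                executed++;
            }
            return sum / executed;
        }

        public static void main(String[] args) {
            double[] x = new double[1_000_000];
            Random rng = new Random(42);
            for (int i = 0; i < x.length; i++) x[i] = rng.nextDouble();

            double exact = exactMean(x);
            double approx = perforatedMean(x, 4);  // roughly 4x fewer iterations
            System.out.printf("exact=%.6f perforated=%.6f rel.err=%.4f%%%n",
                    exact, approx, 100 * Math.abs(approx - exact) / exact);
        }
    }

A criticality test in the paper's sense would run exactly this kind of comparison on representative inputs and keep the perforation only if the observed error stays within the accuracy bound.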
Probabilistically accurate program transformations
 In SAS
, 2011
"... Abstract. The standard approach to program transformation involves the use of discrete logical reasoning to prove that the transformation does not change the observable semantics of the program. We propose a new approach that, in contrast, uses probabilistic reasoning to justify the application of t ..."
Abstract

Cited by 18 (12 self)
 Add to MetaCart
The standard approach to program transformation involves the use of discrete logical reasoning to prove that the transformation does not change the observable semantics of the program. We propose a new approach that, in contrast, uses probabilistic reasoning to justify the application of transformations that may change, within probabilistic accuracy bounds, the result that the program produces. Our new approach produces probabilistic guarantees of the form P(|D| ≥ B) ≤ ε, ε ∈ (0, 1), where D is the difference between the results that the transformed and original programs produce, B is an acceptability bound on the absolute value of D, and ε is the maximum acceptable probability of observing a large |D|. We show how to use our approach to justify the application of loop perforation (which transforms loops to execute fewer iterations) to a set of computational patterns.
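To make the shape of such a guarantee concrete, the sketch below empirically estimates P(|D| ≥ B) for a perforated mean over random inputs. The paper derives bounds of this form analytically; this simulation is only an illustration, and every constant in it is invented.

    import java.util.Random;

    public class AccuracyBoundDemo {
        public static void main(String[] args) {
            Random rng = new Random(7);
            int trials = 2_000, n = 10_000, k = 4;
            double B = 0.01;  // acceptability bound on |D|
            int violations = 0;

            for (int t = 0; t < trials; t++) {
                double[] x = new double[n];
                for (int i = 0; i < n; i++) x[i] = rng.nextDouble();

                double full = 0, perf = 0;
                int kept = 0;
                for (int i = 0; i < n; i++) full += x[i];
                for (int i = 0; i < n; i += k) { perf += x[i]; kept++; }

                double D = full / n - perf / kept;  // difference of the two means
                if (Math.abs(D) >= B) violations++;
            }

            // Empirical estimate of P(|D| >= B) over random inputs.
            System.out.printf("estimated P(|D| >= %.3f) = %.4f%n",
                    B, (double) violations / trials);
        }
    }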
Parallelizing sequential programs with statistical accuracy tests
"... We present QuickStep, a novel system for parallelizing sequential programs. Unlike standard parallelizing compilers (which are designed to preserve the semantics of the original sequential computation), QuickStep is instead designed to generate (potentially nondeterministic) parallel programs that p ..."
Abstract

Cited by 13 (11 self)
 Add to MetaCart
We present QuickStep, a novel system for parallelizing sequential programs. Unlike standard parallelizing compilers (which are designed to preserve the semantics of the original sequential computation), QuickStep is instead designed to generate (potentially nondeterministic) parallel programs that produce acceptably accurate results acceptably often. The freedom to generate parallel programs whose output may differ (within statistical accuracy bounds) from the output of the sequential program enables a dramatic simplification of the compiler, a dramatic increase in the range of applications that it can parallelize, and a significant expansion in the range of parallel programs that it can legally generate. Results from our benchmark set of applications show that QuickStep can automatically generate acceptably accurate and efficient parallel programs—the automatically generated parallel versions of five of our six benchmark applications run between 5.0 and 7.8 times faster on eight cores than the original sequential versions. These applications and parallelizations contain features (such as the use of modern object-oriented programming constructs or desirable parallelizations with infrequent but acceptable data races) that place them inherently beyond the reach of standard approaches.
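QuickStep is a compiler, so the hand-written Java sketch below only illustrates the kind of parallel program the abstract says it may legally generate: a parallel histogram whose deliberately unsynchronized updates occasionally lose an increment, producing a nondeterministic but statistically small deviation from the sequential result. Sizes and names are invented.

    import java.util.Random;
    import java.util.stream.IntStream;

    public class RacyHistogramDemo {
        public static void main(String[] args) {
            int n = 10_000_000, bins = 256;
            double[] data = new double[n];
            Random rng = new Random(3);
            for (int i = 0; i < n; i++) data[i] = rng.nextDouble();

            // hist[b]++ is an unsynchronized read-modify-write, so concurrent
            // threads may occasionally lose an increment -- an infrequent data
            // race that perturbs the counts only slightly.
            int[] hist = new int[bins];
            IntStream.range(0, n).parallel()
                     .forEach(i -> hist[(int) (data[i] * bins)]++);

            long total = 0;
            for (int c : hist) total += c;
            System.out.printf("counted %d of %d samples (%.5f%% lost to races)%n",
                    total, n, 100.0 * (n - total) / n);
        }
    }

QuickStep's statistical accuracy tests play the role of the final check here: a racy parallelization like this is accepted only if, over repeated runs on representative inputs, the output stays within the acceptable accuracy bound.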
Probabilistic Accuracy Bounds for Perforated Programs: A New Foundation for Program Analysis and Transformation
"... Traditional program transformations operate under the onerous constraint that they must preserve the exact behavior of the transformed program. But many programs are designed to produce approximate results. Lossy video encoders, for example, are designed to give up perfect fidelity in return for fas ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
Traditional program transformations operate under the onerous constraint that they must preserve the exact behavior of the transformed program. But many programs are designed to produce approximate results. Lossy video encoders, for example, are designed to give up perfect fidelity in return for faster encoding and smaller encoded videos [10]. Machine learning algorithms usually work with probabilistic models that capture some, but not all, aspects of phenomena that are difficult (if not impossible) to model with complete accuracy [2]. Monte Carlo computations use random simulation to deliver inherently approximate solutions to complex systems of equations that are, in many cases, computationally infeasible to solve exactly [5]. For programs that perform such computations, preserving the exact semantics simply misses the point. The underlying problems that these computations solve typically exhibit an inherent performance ...
Verifying Quantitative Reliability for Programs That Execute on Unreliable Hardware
"... Emerging highperformance architectures are anticipated to contain unreliable components that may exhibit soft errors, which silently corrupt the results of computations. Full detection and masking of soft errors is challenging, expensive, and, for some applications, unnecessary. For example, approx ..."
Abstract
 Add to MetaCart
Emerging high-performance architectures are anticipated to contain unreliable components that may exhibit soft errors, which silently corrupt the results of computations. Full detection and masking of soft errors is challenging, expensive, and, for some applications, unnecessary. For example, approximate computing applications (such as multimedia processing, machine learning, and big data analytics) can often naturally tolerate soft errors. We present Rely, a programming language that enables developers to reason about the quantitative reliability of an application – namely, the probability that it produces the correct result when executed on unreliable hardware. Rely allows developers to specify the reliability requirements for each value that a function produces. We present a static quantitative reliability analysis that verifies quantitative requirements on the reliability of an application, enabling a developer to perform sound and verified reliability engineering. The analysis takes a Rely program with a reliability specification and a hardware specification that characterizes the reliability of the underlying hardware components and verifies that the program satisfies its reliability specification when executed on the underlying unreliable hardware platform. We demonstrate the application of quantitative reliability analysis on six computations implemented ...
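Rely's analysis is static and operates on a C-like language with reliability annotations in function signatures; the Java sketch below only illustrates its core arithmetic under an assumed hardware specification: the reliability of a result is bounded by the product of the correctness probabilities of the approximate operations it depends on. All operation names and probabilities here are invented.

    import java.util.Map;

    public class ReliabilitySketch {
        public static void main(String[] args) {
            // Assumed hardware specification: probability that each kind of
            // approximate operation executes correctly.
            Map<String, Double> hw = Map.of(
                    "approx_read", 0.999999,
                    "approx_mul",  0.99999,
                    "approx_add",  0.99999);

            // A value computed from 2 approximate reads, 1 multiply, 1 add.
            // Rely-style reasoning: its reliability is at least the product
            // of the reliabilities of the operations it depends on.
            double r = Math.pow(hw.get("approx_read"), 2)
                     * hw.get("approx_mul")
                     * hw.get("approx_add");

            double spec = 0.99;  // developer's reliability requirement
            System.out.printf("derived reliability %.6f %s spec %.2f%n",
                    r, r >= spec ? "meets" : "violates", spec);
        }
    }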