Results 1–10 of 29
Hoare Logic and VDM: Machine-Checked Soundness and Completeness Proofs
, 1998
Abstract

Cited by 32 (1 self)
Investigating soundness and completeness of verification calculi for imperative programming languages is a challenging task. Many incorrect results have been published in the past. We take advantage of the computer-aided proof tool LEGO to interactively establish soundness and completeness of both Hoare Logic and the operation decomposition rules of the Vienna Development Method (VDM) with respect to operational semantics. We deal with parameterless recursive procedures and local variables in the context of total correctness. As a case study, we use LEGO to verify the correctness of Quicksort in Hoare Logic. As our main contribution, we illuminate the role of auxiliary variables in Hoare Logic. They are required to relate the value of program variables in the final state with the value of program variables in the initial state. In our formalisation, we reflect their purpose by interpreting assertions as relations on states and a domain of auxiliary variables. Furthermore, we propose a new structural rule for adjusting auxiliary variables when strengthening preconditions and weakening postconditions. This rule is stronger than all previously suggested structural rules, including rules of adaptation. With the new treatment, we are able to show that, contrary to common belief, Hoare Logic subsumes VDM in that every derivation in VDM can be naturally embedded in Hoare Logic. Moreover, we establish completeness results uniformly as corollaries of Most General Formula theorems which remove the need to reason about arbitrary assertions.
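The role of auxiliary variables described above can be illustrated with a small triple, together with a schematic consequence-style rule that adjusts them; the second formula is only a sketch of the general shape (the paper's precise rule is stronger than the classical adaptation rules it replaces):

```latex
% Auxiliary variable z occurs in assertions but never in the program,
% relating the final value of x to its initial value:
\[ \{\, x = z \,\}\;\; x := x + 1 \;\; \{\, x = z + 1 \,\} \]

% Schematic consequence rule that also adjusts auxiliary variables
% (Z, Z' range over auxiliary variables; s, s' over states):
\[
\frac{\vdash \{P\}\, c\, \{Q\}
      \qquad
      \forall Z\, s.\; P'(Z,s) \Rightarrow
        \exists Z'.\; P(Z',s) \wedge
        \bigl(\forall s'.\; Q(Z',s') \Rightarrow Q'(Z,s')\bigr)}
     {\vdash \{P'\}\, c\, \{Q'\}}
\]
```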
Incrementalization across object abstraction
 In OOPSLA ’05: Proceedings of the 20th annual ACM SIGPLAN conference on Object-Oriented Programming, Systems, Languages, and Applications
, 2005
Abstract

Cited by 23 (14 self)
Object abstraction supports the separation of what operations are provided by systems and components from how the operations are implemented, and is essential in enabling the construction of complex systems from components. Unfortunately, clear and modular implementations have poor performance when expensive query operations are repeated, while efficient implementations that incrementally maintain these query results are much more difficult to develop and to understand, because the code blows up significantly, and is no longer clear or modular. This paper describes a powerful and systematic method that first allows the “what” of each component to be specified in a clear and modular fashion and implemented straightforwardly in an object-oriented language; then analyzes the queries and updates, across object abstraction, in the straightforward implementation; and finally derives the sophisticated and efficient “how” of each component by incrementally maintaining the results of repeated expensive queries with respect to updates to their parameters. Our implementation and experimental results for example applications in query optimization, role-based access control, etc. demonstrate the effectiveness and benefit of the method.
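The contrast between the clear "what" and the derived efficient "how" can be sketched on a toy example (class and method names are illustrative, not from the paper): an expensive aggregate query recomputed from scratch versus the same query maintained incrementally under updates.

```python
class AccountsNaive:
    """Clear, modular spec: recompute the expensive query each time."""
    def __init__(self):
        self.balances = {}

    def deposit(self, acct, amount):
        self.balances[acct] = self.balances.get(acct, 0) + amount

    def total(self):                      # O(n) query, repeated by clients
        return sum(self.balances.values())


class AccountsIncremental:
    """Derived version: the query result is maintained on each update."""
    def __init__(self):
        self.balances = {}
        self._total = 0                   # cached query result

    def deposit(self, acct, amount):
        self.balances[acct] = self.balances.get(acct, 0) + amount
        self._total += amount             # incremental maintenance

    def total(self):                      # O(1)
        return self._total
```

The second class is what a systematic derivation aims to produce automatically from the first, keeping the clear version as the specification.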
Discovering auxiliary information for incremental computation
 In Symp. on Princ. of Prog. Lang
, 1996
Abstract

Cited by 22 (12 self)
This paper presents program analyses and transformations that discover a general class of auxiliary information for any incremental computation problem. Combining these techniques with previous techniques for caching intermediate results, we obtain a systematic approach that transforms non-incremental programs into efficient incremental programs that use and maintain useful auxiliary information as well as useful intermediate results. The use of auxiliary information allows us to achieve a greater degree of incrementality than otherwise possible. Applications of the approach include strength reduction in optimizing compilers and finite differencing in transformational programming.
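A minimal illustration of auxiliary information (not an example from the paper): to maintain the average of a sequence under insertions in O(1), the average alone does not suffice; the running sum and count are auxiliary values that are not part of the output but make the incremental update possible.

```python
class Average:
    """Incrementally maintained average, using auxiliary sum and count."""
    def __init__(self):
        self._sum = 0      # auxiliary information
        self._count = 0    # auxiliary information

    def insert(self, x):
        # O(1) update of the auxiliary values on each change
        self._sum += x
        self._count += 1

    def value(self):
        return self._sum / self._count
```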
Invariant Discovery via Failed Proof Attempts
 In Proc. LOPSTR '98, LNCS 1559
, 1998
Abstract

Cited by 20 (2 self)
We present a framework for automating the discovery of loop invariants based upon failed proof attempts. The discovery of suitable loop invariants represents a bottleneck for automatic verification of imperative programs. Using the proof planning framework we reconstruct standard heuristics for developing invariants. We relate these heuristics to the analysis of failed proof attempts, allowing us to discover invariants through a process of refinement.

1 Introduction
Loop invariants are a well understood technique for specifying the behaviour of programs involving loops. The discovery of suitable invariants, however, is a major bottleneck for automatic verification of imperative programs. Early research in this area [18, 24] exploited both theorem proving techniques as well as domain specific heuristics. However, the potential for interaction between these components was not fully exploited. The proof planning framework, in which we reconstruct the standard heuristics, couples ...
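For readers unfamiliar with the terminology, a loop invariant is a property that holds at every iteration boundary; a toy sketch (not from the paper) with the invariant checked at runtime:

```python
def list_sum(xs):
    """Sum a list, asserting the loop invariant each iteration."""
    total, i = 0, 0
    while i < len(xs):
        assert total == sum(xs[:i])   # invariant: total is the partial sum
        total += xs[i]
        i += 1
    # invariant + exit condition (i == len(xs)) give the postcondition
    assert total == sum(xs)
    return total
```

Verification tools need such an invariant as a formula, not a runtime check, which is why discovering it automatically is the bottleneck the paper addresses.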
Loop optimization for aggregate array computations
Abstract

Cited by 17 (7 self)
An aggregate array computation is a loop that computes accumulated quantities over array elements. Such computations are common in programs that use arrays, and the array elements involved in such computations often overlap, especially across iterations of loops, resulting in significant redundancy in the overall computation. This paper presents a method and algorithms that eliminate such overlapping aggregate array redundancies and shows both analytical and experimental performance improvements. The method is based on incrementalization, i.e., updating the values of aggregate array computations from iteration to iteration rather than computing them from scratch in each iteration. This involves maintaining additional information not maintained in the original program. We reduce various analysis problems to solving inequality constraints on loop variables and array subscripts, and we apply results from work on array data dependence analysis. Incrementalizing aggregate array computations produces drastic program speedup compared to previous optimizations. Previous methods for loop optimizations of arrays do not perform incrementalization, and previous techniques for loop incrementalization do not handle arrays.
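The idea can be sketched on a sliding-window sum (an illustrative instance, not the paper's algorithm): the naive loop recomputes each window from scratch, while the incrementalized loop updates the previous window's sum by subtracting the element that leaves and adding the one that enters.

```python
def window_sums_naive(a, w):
    """Aggregate array computation with overlapping windows: O(n*w)."""
    return [sum(a[i:i + w]) for i in range(len(a) - w + 1)]

def window_sums_incremental(a, w):
    """Incrementalized version: O(n) total, O(1) per window."""
    s = sum(a[:w])                       # compute the first window once
    out = [s]
    for i in range(1, len(a) - w + 1):
        s += a[i + w - 1] - a[i - 1]     # add entering, subtract leaving
        out.append(s)
    return out
```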
Principled Strength Reduction
 Algorithmic Languages and Calculi
, 1996
Abstract

Cited by 11 (10 self)
This paper presents a principled approach for optimizing iterative (or recursive) programs. The approach formulates a loop body as a function f and a change operation ⊕, incrementalizes f with respect to ⊕, and adopts an incrementalized loop body to form a new loop that is more efficient. Three general optimizations are performed as part of the adoption; they systematically handle initializations, termination conditions, and final return values on exits of loops. These optimizations are either omitted, or done in implicit, limited, or ad hoc ways in previous methods. The new approach generalizes classical loop optimization techniques, notably strength reduction, in optimizing compilers, and it unifies and systematizes various optimization strategies in transformational programming. Such principled strength reduction performs drastic program efficiency improvement via incrementalization and appreciably reduces code size via associated optimizations. We give examples where this app...
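The classical strength reduction that the paper generalizes can be sketched in a few lines: a multiplication i * c in the loop body is incrementalized into an addition, since (i + 1) * c = i * c + c.

```python
def scaled_naive(n, c):
    """Multiply on every iteration."""
    return [i * c for i in range(n)]

def scaled_reduced(n, c):
    """Strength-reduced: maintain i * c by repeated addition."""
    out, acc = [], 0
    for _ in range(n):
        out.append(acc)
        acc += c          # incremental update: (i+1)*c = i*c + c
    return out
```

Here f is the loop body's use of i * c and the change operation is the step i ↦ i + 1; the paper's contribution is doing this systematically, including initializations and exit values.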
A Linear Time Algorithm for the k Maximal Sums Problem
Abstract

Cited by 11 (2 self)
Finding the subvector with the largest sum in a sequence of n numbers is known as the maximum sum problem. Finding the k subvectors with the largest sums is a natural extension of this, and is known as the k maximal sums problem. In this paper we design an optimal O(n + k) time algorithm for the k maximal sums problem. We use this algorithm to obtain algorithms solving the two-dimensional k maximal sums problem in O(m^2 · n + k) time, where the input is an m × n matrix with m ≤ n. We generalize this algorithm to solve the d-dimensional problem in O(n^(2d−1) + k) time. The space usage of all the algorithms can be reduced to O(n^(d−1) + k). This leads to the first algorithm for the k maximal sums problem in one dimension using O(n + k) time and O(k) space.
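The k = 1 case (the maximum sum problem) has a classic linear-time scan, shown below as orientation; the paper's contribution is extending such linear-time behaviour to the k largest subvector sums in O(n + k).

```python
def max_subvector_sum(a):
    """Maximum sum over all contiguous subvectors of a (k = 1 case)."""
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)   # best sum of a subvector ending here
        best = max(best, cur)
    return best
```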
Parallel Maximum Sum Algorithms on Interconnection Networks
 Queen’s Uni. Dept. of Com. and
, 1999
Abstract

Cited by 10 (0 self)
We develop parallel algorithms for both one-dimensional and two-dimensional versions of the maximum sum problem (or max sum for short) on several interconnection networks. These algorithms are all based on a simple scheme that uses prefix sums. To this end, we first show how to compute prefix sums of N elements on a hypercube, a star, and a pancake interconnection network of size p (where p ≤ N) in optimal time of O(N/p + log p). For the problem of maximum subsequence sum, the 1D version of the max sum problem, we find an algorithm that computes the maximum sum of N elements on the aforementioned networks of size p, all with a running time of O(N/p + log p), which is optimal in view of the trivial Ω(N/p + log p) lower bound. When p = O(N/log N), our algorithm computes the max sum in O(log N) time, resulting in an optimal cost of O(N). This result also matches the performance of two previous algorithms that are designed to run on PRAM. Our 1D max sum algorithm can...
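A sequential sketch of the prefix-sum scheme underlying such algorithms (illustrative, not the paper's network algorithm): with prefix sums P, the maximum subsequence sum is the maximum over j of P[j] − min over i < j of P[i], and both the prefix sums and the running minimum are scan operations, which is what makes the scheme parallelize well.

```python
def max_sum_via_prefix(a):
    """Maximum subsequence sum computed from prefix sums."""
    prefix = 0
    min_prefix = 0      # minimum prefix sum so far (empty prefix = 0)
    best = a[0]         # so all-negative inputs return the max element
    for x in a:
        prefix += x
        best = max(best, prefix - min_prefix)
        min_prefix = min(min_prefix, prefix)
    return best
```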
Optimizing aggregate array computations in loops
 ACM Transactions on Programming Languages and Systems
, 2005
Abstract

Cited by 8 (1 self)
An aggregate array computation is a loop that computes accumulated quantities over array elements. Such computations are common in programs that use arrays, and the array elements involved in such computations often overlap, especially across iterations of loops, resulting in significant redundancy in the overall computation. This paper presents a method and algorithms that eliminate such overlapping aggregate array redundancies and shows analytical and experimental performance improvements. The method is based on incrementalization, i.e., updating the values of aggregate array computations from iteration to iteration rather than computing them from scratch in each iteration. This involves maintaining additional values not maintained in the original program. We reduce various analysis problems to solving inequality constraints on loop variables and array subscripts, and we apply results from work on array data dependence analysis. For aggregate array computations that have significant redundancy, incrementalization produces drastic speedup compared to previous optimizations; when there is little redundancy, the benefit might be offset by cache effects and other factors. Previous methods for loop optimizations of arrays do not perform incrementalization, and previous techniques for loop incrementalization do not handle arrays.
Stepwise Program Derivation
 Addison-Wesley, International Series in Computer Science
, 1989
Abstract

Cited by 7 (3 self)
Our understanding of the program derivation process has evolved to the point where it can be described in terms of a clearly defined sequence of steps. In this paper, we will identify those steps and show how they may be used to derive programs from formal specifications.