Results 1–10 of 17
On Sound Compilation of Reals
Abstract

Cited by 9 (4 self)
Writing accurate numerical software is hard because of many sources of unavoidable uncertainties, including finite numerical precision of implementations. We present a programming model where the user writes a program in a real-valued implementation and specification language that explicitly includes different types of uncertainties. We then present a compilation algorithm that generates a conventional implementation that is guaranteed to meet the desired precision with respect to real numbers. Our verification step generates verification conditions that treat different uncertainties in a unified way and encode reasoning about floating-point roundoff errors into reasoning about real numbers. Such verification conditions can be used as a standardized format for verifying the precision and the correctness of numerical programs. Due to their often nonlinear nature, precise reasoning about such verification conditions remains difficult. We show that current state-of-the-art SMT solvers do not scale well to solving such verification conditions. We propose a new procedure that combines exact SMT solving over reals with approximate and sound affine and interval arithmetic. We show that this approach overcomes scalability limitations of SMT solvers while providing improved precision over affine and interval arithmetic. Using our initial implementation we show the usefulness and effectiveness of our approach on several examples, including those containing nonlinear computation.
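The sound interval reasoning with per-operation roundoff that the abstract combines with SMT solving can be sketched loosely as follows. This is a toy illustration, not the paper's compiler; `widen`, `iadd`, and `imul` are hypothetical helper names.

```python
# Minimal interval-arithmetic sketch with a per-operation roundoff term.
# Hypothetical helpers, not the paper's implementation.
EPS = 2.0 ** -53  # unit roundoff for IEEE double precision

def widen(lo, hi):
    """Widen an interval to absorb one rounding error per operation."""
    r = max(abs(lo), abs(hi)) * EPS
    return (lo - r, hi + r)

def iadd(a, b):
    """Sound enclosure of a + b for intervals a, b."""
    return widen(a[0] + b[0], a[1] + b[1])

def imul(a, b):
    """Sound enclosure of a * b: take min/max over endpoint products."""
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return widen(min(ps), max(ps))

# x in [1, 2], y in [-1, 1]: soundly enclose x*y + x, which lies in [-1, 4]
x, y = (1.0, 2.0), (-1.0, 1.0)
lo, hi = iadd(imul(x, y), x)
```

The enclosure is slightly wider than the real-valued range [-1, 4] because each operation accounts for one floating-point rounding, which is exactly the over-approximation the paper tightens by falling back on exact SMT solving when intervals are too coarse.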
A short survey of automated reasoning
Abstract

Cited by 2 (0 self)
Abstract. This paper surveys the field of automated reasoning, giving some historical background and outlining a few of the main current research themes. We particularly emphasize the points of contact and the contrasts with computer algebra. We finish with a discussion of the main applications so far.

1 Historical introduction

The idea of reducing reasoning to mechanical calculation is an old dream [75]. Hobbes [55] made explicit the analogy in the slogan ‘Reason [...] is nothing but Reckoning’. This parallel was developed by Leibniz, who envisaged a ‘characteristica universalis’ (universal language) and a ‘calculus ratiocinator’ (calculus of reasoning). His idea was that disputes of all kinds, not merely mathematical ones, could be settled if the parties translated their dispute into the characteristica and then simply calculated. Leibniz even made some steps towards realizing this lofty goal, but his work was largely forgotten.

The characteristica universalis

The dream of a truly universal language in Leibniz’s sense remains unrealized and probably unrealizable. But over the last few centuries a language that is at least adequate for
Stateless HOL
Abstract

Cited by 1 (0 self)
Dedicated to Roel de Vrijer, in the tradition of Automath. We present a version of the HOL Light system that supports undoing definitions in such a way that this does not compromise the soundness of the logic. In our system the code that keeps track of the constants that have been defined thus far has been moved out of the kernel. This means that the kernel now is purely functional. The changes to the system are small. All existing HOL Light developments can be run by the stateless system with only minor changes. The basic principle behind the system is not to name constants by strings, but by pairs consisting of a string and a definition. This means that the data structures for the terms are all merged into one big graph. OCaml – the implementation language of the system – can use pointer equality to establish equality of data structures quickly. This allows the system to run at acceptable speeds. Our system runs at about 85% of the speed of the stateful version of HOL Light.
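The trick of merging all terms into one graph so that equality reduces to pointer comparison is essentially hash-consing. A loose sketch in Python (the actual system is in OCaml; `term` and `_table` are hypothetical names):

```python
# Hash-consing sketch: every structurally distinct term is built exactly
# once, so structural equality reduces to object identity ('is'),
# mirroring the pointer-equality optimization described above.
_table = {}

def term(op, *args):
    """Return the canonical node for (op, args); args are themselves
    hash-consed terms, so the whole term set forms one shared graph."""
    key = (op, args)
    if key not in _table:
        _table[key] = key  # install the canonical representative
    return _table[key]

t1 = term("add", term("x"), term("y"))
t2 = term("add", term("x"), term("y"))
# t1 and t2 are the very same object, not merely equal
```

Because equal subterms are physically shared, a deep structural comparison that would otherwise traverse both terms becomes a constant-time identity check, which is what keeps the stateless system's speed acceptable.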
Sound Auction Specification and Implementation
Abstract

Cited by 1 (1 self)
We introduce ‘formal methods’ of mechanized reasoning from computer science to address two problems in auction design and practice: is a given auction design soundly specified, possessing its intended properties; and is the design faithfully implemented when actually run? Failure on either front can be hugely costly in large auctions. In the familiar setting of the combinatorial Vickrey auction, we use a mechanized reasoner, Isabelle, to first ensure that the auction has a set of desired properties (e.g. allocating all items at nonnegative prices), and to then generate verified executable code directly from the specified design. Having established the expected results in a known context, we intend next to use formal methods to verify new auction designs.
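The kind of property the abstract mentions (items allocated, prices nonnegative) can be illustrated on the single-item Vickrey auction, a drastically simplified stand-in for the combinatorial case; `vickrey` is a hypothetical helper, not the Isabelle-generated code.

```python
# Single-item Vickrey (second-price) auction sketch: the highest bidder
# wins and pays the second-highest bid. A toy stand-in for the
# combinatorial Vickrey auction verified in the paper.
def vickrey(bids):
    """bids: dict bidder -> nonnegative bid. Returns (winner, price)."""
    assert bids and all(b >= 0 for b in bids.values())
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    # sole bidder pays 0; otherwise pay the second-highest bid
    price = bids[ranked[1]] if len(ranked) > 1 else 0
    return winner, price

w, p = vickrey({"alice": 10, "bob": 7, "carol": 4})
```

Even in this toy form, the two properties mechanically checked in the paper are visible: the item is always allocated to some bidder, and the price is nonnegative because it is drawn from nonnegative bids.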
Stateless HOL Dedicated to Roel de Vrijer, in the tradition of Automath
Abstract
Abstract. We present a version of the HOL Light system that supports undoing definitions in such a way that this does not compromise the soundness of the logic. In our system the code that keeps track of the constants that have been defined thus far has been moved out of the kernel. This means that the kernel now is purely functional. The changes to the system are small. All existing HOL Light developments can be run by the stateless system with only minor changes. The basic principle behind the system is not to name constants by strings, but by pairs consisting of a string and a definition. This means that the data structures for the terms are all merged into one big graph. OCaml – the implementation language of the system – can use pointer equality to establish equality of data structures fast. This allows the system to run at acceptable speeds. Our system is about 1/6th slower than the stateful version of HOL Light.
Elimination of Square Roots and Divisions by Partial Inlining
Abstract
Computing accurately with real numbers is always a challenge. This is particularly true in critical embedded systems, since memory issues do not allow the use of dynamic data structures. This constraint imposes a finite representation of the real numbers, provoking uncertainties and rounding errors that might make the actual behavior of a program diverge from its ideal one. This article presents a solution to this problem: a program transformation that eliminates square roots and divisions in straight-line programs without nested function calls. These two operations are the source of infinite sequences of digits in numerical representations; eliminating them therefore allows exact computation using, for example, a fixed-point number representation with a sufficient number of bits. In order to avoid an explosion in the size of the produced code, this transformation relies on a particular anti-unification to realize a partial inlining of the variable and function definitions. Since this transformation targets code for aeronautics certified in PVS, we want to prove semantics preservation in this proof assistant. We thus use both an OCaml implementation and the subtyping features of PVS to ensure the correctness of the transformation by defining a proof-producing (certifying) program transformation, providing a specific semantics-preservation lemma for every definition in the transformed program.
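The core idea of trading divisions and square roots for exact polynomial arithmetic can be sketched on comparisons (a toy illustration in Python, not the PVS/OCaml tool; `le_div` and `le_sqrt` are hypothetical names, and the side conditions are assumptions):

```python
# Sketch: comparisons involving division and sqrt rewritten into
# division-free and root-free polynomial comparisons, which are exact
# in a sufficiently wide fixed-point representation.

def le_div(a, b, c):
    """Decide a/b <= c without dividing, assuming b > 0."""
    return a <= c * b

def le_sqrt(a, c):
    """Decide sqrt(a) <= c without a root, assuming a >= 0 and c >= 0."""
    return a <= c * c

# le_div(7, 2, 4) decides 7/2 <= 4; le_sqrt(10, 4) decides sqrt(10) <= 4,
# using only multiplication and comparison
```

Because only additions, multiplications, and comparisons remain, every intermediate value has finitely many digits, which is what makes exact fixed-point evaluation possible in the embedded setting the abstract describes.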
On Numerical Error Propagation with Sensitivity
Abstract
An emerging area of research is to automatically compute reasonably precise upper bounds on numerical errors including roundoffs. Previous approaches for this task are limited in their precision and scalability, especially in the presence of branches and loops. We argue that one reason for these limitations is the focus of past approaches on approximating errors of individual reachable states. We propose instead a more relational and modular approach to analysis that characterizes analytically the input/output behavior of code fragments and reuses this characterization to reason about larger code fragments. We use the derivatives of the functions corresponding to program paths to capture a program’s sensitivity to input changes. To apply this approach for finite-precision code, we decouple the computation of newly introduced roundoff errors from the amplification of existing errors. This enables us to precisely and efficiently account for propagation of errors through long-running computation. Using this approach we implemented an analysis for programs containing nonlinear computation, conditionals, and loops. In the presence of loops our approach can find closed-form symbolic invariants capturing upper bounds on numerical errors, even when the error grows with the number of iterations. We evaluate our system on a number of benchmarks from embedded systems and scientific computation, showing substantial improvements in precision and scalability over the state of the art.
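The decoupling described above — amplification of an existing error via the derivative, plus freshly introduced roundoff — can be sketched as a first-order bound (an illustrative simplification, not the paper's analysis; `propagate` is a hypothetical name):

```python
# First-order error-propagation sketch: the output error of f at x is
# bounded by |f'(x)| * in_err (amplified incoming error) plus one
# rounding of the result (newly introduced roundoff).
EPS = 2.0 ** -53  # unit roundoff for IEEE double precision

def propagate(f, df, x, in_err):
    """Bound the output error of f at x given input error in_err."""
    roundoff = abs(f(x)) * EPS          # fresh error from rounding f(x)
    return abs(df(x)) * in_err + roundoff

# f(x) = x^2 at x = 3: the incoming error is amplified by |f'(3)| = 6
err = propagate(lambda x: x * x, lambda x: 2 * x, 3.0, 1e-6)
```

Composing this bound along a program path is what lets the analysis follow error growth through long-running loops without re-deriving it state by state.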
An Automatable Formal Semantics for IEEE 754 Floating-Point Arithmetic
Abstract
Abstract—Automated reasoning tools often provide little or no support to reason accurately and efficiently about floating-point arithmetic. As a consequence, software verification systems that use these tools are unable to reason reliably about programs containing floating-point calculations or may give unsound results. These deficiencies are in stark contrast to the increasing awareness that the improper use of floating-point arithmetic in programs can lead to unintuitive and harmful defects in software. To promote coordinated efforts towards building efficient and accurate floating-point reasoning engines, this paper presents a formalization of the IEEE 754 standard for floating-point arithmetic as a theory in many-sorted first-order logic. Benefits include a standardized syntax and unambiguous semantics, allowing tool interoperability and sharing of benchmarks, and providing a basis for automated, formal analysis of programs that process floating-point data.
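The bit-level layout that such a formalization gives semantics to — for binary64: 1 sign bit, 11 exponent bits, 52 significand bits — can be inspected directly; `decode` is a hypothetical helper built on Python's standard `struct` module.

```python
import struct

# Decompose an IEEE 754 binary64 value into its sign, biased exponent,
# and fractional-significand fields (1 + 11 + 52 bits).
def decode(x):
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63                  # 1 sign bit
    expo = (bits >> 52) & 0x7FF        # 11 exponent bits (bias 1023)
    frac = bits & ((1 << 52) - 1)      # 52 significand bits
    return sign, expo, frac

# 1.0 = +1.0 * 2^0: sign 0, biased exponent 1023, zero fraction
s, e, f = decode(1.0)
```

A formal theory over these fields, rather than over raw bit vectors, is what lets SMT solvers and verification tools agree on the meaning of floating-point benchmarks.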
Rigorous Estimation of Floating-Point Roundoff Errors with Symbolic Taylor Expansions (2015)
Abstract
Rigorous estimation of maximum floating-point roundoff errors is an important capability central to many formal verification tools. Unfortunately, available techniques for this task often provide overestimates. Also, there are no available rigorous approaches that handle transcendental functions. We have developed a new approach called Symbolic Taylor Expansions that avoids this difficulty, and implemented a new tool called FPTaylor embodying this approach. Key to our approach is the use of rigorous global optimization, instead of the more familiar interval arithmetic, affine arithmetic, and/or SMT solvers. In addition to providing far tighter upper bounds of roundoff error in a vast majority of cases, FPTaylor also emits analysis certificates in the form of HOL Light proofs. We release FPTaylor along with our benchmarks for evaluation.
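The first-order shape of the approach — bound the roundoff of f(x(1+e)) by maximizing |x·f'(x)|·eps over the input domain — can be sketched as follows. This is only a toy stand-in (`taylor_bound` is a hypothetical name, and grid sampling is not rigorous; FPTaylor uses rigorous global optimization precisely to make this maximization sound):

```python
# Toy first-order Taylor roundoff bound: a single relative perturbation
# x(1+e) with |e| <= EPS contributes at most |x * f'(x)| * EPS to the
# error of f, so we maximize that quantity over the input range.
EPS = 2.0 ** -53  # unit roundoff for IEEE double precision

def taylor_bound(df, lo, hi, n=1000):
    """Approximate max of |x * f'(x)| * EPS over [lo, hi] on a grid.
    NOTE: sampling is not sound; a rigorous tool optimizes globally."""
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    return max(abs(x * df(x)) for x in xs) * EPS

# f(x) = x^2 on [1, 2]: |x * f'(x)| = 2x^2 peaks at x = 2 with value 8
b = taylor_bound(lambda x: 2 * x, 1.0, 2.0)
```

Replacing the sampled maximum with a verified global optimum is what lets the real tool emit HOL Light certificates for its bounds.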