Results 11–20 of 30
High-level Proofs of Mathematical Programs Using Automatic Differentiation, Simplification, and some Common Sense
Cited by 2 (0 self)

Abstract:
One problem in applying elementary methods to prove the correctness of interesting scientific programs is the large discrepancy in level of discourse between low-level proof methods and the logic of scientific calculation, especially as used in a complex numerical program. The justification of an algorithm typically relies on algebra or analysis, but the correctness of the program requires that the arithmetic expressions are written correctly and that iterations converge to correct values in spite of the truncation of infinite processes or series and the commission of numerical round-off errors. We hope to help bridge this gap by showing how we can, in some cases, state a high-level requirement and, using a computer algebra system (CAS), demonstrate that a program satisfies that requirement. A CAS can contribute program manipulation, partial evaluation, simplification, or other algorithmic methods. A novelty here is that we add algorithmic differentiation, a method already widely used in other contexts (usually optimization), to the techniques already used for program proofs. We sketch a proof of a numerical program to compute sine, and display a related approach to a version of a Bessel function algorithm for J0(x) based on a recurrence.
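To make the technique concrete, the sketch below implements forward-mode automatic (algorithmic) differentiation with dual numbers and applies it to a truncated Taylor series for sine, the kind of numerical program the abstract mentions. This is an illustrative sketch only; the class and function names are our own assumptions, not the paper's CAS-based development.

```python
class Dual:
    """A number a + b*eps with eps**2 == 0; b carries the derivative."""

    def __init__(self, val, deriv=0.0):
        self.val, self.deriv = val, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.deriv + other.deriv)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.val * other.deriv + self.deriv * other.val)
    __rmul__ = __mul__

def sine_poly(x):
    """Truncated Taylor series for sin, as a numerical program computes it."""
    return x + (-1.0 / 6.0) * x * x * x + (1.0 / 120.0) * x * x * x * x * x

# Evaluating on Dual(x0, 1.0) yields both the value and the derivative at x0.
d = sine_poly(Dual(0.0, 1.0))
print(d.val, d.deriv)   # value 0.0, derivative 1.0 (= cos(0))
```

Running the same program on dual numbers instead of floats is what lets a proof relate the code for sine to the derivative facts its correctness argument relies on.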
Digitisation, Representation and Formalisation: Digital Libraries of Mathematics
Cited by 1 (0 self)

Abstract:
One of the main tasks of the mathematical knowledge management community must surely be to enhance access to mathematics on digital systems. In this paper we present a spectrum of approaches to solving the various problems inherent in this task, arguing that a variety of approaches is both necessary and useful. The main ideas presented concern the differences between digitised mathematics, digitally represented mathematics and formalised mathematics; each has its part to play in managing mathematical information in a connected world. Digitised material is that which is embodied in a computer file, accessible and displayable locally or globally. Represented material is digital material in which there is some structure (usually syntactic in nature) which maps to the mathematics contained in the digitised information. Formalised material is that in which both the syntax and the semantics of the represented material are automatically accessible. Given the range of mathematical information to which access is desired, and the limited resources available for managing that information, we must ensure that these resources are applied to digitise, form representations of, or formalise existing and new mathematical information in such a way as to extract the most benefit from the least expenditure of resources. We also analyse some of the various social and legal issues which surround the practical tasks.
The Application of Formal Verification to SPW Designs
 In Proceedings of the Euromicro Symposium on Digital System Design, IEEE Computer
, 2003
Cited by 1 (1 self)

Abstract:
The Signal Processing WorkSystem (SPW) from Cadence is an integrated framework for developing DSP and communications products. Formal verification, based on mathematical logic, is a complementary technique to simulation. The HOL system is an environment for interactive theorem proving in a higher-order logic. It has an open, user-extensible architecture which makes it suitable for providing proof support for embedded languages. In this paper, we propose an approach to model SPW descriptions at different abstraction levels in HOL based on the shallow embedding technique. This will enable the formal verification of SPW designs which in the past could only be verified partially using conventional simulation techniques. We illustrate this novel application through a simple case study of a notch filter.
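The flavour of a shallow embedding — signal-processing blocks become ordinary functions of a host language so that properties can be stated and checked about them — can be sketched in Python. The filter below is an illustrative FIR notch, not the SPW design or the HOL development from the paper; the property checked is that a tone at the notch frequency is annihilated.

```python
import math

def fir_notch(x, w0):
    """FIR notch y[n] = x[n] - 2*cos(w0)*x[n-1] + x[n-2],
    placing a pair of zeros on the unit circle at angles +/- w0."""
    b = [1.0, -2.0 * math.cos(w0), 1.0]
    return [sum(bk * x[n - k] for k, bk in enumerate(b) if n - k >= 0)
            for n in range(len(x))]

# Property to check: a pure tone at the notch frequency is rejected
# (after the 2-sample start-up transient).
w0 = math.pi / 4
x = [math.sin(w0 * n) for n in range(64)]
y = fir_notch(x, w0)
residual = max(abs(v) for v in y[4:])
print(residual)   # essentially zero, up to floating-point round-off
```

In a genuine shallow embedding the same move is made inside the logic: the block diagram is mapped to functions of the theorem prover's own language, and the rejection property becomes a theorem rather than a test.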
Stochastic Formal Methods for Rare Failure Events due to the Accumulation of Errors
, 2006
Abstract:
This paper provides an accurate bound on the number of numeric operations (fixed or floating point) that can safely be performed before accuracy is lost, based on the assumption that accumulated errors are uniformly distributed within ±1/2 unit in the last place. This work has important implications for control systems with safety-critical software, as these systems are now running fast enough and long enough for their errors to impact their functionality. Furthermore, worst-case analysis would blindly advise the replacement of existing systems that have been successfully running for years and that will continue running before software development practices evolve. We present here new theorems that we are currently validating with the PVS proof assistant. This theory will allow code-analyzing tools to produce formal certificates of accurate behavior. FAA regulations for aircraft require that the probability of an error be below 10^-9 for a 10-hour flight [1]. Such a low failure rate is stretching the limits of generic calculations solely based on the standard deviation of random variables for the intermediate sums: we need many individual errors for the Central Limit Theorem approximation to be sufficiently accurate (distance well below 10^-9). The precise bound presented here enhances the number of bits of the result that can safely be regarded as correct.
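The gap between worst-case and stochastic views of accumulated round-off can be illustrated numerically. The sketch below (with assumed, illustrative parameters: binary64 ulp at magnitude 1, 10,000 operations) compares the deterministic worst case n·ulp/2 with the standard deviation sqrt(n/12)·ulp of a sum of n errors uniform in ±1/2 ulp; it is a Monte Carlo toy, not the paper's PVS theory.

```python
import math
import random

ULP = 2.0 ** -52     # assumed unit in the last place at magnitude 1 (binary64)
N = 10_000           # number of accumulated operations
TRIALS = 200

worst_case = N * ULP / 2.0                 # deterministic worst-case bound
sigma = math.sqrt(N / 12.0) * ULP          # std. dev. of a sum of N errors
                                           # uniform on (-ULP/2, ULP/2)
max_seen = 0.0
for _ in range(TRIALS):
    acc = sum(random.uniform(-ULP / 2.0, ULP / 2.0) for _ in range(N))
    max_seen = max(max_seen, abs(acc))

print(f"worst case       : {worst_case:.3e}")
print(f"3-sigma estimate : {3.0 * sigma:.3e}")
print(f"largest sum seen : {max_seen:.3e}")
```

Under the uniform-error assumption the observed accumulated error stays orders of magnitude below the worst case, which is exactly why a probabilistic bound can certify more correct bits than worst-case analysis.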
Stochastic Formal Methods for Hybrid Systems
Abstract:
We provide a framework to bound the probability that accumulated errors were never above a given threshold on hybrid systems. Such systems are used, for example, to model an aircraft or a nuclear power plant on one side and its software on the other side. This report contains simple formulas based on Lévy’s and Markov’s inequalities, and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit in our framework and cover the common practices of hybrid systems that evolve for a long time. We compute the number of bits that remain continuously significant in the first two applications with a probability of failure around one in a billion, where worst-case analysis considers that no significant bit remains. We use PVS, as such formal tools force explicit statement of all hypotheses and prevent incorrect uses of theorems.
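As a toy illustration of the Markov-inequality style of bound mentioned above (not the paper's formal development), one can bound the probability that a sum of independent errors uniform in ±ulp/2 exceeds a threshold from its second moment. The numbers in the example are our own illustrative assumptions.

```python
def markov_bound_second_moment(n_ops, ulp, threshold):
    """Markov's inequality applied to S**2 (i.e. Chebyshev's form):
    P(|S| >= t) <= E[S**2] / t**2 for a sum S of n_ops independent,
    zero-mean errors uniform on (-ulp/2, ulp/2), each of variance ulp**2/12."""
    second_moment = n_ops * ulp ** 2 / 12.0
    return min(1.0, second_moment / threshold ** 2)

# Illustrative numbers: after a million binary64 operations near magnitude 1,
# how likely is the accumulated error to reach 2**-30?
p = markov_bound_second_moment(n_ops=10 ** 6, ulp=2.0 ** -52, threshold=2.0 ** -30)
print(p)   # a few units of 1e-9: comparable to a one-in-a-billion target
```

Such an inequality needs only a moment, not the full distribution, which is what makes it amenable to a formal theory of random variables with concrete numeric conclusions.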
approximation errors
, 2010
Abstract:
For purposes of actual evaluation, mathematical functions f are commonly replaced by approximation polynomials p. Examples include floating-point implementations of elementary functions, quadrature, or more theoretical proof work involving transcendental functions. Replacing f by p induces a relative error ε = p/f − 1. In order to ensure the validity of the use of p instead of f, the maximum error, i.e. the supremum norm ‖ε‖∞, must be safely bounded above. Numerical algorithms for supremum norms are efficient but cannot offer the required safety. Previous validated approaches often require tedious manual intervention; if they are automated, they have several drawbacks, such as the lack of quality guarantees. In this article a novel, automated supremum norm algorithm with a priori quality is proposed. It focuses on the validation step and paves the way for formally certified supremum norms.
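To make the quantity concrete, here is a purely numerical, non-validated estimate of ‖ε‖∞ = sup |p/f − 1| by dense sampling, exactly the kind of efficient-but-unsafe computation the article contrasts with validated bounds. The polynomial and interval are our own illustrative choices.

```python
import math

def rel_error_supnorm_estimate(p, f, a, b, samples=100_000):
    """Estimate sup |p(x)/f(x) - 1| over [a, b] by dense sampling.
    Unvalidated: it says nothing about x between the sample points."""
    step = (b - a) / samples
    return max(abs(p(a + i * step) / f(a + i * step) - 1.0)
               for i in range(samples + 1))

# Degree-5 Taylor polynomial for sin, on an interval that avoids f(x) = 0.
poly = lambda x: x - x ** 3 / 6.0 + x ** 5 / 120.0
est = rel_error_supnorm_estimate(poly, math.sin, 2.0 ** -10, 1.0)
print(est)   # roughly 2e-4, attained near x = 1
```

A validated algorithm must additionally bound ε between the sample points (e.g. via interval or Taylor-model enclosures); the grid maximum above is only a lower estimate of the true supremum.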
unknown title
Abstract:
On certain recently developed architectures, a numerical program may give different answers depending on the execution hardware and the compilation. Our goal is to formally prove properties about numerical programs that are true for multiple architectures and compilers. We propose an approach that states the rounding error of each floating-point computation whatever the environment and the compiler choices. This approach is implemented in the Frama-C platform for static analysis of C code. Small case studies using this approach are entirely and automatically proved.
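The architecture dependence described above can be imitated in a few lines: rounding every intermediate to binary32 versus keeping intermediates in binary64 (standing in for, say, a strict single-precision target versus evaluation in wider registers) changes the final result of the same expression. The inputs below are chosen purely for illustration.

```python
import struct

def to_f32(x):
    """Round a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

a, b = 1.0, 2.0 ** -24

# Wider intermediates: evaluate (a + b) + b in binary64, round once at the end,
# as a target that keeps results in wide registers would.
wide = to_f32((a + b) + b)

# Narrow intermediates: round to binary32 after every operation,
# as a strict single-precision target would.
narrow = to_f32(to_f32(a + b) + b)

print(wide, narrow, wide == narrow)   # the two strategies disagree
```

This is why a proof that holds for multiple architectures must bound the rounding error of each operation without committing to one evaluation strategy.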
How to Compute the Area of a Triangle: a Formal Revisit
Sylvie Boldo
 Author manuscript, published in "21st IEEE International Symposium on Computer Arithmetic (2013)"
, 2013
Abstract:
Mathematical values are usually computed using well-known mathematical formulas without thinking about their accuracy, which may turn out to be awful on particular instances. This is the case for the computation of the area of a triangle: when the triangle is needle-like, the common formula has very poor accuracy. Kahan proposed in 1986 an algorithm he claimed correct within a few ulps. Goldberg took over this algorithm in 1991 and gave a precise error bound. This article presents a formal proof of this algorithm, an improvement of its error bound, and new investigations in the case of underflow.

Index Terms: floating-point arithmetic, formal proof, Coq, triangle, underflow
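For reference, a Python sketch of Kahan's reordered area formula next to the classical Heron formula; the sorting of the sides and the exact parenthesisation are essential to Kahan's accuracy claim. This is an illustration of the algorithm being analysed, not the Coq development from the paper.

```python
import math

def area_heron(a, b, c):
    """Classical Heron formula. The subtractions s - a, s - b, s - c
    cancel catastrophically when the triangle is needle-like."""
    s = (a + b + c) / 2.0
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def area_kahan(a, b, c):
    """Kahan's reordering. Requires a >= b >= c (enforced by sorting),
    and the parentheses must be kept exactly as written."""
    a, b, c = sorted((a, b, c), reverse=True)
    return math.sqrt((a + (b + c)) * (c - (a - b))
                     * (c + (a - b)) * (a + (b - c))) / 4.0

# A needle-like triangle: two unit sides and one tiny side.
# The exact area is (c/4) * sqrt(4 - c**2), about 5e-9 here.
print(area_heron(1.0, 1.0, 1e-8))
print(area_kahan(1.0, 1.0, 1e-8))
```

On the needle-like input, Heron's s − a factor inherits the rounding error of s at full magnitude, while Kahan's differences are computed exactly, which is the behaviour Goldberg's error bound and the paper's formal proof make precise.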