Results 1–10 of 11
Mixed abstractions for floating-point arithmetic
 In FMCAD
, 2009
Cited by 15 (3 self)
Floating-point arithmetic is essential for many embedded and safety-critical systems, such as in the avionics industry. Inaccuracies in floating-point calculations can cause subtle changes of the control flow, potentially leading to disastrous errors. In this paper, we present a simple and general, yet powerful framework for building abstractions from formulas, and instantiate this framework to a bit-accurate, sound and complete decision procedure for IEEE-compliant binary floating-point arithmetic. Our procedure benefits in practice from its ability to flexibly harness both over- and under-approximations in the abstraction process. We demonstrate the potency of the procedure for the formal analysis of floating-point software.
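The over/under-approximation scheme this abstract alludes to can be illustrated as a generic loop. This is a toy sketch, not the paper's procedure; all names, and the 8-bit instantiation used to exercise it, are illustrative:

```python
def abstract_solve(phi, over_approx, under_approx, max_rounds=10):
    """Decide phi via approximations: an unsatisfiable over-approximation
    proves UNSAT, a satisfiable under-approximation proves SAT; otherwise
    the abstraction is refined and the loop retries."""
    for precision in range(max_rounds):
        if not over_approx(phi, precision):   # over-approx UNSAT => phi UNSAT
            return False
        if under_approx(phi, precision):      # under-approx SAT => phi SAT
            return True
    raise RuntimeError("no conclusive abstraction within the budget")

# Toy instantiation over 8-bit values: the under-approximation searches only
# the values expressible with `precision` low bits; the over-approximation
# additionally assumes any value outside that range might satisfy phi.
def under_approx(phi, precision):
    return any(phi(v) for v in range(2 ** precision))

def over_approx(phi, precision):
    return under_approx(phi, precision) or 2 ** precision < 256
```

With these definitions, `abstract_solve(lambda v: v == 200, over_approx, under_approx)` concludes SAT only once the under-approximation is refined enough to reach 200, while an unsatisfiable predicate is rejected as soon as the over-approximation becomes exact.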
Certifying the floating-point implementation of an elementary function using Gappa
 IEEE Transactions on Computers, 2010.
, 2011
Cited by 8 (3 self)
High confidence in floating-point programs requires proving numerical properties of final and intermediate values. One may need to guarantee that a value stays within some range, or that the error relative to some ideal value is well bounded. This certification may require a time-consuming proof for each line of code, and it is usually broken by the smallest change to the code, e.g., for maintenance or optimization purposes. Certifying floating-point programs by hand is, therefore, very tedious and error-prone. The Gappa proof assistant is designed to make this task both easier and more secure, thanks to the following novel features: it automates the evaluation and propagation of rounding errors using interval arithmetic; its input format is very close to the actual code to validate; it can be used incrementally to prove complex mathematical properties pertaining to the code; and it generates a formal proof of the results, which can be checked independently by a lower-level proof assistant like Coq. Yet it does not require any specific knowledge about automatic theorem proving, and thus is accessible to a wide community. This paper demonstrates the practical use of this tool for a widely used class of floating-point programs: implementations of elementary functions in a mathematical library.
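The error-propagation idea the abstract describes (evaluate in interval arithmetic and widen each result by the worst-case rounding error) can be shown with a toy sketch. This is not Gappa's engine; `EPS`, `widen`, and the operator names are illustrative:

```python
# Toy sketch of rounding-error propagation with interval arithmetic, in the
# spirit of the abstract above (not Gappa's actual algorithm). EPS is the
# unit roundoff of IEEE-754 binary64; each operation's exact interval is
# widened to absorb one relative rounding error.
EPS = 2.0 ** -53

def widen(lo, hi):
    """Cover one rounding step: the rounded result may differ from the
    exact one by at most a factor of (1 +/- EPS)."""
    bound = max(abs(lo), abs(hi))
    return lo - bound * EPS, hi + bound * EPS

def iadd(a, b):
    return widen(a[0] + b[0], a[1] + b[1])

def imul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return widen(min(p), max(p))

# Bound x*y + x for x in [1, 2] and y in [3, 4], rounding included:
# the exact range [4, 10] grows by a few EPS-sized margins.
x, y = (1.0, 2.0), (3.0, 4.0)
lo, hi = iadd(imul(x, y), x)
```

Gappa's real machinery combines such interval propagation with rewriting rules and produces a checkable proof, which a sketch like this obviously does not.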
Formal specification of MPI 2.0: Case study in specifying a practical concurrent programming API
, 2009
Cited by 6 (5 self)
We describe the first formal specification of a non-trivial subset of MPI, the dominant communication API in high-performance computing. Engineering a formal specification for a non-trivial concurrency API requires the right combination of rigor, executability, and traceability, while also serving as a smooth elaboration of a pre-existing informal specification. It also requires the modularization of reusable specification components to keep the length of the specification in check. Long-lived APIs such as MPI are not usually 'textbook minimalistic' because they support a diverse array of applications and a diverse community of users, and have had efficient implementations over decades of computing hardware. We choose the TLA+ notation to write our specifications, and describe how we organized the specification of around 200 of the 300 MPI 2.0 functions. We detail a handful of these functions in this paper, and assess our specification with respect to the aforementioned requirements. We close with a description of possible approaches that may help render the act of writing, understanding, and validating the specifications of concurrency APIs much more productive.
Faster floating-point square root for integer processors
Cited by 6 (4 self)
This paper presents work in progress on fast and accurate floating-point arithmetic software for ST200-based embedded systems. We show how to use some key architectural features to design codes that achieve correct rounding-to-nearest without sacrificing efficiency. This is illustrated with the square root function, whose implementation given here is over 35% faster than the previously best one for such systems.
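Correct rounding-to-nearest, the property such an implementation must achieve, can be checked exactly without computing the irrational root to high precision. A sketch (the function name is ours, and exact ties on a rounding midpoint are not handled):

```python
import math
from fractions import Fraction

def is_correctly_rounded_sqrt(x: float) -> bool:
    """True if math.sqrt(x) is the float nearest to the exact root of x.
    The exact root is irrational in general, so instead of computing it we
    check that it lies between the midpoints to r's float neighbours:
    mid_lo <= sqrt(x) <= mid_hi  iff  mid_lo**2 <= x <= mid_hi**2,
    an exact rational comparison (squaring is monotone on non-negatives).
    Ties exactly on a midpoint are ignored in this sketch."""
    r = math.sqrt(x)
    lo = math.nextafter(r, 0.0)        # float neighbour below r
    hi = math.nextafter(r, math.inf)   # float neighbour above r
    mid_lo = (Fraction(lo) + Fraction(r)) / 2
    mid_hi = (Fraction(r) + Fraction(hi)) / 2
    return mid_lo ** 2 <= Fraction(x) <= mid_hi ** 2
```

On platforms where `sqrt` follows IEEE 754 (which mandates a correctly rounded square root), this check passes for every finite positive input; it is the kind of oracle one can use to validate a hand-written software square root.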
Floating-point verification
 International Journal of Man-Machine Studies
, 1995
Cited by 5 (0 self)
This paper overviews the application of formal verification techniques to hardware in general, and to floating-point hardware in particular. A specific challenge is to connect the usual mathematical view of continuous arithmetic operations with the discrete world, in a credible and verifiable way.
Certifying floating-point implementations using Gappa
 ensl-00200830, arXiv:0801.0523, École Normale Supérieure de Lyon, 2007, http://prunel.ccsd.cnrs.fr/ensl-00200830/
Mutation-based Test Case Generation for Simulink Models
The Matlab/Simulink language has become the standard formalism for modeling and implementing control software in areas like avionics, automotive, railway, and process automation. Such software is often safety-critical, and bugs have potentially disastrous consequences for the people and material involved. We define a verification methodology to assess the correctness of Simulink programs by means of automated test-case generation. In the style of fault- and mutation-based testing, the coverage of a Simulink program by a test suite is defined in terms of the detection of injected faults. Using bounded model checking techniques, we are able to effectively and automatically compute test suites for given fault models. Several optimisations are discussed to make the approach practical for realistic Simulink programs and fault models, and to obtain accurate coverage measures.
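The coverage notion described in this abstract (a test suite covers an injected fault if some test detects the mutant) reduces to a simple kill count. A minimal sketch with hypothetical toy programs, not the paper's Simulink machinery:

```python
def mutation_score(program, mutants, tests):
    """Fraction of mutants 'killed', i.e. distinguished from the original
    program by at least one test input. This is the coverage measure the
    abstract defines; real mutants there are edits to a Simulink model,
    and the killing inputs are found by bounded model checking rather
    than supplied by hand as in this toy."""
    killed = sum(
        1 for mutant in mutants
        if any(mutant(t) != program(t) for t in tests)
    )
    return killed / len(mutants)

# Toy example: two detectable faults and one equivalent mutant.
original = lambda x: 2 * x + 1
mutants = [
    lambda x: 2 * x - 1,   # killed by x = 0
    lambda x: x + 1,       # killed by x = 3 (agrees with original at x = 0)
    lambda x: 2 * x + 1,   # equivalent mutant: can never be killed
]
score = mutation_score(original, mutants, tests=[0, 3])
```

Equivalent mutants, as in the third entry, cap the achievable score below 1.0 and are one reason accurate coverage measures require care, as the abstract notes.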
Software Implementations of Division and Square Root Operations for Intel® Itanium® Processors
Division and square root are basic operations defined by the IEEE Standard 754-1985 for Binary Floating-Point Arithmetic [1], and are implemented in hardware in most modern processors. In recent years, however, software implementations of these operations have become competitive. The first IEEE-correct software implementations of the division and square root operations in a mainstream processor appeared in the 1980s [2]. Since then, several major processor architectures have adopted similar solutions for division and square root algorithms, including the Intel® Itanium® Processor Family (IPF). Since the first software algorithms for division and square root were designed and used, improved algorithms have been found and complete correctness proofs carried out. It may be possible to improve these algorithms even further. The present paper gives an overview of the IEEE-correct division and square root algorithms for Itanium processors. As examples, a few algorithms for single precision are presented, and properties used in proving their IEEE correctness are stated. Non-IEEE variants of the division, square root, and also reciprocal and reciprocal square root operations, less accurate but faster, are discussed. Finally, accuracy and performance numbers are given. The algorithms presented here are inlined by the Intel and other compilers for IPF whenever division and square root operations are performed.
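The iterative scheme underlying such software division, refining a reciprocal estimate with Newton–Raphson, can be sketched in plain arithmetic. This illustrates only the iteration, not the Itanium sequences, which seed from the hardware `frcpa` estimate and use fused multiply-adds to guarantee a correctly rounded quotient (something plain Python arithmetic cannot do):

```python
import math

def nr_divide(a: float, b: float, iterations: int = 4) -> float:
    """Approximate a / b for b > 0 by Newton-Raphson reciprocal refinement:
    each step y <- y * (2 - m*y) roughly doubles the number of correct bits
    in the estimate of 1/m. Sketch only; not an IEEE-correct sequence."""
    m, e = math.frexp(b)            # b = m * 2**e with 0.5 <= m < 1
    y = 48/17 - (32/17) * m         # classic linear seed for 1/m (~4-5 bits)
    for _ in range(iterations):
        y = y * (2.0 - m * y)       # quadratic convergence toward 1/m
    return a * math.ldexp(y, -e)    # a / b = a * (1/m) * 2**-e
```

Four iterations from this seed drive the relative error of the reciprocal below the double-precision unit roundoff, so the final quotient is accurate to within a few ulps, though without the last-bit IEEE guarantee the FMA-based sequences achieve.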
Performance analysis of a threshold-based discrete-time queue
 Simulation Modelling Practice and Theory (Elsevier)