Results 1–10 of 25
Motivations for an arbitrary precision interval arithmetic and the MPFI library
Reliable Computing, 2002
Abstract
Cited by 35 (7 self)
Nowadays, computations involve more and more operations and, consequently, more rounding errors. The limits of applicability of some numerical algorithms are now being reached: for instance, the theoretical stability of a dense matrix factorization (LU or QR) is ensured under the assumption that n³u < 1, where n is the dimension of the matrix and u = (1⁺) − 1, with 1⁺ the smallest floating-point number larger than 1; this means that n must be less than about 200,000, a size almost reached by modern simulations. The numerical quality of solvers is now an issue, and not only their mathematical quality. Let us cite studies performed by the CEA (French Nuclear Agency) on the simulation of nuclear plant accidents, and also software controlling and possibly correcting numerical programs, such as Cadna [10] or Cena [20]. Another approach consists in computing with certified enclosures, namely interval arithmetic [21, 2, 18]. The fundamental principle of this arithmetic is to replace every number by an interval enclosing it. For instance, π cannot be exactly represented using a binary or decimal arithmetic, but it can be enclosed in an interval with representable endpoints.
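The enclosure principle described in this abstract can be sketched in a few lines. The following toy Python class is not MPFI (it uses fixed double precision, not arbitrary precision); it merely illustrates the idea by widening each result outward with `math.nextafter` so the exact value stays inside:

```python
import math

class Interval:
    """Toy interval: outward rounding via nextafter (not MPFI; fixed
    double precision rather than arbitrary precision)."""
    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi

    def __add__(self, other):
        # Widen both bounds by one ulp so the exact sum is enclosed
        # despite the rounding of the floating-point additions.
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __contains__(self, x):
        return self.lo <= x <= self.hi

# pi is not representable exactly, but it lies in this one-ulp interval:
pi_enc = Interval(math.pi, math.nextafter(math.pi, math.inf))
two_pi_enc = pi_enc + pi_enc   # still encloses the exact 2*pi
```

A real library such as MPFI additionally lets the endpoint precision grow on demand, which this double-precision sketch cannot do.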
Theory of real computation according to EGC
In Proceedings of the Dagstuhl Seminar on Reliable Implementation of Real Number Algorithms: Theory and Practice, Lecture Notes in Computer Science, 2006
RZ: A tool for bringing constructive and computable mathematics closer to programming practice
CiE 2007: Computation and Logic in the Real World, volume 4497 of LNCS, 2007
Abstract
Cited by 6 (3 self)
Realizability theory can produce code interfaces for the data structures corresponding to a mathematical theory. Our tool, called RZ, serves as a bridge between constructive mathematics and programming by translating specifications in constructive logic into annotated interface code in Objective Caml. The system supports a rich input language allowing descriptions of complex mathematical structures. RZ does not extract code from proofs, but allows any implementation method, from handwritten code to code extracted from proofs by other tools.
On the Stability of Fast Polynomial Arithmetic
Abstract
Cited by 3 (0 self)
Operations on univariate dense polynomials (multiplication, division with remainder, multipoint evaluation) constitute central primitives that enter as building blocks into many higher-level applications and algorithms. The Fast Fourier Transform accelerates them from naive quadratic to O(n · polylog n) running time, that is, softly linear in the degree n of the input. This is routinely employed in complexity-theoretic considerations and, over the integers and finite fields, in practical number-theoretic calculations. The present work explores the benefit of fast polynomial arithmetic over the field of real numbers, where the precision of approximation becomes crucial. To this end, we study the computability of the above operations in the sense of Recursive Analysis as an effective refinement of continuity. This theoretical worst-case stability analysis is then complemented by an empirical evaluation: we use GMP and the iRRAM to find the precision required for the intermediate calculations in order to achieve a desired output accuracy.
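As a rough illustration of the precision issue this abstract raises (a plain double-precision Python sketch, not the paper's GMP/iRRAM setup), the following code multiplies integer polynomials with a recursive FFT; the final rounding step recovers the exact coefficients only while the accumulated floating-point error stays below 1/2:

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of 2."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def poly_mul(p, q):
    """Multiply coefficient lists p, q (constant term first) in
    O(n log n) by pointwise multiplication in the Fourier domain."""
    n = 1
    while n < len(p) + len(q) - 1:
        n *= 2
    fp = fft([complex(x) for x in p] + [0j] * (n - len(p)))
    fq = fft([complex(x) for x in q] + [0j] * (n - len(q)))
    prod = fft([a * b for a, b in zip(fp, fq)], invert=True)
    # The inverse transform needs a 1/n scaling; rounding is only safe
    # while the floating-point error per coefficient is below 1/2.
    return [round((c / n).real) for c in prod[:len(p) + len(q) - 1]]

# (1 + x)(3 + 2x + x^2) = 3 + 5x + 3x^2 + x^3
print(poly_mul([1, 1], [3, 2, 1]))
```

Over the reals, where coefficients are not integers, no such rounding rescue exists, which is exactly why the paper's verified precision analysis is needed.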
The Design of Core 2: A Library for Exact Numeric Computation in Geometry and Algebra
In Third International Congress on Mathematical Software, volume 6327, Kobe, Japan, 2010
Abstract
Cited by 3 (0 self)
There is a growing interest in numeric-algebraic techniques in the computer algebra community, as such techniques can speed up many applications. This paper is concerned with one such approach, called Exact Numeric Computation (ENC). The ENC approach to algebraic number computation is based on iterative verified approximations, combined with constructive zero bounds. This paper describes Core 2, the latest version of the Core Library, a package designed for applications such as nonlinear computational geometry. The adaptive complexity of ENC combined with filters makes such libraries practical. Core 2 smoothly integrates our algebraic ENC subsystem with transcendental functions with ε-accurate comparisons. This paper describes how the design of Core 2 addresses key software issues such as modularity, extensibility, and efficiency in a setting that combines algebraic and transcendental elements. Our redesign preserves the original goals of the Core Library, namely, to provide a simple and natural interface for ENC computation to support rapid prototyping and exploration. We present examples, experimental results, and timings for our new system, released as Core Library 2.0.
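The ENC idea of deciding a comparison by refining verified approximations until they either separate or fall within a constructive zero bound can be sketched as follows. This is a hypothetical Python illustration built on integer square roots, not the Core Library's actual filters or root bounds, and `max_digits` merely stands in for a real zero bound:

```python
from math import isqrt

def compare_sqrt2_squared_with_2(max_digits=50):
    """Refine verified enclosures of sqrt(2)*sqrt(2) until they
    separate from 2, or the precision exceeds the (illustrative)
    zero bound max_digits, in which case equality is declared."""
    for p in range(1, max_digits + 1):
        scale = 10 ** p
        lo = isqrt(2 * scale * scale)        # floor(sqrt(2) * 10^p)
        hi = lo + 1                          # verified upper bound
        prod_lo, prod_hi = lo * lo, hi * hi  # enclosure of the product
        target = 2 * scale * scale           # the number 2 at this scale
        if prod_hi < target:
            return "<"
        if prod_lo > target:
            return ">"
    return "="  # enclosures never separate: equal up to the zero bound

print(compare_sqrt2_squared_with_2())
```

Since sqrt(2)·sqrt(2) really equals 2, refinement alone can never terminate; the zero bound is what turns the loop into a decision procedure, which is the core point of the ENC approach.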
M.: Time complexity and convergence analysis of domain-theoretic Picard method. Extended version available from http://wwwusers.aston.ac.uk/~farjudia/AuxFiles/2008Picard.pdf, 2008
Abstract
Cited by 2 (1 self)
We present an implementation of the domain-theoretic Picard method for solving initial value problems (IVPs) introduced by Edalat and Pattinson [1]. Compared to Edalat and Pattinson's implementation, our algorithm uses a more efficient arithmetic based on an arbitrary precision floating-point library. Despite the additional overestimations due to floating-point rounding, we obtain a similar bound on the convergence rate of the produced approximations. Moreover, our convergence analysis is detailed enough to allow a static optimisation of the growth of the precision used in successive Picard iterations. Such optimisation greatly improves the efficiency of the solving process. Although a similar optimisation could be performed dynamically without our analysis, a static one gives us a significant advantage: we are able to predict the time it will take the solver to obtain an approximation of a given (arbitrarily high) quality.
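For readers unfamiliar with Picard iteration itself, here is a minimal classical sketch (exact rational arithmetic, no validation; the paper's method instead works with domain-theoretic enclosures and floating-point precision control) for the IVP y' = y, y(0) = 1, where each step integrates the previous polynomial approximation:

```python
from fractions import Fraction

def picard_step(coeffs):
    """One Picard iteration for y' = y, y(0) = 1:
    y_{k+1}(t) = 1 + integral_0^t y_k(s) ds,
    acting on a list of polynomial coefficients (constant term first)."""
    integral = [Fraction(0)] + [c / Fraction(n + 1)
                                for n, c in enumerate(coeffs)]
    integral[0] += 1  # add the initial condition y(0) = 1
    return integral

y = [Fraction(1)]          # y_0(t) = 1
for _ in range(6):
    y = picard_step(y)
print(y[:4])               # coefficients 1, 1, 1/2, 1/6 of exp's series
```

Each iteration fixes one more Taylor coefficient of the exact solution exp(t); the paper's contribution is bounding how interval and rounding overestimations degrade, and precision choices restore, this convergence.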
Guaranteed Precision for Transcendental and Algebraic Computation made Easy
2006
Semantics of Query-Driven Communication of Exact Values
Abstract
We address the question of how to communicate values such as real numbers, continuous functions, and geometrical solids among distributed processes with arbitrary precision, yet efficiently. We extend the established concept of lazy communication using streams of approximants by introducing explicit queries. We formalise this approach using protocols of a query-answer nature. Such protocols enable processes to provide valid approximations with a certain accuracy, and focused on a certain locality, as demanded by the receiving processes through queries. A lattice-theoretic denotational semantics of channel and process behaviour is developed. The query space is modelled as a continuous lattice in which the top element denotes the query demanding all the information, whereas other elements denote queries demanding partial and/or local information. Answers are interpreted as elements of lattices constructed over suitable domains of approximations to the exact objects. An unanswered query is treated as an error and denoted using the top element. The major novel characteristic of our semantic model is that it reflects the dependency of answers on queries. This enables the definition and analysis of an appropriate concept of convergence rate, by assigning an effort indicator to each query and a measure of information content to each answer. Thus we capture not only what function a process computes, but also how a process transforms the convergence rates from its inputs to its outputs. In future work these indicators can be used to capture further computational complexity measures. A robust prototype implementation of our model is available.
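A minimal sketch of the query-answer idea (hypothetical Python, not the paper's formal protocol or implementation): a producing process answers each query with a verified approximant at the demanded accuracy, and the consumer drives refinement by issuing sharper queries:

```python
from math import isqrt

def answer(query_digits):
    """Producing process: answer a query demanding `query_digits`
    correct decimals of sqrt(2), as a scaled-integer approximant.
    (Hypothetical protocol shape, for illustration only.)"""
    scale = 10 ** query_digits
    return isqrt(2 * scale * scale)   # floor(sqrt(2) * 10^digits)

def consume(target_digits):
    """Receiving process: rather than passively taking a stream,
    it drives refinement by sending increasingly sharp queries."""
    digits = 1
    value = answer(digits)
    while digits < target_digits:
        digits = min(2 * digits, target_digits)  # demand more accuracy
        value = answer(digits)
    return value, digits

value, digits = consume(20)
print(value)   # sqrt(2) to 20 decimals, as an integer scaled by 10^20
```

The "effort indicator" of the abstract would attach a cost to each `answer(digits)` call; doubling the demanded accuracy each round keeps the total effort proportional to the cost of the final, sharpest query.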
Function Interval Arithmetic
Abstract
We propose an arithmetic of function intervals as a basis for convenient rigorous numerical computation. Function intervals can be used as mathematical objects in their own right or as enclosures of functions over the reals. We present two areas of application of function interval arithmetic and the associated software that implements the arithmetic: (1) validated ordinary differential equation solving, using the AERN library and within the Acumen hybrid system modeling tool; (2) numerical theorem proving, using the PolyPaver prover.
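A toy sketch of a function interval (piecewise-constant lower/upper bounds on a fixed grid, which is an assumption of this sketch and not the AERN representation) shows how enclosures of functions can be combined pointwise while preserving the enclosure property:

```python
class FnInterval:
    """Toy function enclosure on [0, 1]: parallel lists of
    piecewise-constant lower and upper bounds, one pair per grid piece."""
    def __init__(self, lo, hi):
        assert len(lo) == len(hi)
        assert all(a <= b for a, b in zip(lo, hi))
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Pointwise sum of enclosures encloses the sum of the functions.
        return FnInterval([a + b for a, b in zip(self.lo, other.lo)],
                          [a + b for a, b in zip(self.hi, other.hi)])

    def contains(self, f, samples):
        """Check that f's value at one sample per piece lies inside."""
        return all(lo <= f(x) <= hi
                   for lo, hi, x in zip(self.lo, self.hi, samples))

# Enclosure of x -> x on [0, 1] with 4 pieces: on piece i the function
# stays between the piece's left and right endpoints.
n = 4
ident = FnInterval([i / n for i in range(n)],
                   [(i + 1) / n for i in range(n)])
double = ident + ident   # encloses x -> 2x
```

Real function interval arithmetic, as in AERN, uses polynomial rather than constant bounds and supports multiplication, composition, and integration, but the enclosure-preservation principle is the same.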
The Trouble with Real Numbers (Invited Paper)
 www.informatik2011.de
Abstract
Comprehensive analytical modeling and simulation of cyber-physical systems is an integral part of the process that brings novel designs and products to life. But the effort needed to go from analytical models to running simulation code can impede or derail this process. Our thesis is that this process is amenable to automation, and that automating it will accelerate the pace of innovation. This paper reviews some basic concepts that we found interesting or thought-provoking, and articulates some questions that may help prove or disprove this thesis. While based on ideas drawn from different disciplines outside programming languages, all these observations and questions pertain to how we need to reason and compute with real numbers.