MPFR: A multiple-precision binary floating-point library with correct rounding
 ACM Trans. Math. Softw.
, 2007
Abstract

Cited by 70 (14 self)
This paper presents a multiple-precision binary floating-point library, written in the ISO C language, and based on the GNU MP library. Its particularity is to extend to arbitrary precision ideas from the IEEE 754 standard, by providing correct rounding and exceptions. We demonstrate how these strong semantics are achieved — with no significant slowdown with respect to other arbitrary-precision tools — and discuss a few applications where such a library can be useful. Categories and Subject Descriptors: D.3.0 [Programming Languages]: General—Standards; G.1.0 [Numerical Analysis]: General—computer arithmetic, multiple-precision arithmetic; G.1.2 [Numerical Analysis]: Approximation—elementary and special function approximation; G.4 [Mathematics of Computing]: Mathematical Software—algorithm design, efficiency, portability
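Correct rounding has a precise meaning: the computed result equals the exact mathematical result rounded to the working precision in the chosen direction. As a rough analogy only (this is a sketch using Python's stdlib `decimal` module, which is decimal rather than binary, not MPFR itself; `MPFR_RNDU` is MPFR's name for rounding toward +infinity), the same guarantee can be observed with directed rounding modes:

```python
from decimal import Decimal, getcontext, ROUND_HALF_EVEN, ROUND_CEILING

ctx = getcontext()
ctx.prec = 20                   # 20 significant digits of working precision
ctx.rounding = ROUND_HALF_EVEN  # IEEE 754 default: round to nearest, ties to even

down = Decimal(1) / Decimal(3)  # exact 0.333... correctly rounded to 20 digits
print(down)                     # 0.33333333333333333333

ctx.rounding = ROUND_CEILING    # directed rounding, analogous to MPFR_RNDU
up = Decimal(1) / Decimal(3)
print(up)                       # 0.33333333333333333334

# The two results bracket the exact value and differ by one unit in the
# last place -- the hallmark of correct rounding.
assert up - down == Decimal("1E-20")
```

Every rounding mode yields a uniquely defined answer, which is what makes correctly rounded results reproducible across implementations.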
A nearly linear-time approximation scheme for the Euclidean k-median problem
, 1999
Abstract

Cited by 38 (0 self)
In the k-median problem we are given a set N of n points in a metric space and a positive integer k. The objective is to locate k medians among the points so that the sum of the distances from each point in N to its closest median is minimized. The k-median problem is a well-studied, NP-hard, basic clustering problem, which is closely related to facility location. Obtaining constant-factor approximations for this problem, even for the 2-dimensional Euclidean metric, had long been an elusive goal. First Arora, Raghavan and Rao gave a randomized polynomial-time approximation scheme by extending techniques introduced originally by Arora for the Euclidean TSP. For any fixed ε > 0, their algorithm outputs a (1 + ε)-approximation in O(nkn log n) time.
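The objective being approximated can be written down directly. A minimal brute-force sketch (hypothetical helper names; exponential in k, so usable only on toy instances, which is precisely the cost the approximation schemes above avoid):

```python
import math
from itertools import combinations

def kmedian_cost(points, medians):
    # Sum over all points of the Euclidean distance to the nearest median.
    return sum(min(math.dist(p, m) for m in medians) for p in points)

def kmedian_brute_force(points, k):
    # Optimal k medians chosen among the input points, by exhaustive search.
    return min(combinations(points, k),
               key=lambda ms: kmedian_cost(points, ms))

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (10.0, 10.0), (11.0, 10.0)]
best = kmedian_brute_force(pts, 2)
# One median serves the cluster near the origin, the other the far pair.
```

On this toy instance the optimum places one median at the origin (total cost 3.0); the schemes discussed above reach within a (1 + ε) factor of that cost in nearly linear time.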
FALCON: A MATLAB Interactive Restructuring Compiler
 IN LANGUAGES AND COMPILERS FOR PARALLEL COMPUTING
, 1995
Abstract

Cited by 33 (10 self)
The development of efficient numerical programs and library routines for high-performance parallel computers is a complex task requiring not only an understanding of the algorithms to be implemented, but also detailed knowledge of the target machine and the software environment. In this paper, we describe a programming environment that can utilize such knowledge for the development of high-performance numerical programs and libraries. This environment uses an existing high-level array language (MATLAB) as source language and performs static, dynamic, and interactive analysis to generate Fortran 90 programs with directives for parallelism. It includes capabilities for interactive and automatic transformations at both the operation level and the functional or algorithm level. Preliminary experiments, comparing interpreted MATLAB programs with their compiled versions, show that compiled programs can perform up to 48 times faster on a serial machine, and up to 140 times faster ...
Uniform Random Generation of Decomposable Structures Using Floating-Point Arithmetic
 THEORETICAL COMPUTER SCIENCE
, 1997
Abstract

Cited by 30 (2 self)
The recursive method formalized by Nijenhuis and Wilf [15] and systematized by Flajolet, Van Cutsem and Zimmermann [8] is extended here to floating-point arithmetic. The resulting ADZ method enables one to generate decomposable data structures, both labelled and unlabelled, uniformly at random, in expected O(n^{1+ε}) time and space, after a preprocessing phase of O(n^{2+ε}) time, which reduces to O(n^{1+ε}) for context-free grammars.
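The recursive method itself is easy to sketch. A small illustration with binary trees counted by the Catalan numbers, using exact integer counts (the exact-arithmetic regime whose preprocessing and generation cost the ADZ method reduces by switching to floating-point approximations; all names here are illustrative):

```python
import random
from functools import lru_cache

@lru_cache(maxsize=None)
def count(n):
    # Number of binary trees with n internal nodes (the Catalan numbers).
    if n == 0:
        return 1
    return sum(count(i) * count(n - 1 - i) for i in range(n))

def gen(n, rng=random):
    # Draw a binary tree with n internal nodes uniformly at random.
    if n == 0:
        return None
    # Choose the left-subtree size i with probability proportional to the
    # number of trees having that split -- the heart of the recursive method.
    r = rng.randrange(count(n))
    for i in range(n):
        w = count(i) * count(n - 1 - i)
        if r < w:
            return (gen(i, rng), gen(n - 1 - i, rng))
        r -= w
```

Because each split size is chosen with weight proportional to its count, every tree of size n is produced with probability 1/count(n); the counts themselves are the preprocessing phase.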
Multivariate Statistical Techniques for Parallel Performance Prediction
 IN PROC. 28TH HAWAII INT. CONF. ON SYSTEM SCIENCES, VOL. II, IEEE
, 1995
Abstract

Cited by 24 (4 self)
Performance prediction can play an important role in improving the efficiency of multicomputers in executing scalable parallel applications. An accurate model of program execution time must include detailed algorithmic and architectural characterizations. The exact values for critical model parameters such as message latency and cache miss penalty can often be difficult to determine. This research uses multivariate data analysis to estimate the values of these coefficients in an analytical model. Representing the coefficients as random variables with a specified mean and variance improves the utility of a performance model. Confidence intervals for predicted execution time can be generated using the standard error values for model parameters. Improvements in the model can also be made by investigating the cause of large variance values for a particular architecture.
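One simple instance of the idea of representing a model coefficient as a random variable with a mean and standard error can be sketched with ordinary least squares on made-up data (the message sizes and timings below are invented for illustration; a rough 95% interval uses plus or minus two standard errors):

```python
import math

def ols_with_stderr(x, y):
    # Fit y = a + b*x by least squares; return a, b, and the standard
    # error of the slope b.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    s2 = sum(r * r for r in resid) / (n - 2)  # residual variance
    se_b = math.sqrt(s2 / sxx)                # standard error of the slope
    return a, b, se_b

# Hypothetical calibration runs: message size (KB) vs. transfer time (ms).
sizes = [1, 2, 4, 8, 16]
times = [1.1, 1.9, 4.2, 7.8, 16.1]
a, b, se = ols_with_stderr(sizes, times)
low, high = b - 2 * se, b + 2 * se  # rough 95% interval for the coefficient
```

Propagating the interval (low, high) for each coefficient through the analytical model yields the kind of confidence interval on predicted execution time described above.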
Implementing Non-Linear Constraints With Cooperative Solvers
, 1995
Abstract

Cited by 23 (12 self)
We investigate the use of cooperation between solvers in the scheme of constraint logic programming languages over the domain of nonlinear polynomial constraints. Instead of using a general and often inefficient decision procedure we propose a new approach for handling these constraints by cooperating specialised solvers. Our approach requires the design of a client/server architecture to enable communication between the various components. The main modules are a linear solver, a nonlinear solver, a constraint manager, a communication protocol component and an answer processor module. This work is motivated by the need for a declarative system for robot motion planning and geometric problem solving. We have implemented a prototype called CoSAc (Constraint System Architecture) to validate our approach using cooperating solvers for nonlinear constraints over the real numbers. Our language is illustrated by an example that also shows the advantages of cooperation.
Automatic construction of accurate models of physical systems
 IN PROC. 8TH INTERNATIONAL WORKSHOP ON QUALITATIVE REASONING, NARA
, 1994
Computer algebra meets automated theorem proving: Integrating Maple and PVS
 Theorem Proving in Higher Order Logics (TPHOLs 2001), volume 2152 of LNCS
, 2001
Efficient Multiprecision Floating Point Multiplication with Exact Rounding
, 1993
Abstract

Cited by 15 (3 self)
An algorithm is described for multiplying multiprecision floating point numbers. The returned result is equal to the floating point number obtained by rounding the exact product. Software implementations of multiprecision floating point multiplication can reduce the computing time by a factor of two if they do not compute the low order digits of the product of the two mantissas. However, these algorithms do not necessarily provide exactly rounded results. The algorithm described in this paper is guaranteed to produce exactly rounded results and typically obtains the same savings.

1 Introduction

We present an algorithm for multiplying multiprecision floating point numbers. The returned result is equal to the floating point number obtained by rounding the exact product. A rounding operation which satisfies this requirement is called exact rounding. Exact rounding provides a well-defined, implementation-independent semantics for floating point arithmetic. For this reason, floating point ...
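What "exactly rounded" demands can be made concrete with integer mantissas. The sketch below computes the full double-length product and rounds to nearest with ties to even; it is the naive baseline that the paper's short-product algorithm improves on, and it works in decimal for readability (the function name and conventions are illustrative):

```python
def round_product(m1, m2, p):
    # Round the exact product of two mantissas back to p decimal digits,
    # round-to-nearest with ties broken toward an even last digit.
    # Returns (rounded mantissa, number of digits dropped). Rounding up can
    # carry into a (p+1)-digit mantissa (e.g. 999 -> 1000); a real
    # implementation renormalizes, which this sketch omits.
    exact = m1 * m2                      # full double-length product
    shift = max(len(str(exact)) - p, 0)  # low-order digits to drop
    q, r = divmod(exact, 10 ** shift)
    if 2 * r > 10 ** shift or (2 * r == 10 ** shift and q % 2 == 1):
        q += 1
    return q, shift
```

For example, 123 x 456 = 56088, whose correctly rounded three-digit mantissa is 561; a short-product scheme that skips the low-order digits must still recover this exact answer, which is the difficulty the paper addresses.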