MPFUN: A Portable High Performance Multiprecision Package
, 1990
Abstract
Cited by 48 (4 self)
The author has written a package of Fortran routines that perform a variety of arithmetic operations and transcendental functions on floating point numbers of arbitrarily high precision, including large integers. This package features (1) virtually universal portability, (2) high performance, especially on vector supercomputers, (3) advanced algorithms, including FFT-based multiplication and quadratically convergent algorithms for π and transcendental functions, and (4) extensive self-checking and debug facilities that permit the package to be used as a rigorous system integrity test. Converting application programs to run with these routines is facilitated by an automatic translator program. This paper describes the routines in the package and includes discussion of the algorithms employed, the implementation techniques, performance results and some applications. Notable among the performance results is that this package runs up to 40 times faster than another widely used package on a RISC workstation, and it runs up to 400 times faster than the other package on a Cray supercomputer.
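The "quadratically convergent algorithms for π" the abstract mentions can be illustrated with the classical Gauss-Legendre (AGM) iteration, which roughly doubles the number of correct digits per step. This is a sketch of the general technique using Python's standard `decimal` module, not MPFUN's Fortran implementation; the iteration count and guard-digit margin are illustrative choices.

```python
from decimal import Decimal, getcontext

def pi_gauss_legendre(digits=50, iterations=8):
    # Gauss-Legendre AGM iteration: quadratically convergent, so each
    # pass roughly doubles the number of correct digits of pi.
    getcontext().prec = digits + 10  # extra guard digits
    a = Decimal(1)
    b = Decimal(1) / Decimal(2).sqrt()
    t = Decimal(1) / 4
    p = Decimal(1)
    for _ in range(iterations):
        a_next = (a + b) / 2
        b = (a * b).sqrt()
        t -= p * (a - a_next) ** 2
        p *= 2
        a = a_next
    return (a + b) ** 2 / (4 * t)

print(pi_gauss_legendre())  # agrees with pi to roughly 50 digits
```

Eight iterations already exceed 50-digit accuracy; the same doubling behaviour is what makes such algorithms attractive at very high precision.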
Global illumination using local linear density estimation
 Proceedings of SIGGRAPH 97
, 1997
Abstract
Cited by 45 (6 self)
This article presents the density estimation framework for generating view-independent global illumination solutions. It works by probabilistically simulating the light flow in an environment with light particles that trace random walks originating at luminaires and then using statistical density estimation techniques to reconstruct the lighting on each surface. By splitting the computation into separate transport and reconstruction stages, we gain many advantages including reduced memory usage, the ability to simulate non-diffuse transport, and natural parallelism. Solutions to several theoretical and practical difficulties in implementing this framework are also described. Light sources that vary spectrally and directionally are integrated into a spectral particle tracer using non-uniform rejection. A new local linear density estimation technique eliminates boundary bias and extends to arbitrary polygons. A mesh decimation algorithm with perceptual calibration is introduced to simplify the Gouraud-shaded ...
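The reconstruction stage described above can be sketched in miniature: given particle hit points on a surface, estimate relative illumination density at a query point with a kernel density estimator. This toy uses a plain Epanechnikov kernel rather than the paper's local *linear* estimator (which additionally removes boundary bias); the function name and parameters are illustrative.

```python
import math

def kde_density(hits, query, bandwidth):
    # Kernel density estimate over 2D particle hit points: a simplified
    # stand-in for the reconstruction stage of the density-estimation
    # framework. hits = list of (x, y) hit points on one surface.
    qx, qy = query
    total = 0.0
    for x, y in hits:
        u = math.hypot(x - qx, y - qy) / bandwidth
        if u < 1.0:
            total += (2.0 / math.pi) * (1.0 - u * u)  # 2D Epanechnikov kernel
    return total / (len(hits) * bandwidth ** 2)

# Hit points clustered near the origin yield a higher estimate there
# than at a distant query point.
hits = [(0.1 * i, 0.05 * i) for i in range(-10, 11)]
print(kde_density(hits, (0.0, 0.0), 1.0), kde_density(hits, (5.0, 5.0), 1.0))
```

Because transport (generating `hits`) is decoupled from this reconstruction step, the two stages can use different data layouts and parallelize independently, which is the advantage the abstract emphasizes.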
Automated Modeling of Complex Systems to Answer Prediction Questions
 ARTIFICIAL INTELLIGENCE
, 1995
The Nuprl Open Logical Environment
, 2000
Abstract
Cited by 44 (16 self)
The Nuprl system is a framework for reasoning about mathematics and programming. Over the years its design has been substantially improved to meet the demands of large-scale applications. Nuprl LPE, the newest release, features an open, distributed architecture centered around a flexible knowledge base and supports the cooperation of independent formal tools. This paper gives a brief overview of the system and the objectives that are addressed by its new architecture.
Experimental Evaluation of Euler Sums
, 1993
Abstract
Cited by 41 (10 self)
In response to a letter from Goldbach, Euler considered sums of the form $\sum_{k=1}^{\infty}\left(1+\frac{1}{2^m}+\cdots+\frac{1}{k^m}\right)(k+1)^{-n}$ for positive integers m and n. Euler was able to give explicit values for certain of these sums in terms of the Riemann zeta function. In a recent companion paper, Euler's results were extended to a significantly larger class of sums of this type, including sums with alternating signs. This research was facilitated by numerical computations using an algorithm that can determine, with high confidence, whether or not a particular numerical value can be expressed as a rational linear combination of several given constants. The present paper presents the numerical techniques used in these computations and lists many of the experimental results that have been obtained.
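These sums are easy to approximate by direct summation, and one classical case has a known closed form: for m = 1, n = 2 the sum equals ζ(3). The sketch below checks this numerically; it reproduces only the naive summation, not the paper's integer-relation detection machinery.

```python
def euler_sum(m, n, terms):
    # Partial sum of  sum_{k>=1} (1 + 1/2^m + ... + 1/k^m) * (k+1)^(-n),
    # the family of sums Euler considered, by direct accumulation of the
    # inner harmonic-type sum.
    h = 0.0
    total = 0.0
    for k in range(1, terms + 1):
        h += 1.0 / k ** m          # running value of 1 + 1/2^m + ... + 1/k^m
        total += h * (k + 1) ** (-n)
    return total

# Classical identity: the (m=1, n=2) sum equals zeta(3) = 1.2020569...
approx = euler_sum(1, 2, 200_000)
zeta3 = 1.2020569031595943
print(approx, abs(approx - zeta3))  # truncation error is small
```

The slow (roughly ln N / N) convergence of the tail is exactly why high-precision acceleration and integer-relation algorithms, as in the paper, are needed to conjecture closed forms reliably.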
Recursive implementation of the Gaussian filter
, 1995
Abstract
Cited by 41 (2 self)
In this paper we propose a recursive implementation of the Gaussian filter. This implementation yields an infinite impulse response filter that has six MADDs per dimension independent of the value of σ in the Gaussian kernel. In contrast to the Deriche implementation (1987), the coefficients of our recursive filter have a simple, closed-form solution for a desired value of the Gaussian sigma. Our implementation is, in general, faster than (1) an implementation based upon direct convolution with samples of a Gaussian, (2) repeated convolutions with a kernel such as the uniform filter, and (3) an FFT implementation of a Gaussian filter.
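The key property of recursive (IIR) schemes is constant work per sample regardless of σ. That property is easiest to see in alternative (2) from the abstract, a running-sum box filter repeated a few times to approximate a Gaussian; the sketch below implements that alternative, not the paper's six-MADD filter, and the radius/pass counts are illustrative.

```python
def box_blur(signal, radius):
    # Running-sum box filter: each output sample costs one add and one
    # subtract, independent of the radius -- the same O(1)-per-sample
    # behaviour that makes recursive Gaussian filters fast for large sigma.
    n = len(signal)
    width = 2 * radius + 1
    padded = [signal[0]] * radius + list(signal) + [signal[-1]] * radius
    s = sum(padded[:width])
    out = [s / width]
    for i in range(1, n):
        s += padded[i + width - 1] - padded[i - 1]  # slide the window
        out.append(s / width)
    return out

def approx_gaussian_blur(signal, radius):
    # Three repeated box passes approximate a Gaussian (central limit
    # theorem); this is alternative (2) that the paper outperforms.
    for _ in range(3):
        signal = box_blur(signal, radius)
    return signal

impulse = [0.0] * 10 + [1.0] + [0.0] * 10
print(approx_gaussian_blur(impulse, 2))  # bell-shaped response, unit mass
```

The paper's contribution is an IIR filter with the same per-sample cost but a true closed-form mapping from σ to filter coefficients, avoiding the box filter's coarse σ quantization.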
SEEKing the Truth about Ad Hoc Join Costs
 VLDB Journal
, 1993
Abstract
Cited by 39 (4 self)
In this paper, we reexamine the results of prior work on methods for computing ad hoc joins. We develop a detailed cost model for predicting join algorithm performance, and we use the model to develop cost formulas for the major ad hoc join methods found in the relational database literature. We show that various pieces of "common wisdom" about join algorithm performance fail to hold up when analyzed carefully, and we use our detailed cost model to derive optimal buffer allocation schemes for each of the join methods examined here. We show that optimizing their buffer allocations can lead to large performance improvements, e.g., as much as a 400% improvement in some cases. We also validate our cost model's predictions by measuring an actual implementation of each join algorithm considered. The results of this work should be directly useful to implementors of relational query optimizers and query processing systems.
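As background for the kind of cost formulas the abstract describes, here is the textbook page-I/O model for two common ad hoc join methods. These are deliberately simple estimates (they ignore seek/transfer distinctions and the buffer-allocation refinements that are the paper's actual subject); relation sizes and buffer counts are hypothetical.

```python
import math

def block_nested_loops_io(outer_pages, inner_pages, buffer_pages):
    # Block nested-loops join: read the outer relation once, and the
    # inner relation once per outer chunk that fits in the buffer pool
    # (reserving one page for inner input and one for output).
    chunk = buffer_pages - 2
    passes = math.ceil(outer_pages / chunk)
    return outer_pages + passes * inner_pages

def grace_hash_io(outer_pages, inner_pages):
    # GRACE hash join: partition both relations (one read + one write
    # of each), then read every partition back to probe: ~3(R + S) I/Os.
    return 3 * (outer_pages + inner_pages)

# With R = 1000 pages, S = 2000 pages, 102 buffer pages:
print(block_nested_loops_io(1000, 2000, 102))  # 21000 page I/Os
print(grace_hash_io(1000, 2000))               # 9000 page I/Os
```

Simple models like this are exactly the "common wisdom" the paper stress-tests: once seek costs and buffer allocation are modeled carefully, the rankings they suggest can change.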
Coverage Criteria for GUI Testing
 In Proceedings of the 8th European Software Engineering Conference (ESEC) and 9th ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE-9)
, 2001
Abstract
Cited by 38 (10 self)
The widespread recognition of the usefulness of graphical user interfaces (GUIs) has established their importance as critical components of today's software. GUIs have characteristics different from traditional software, and conventional testing techniques do not apply directly to GUI software. This paper's focus is on coverage criteria for GUIs, an important tool in testing. We present new coverage criteria that may be employed to help determine whether a GUI has been adequately tested. These coverage criteria use events and event sequences to specify a measure of test adequacy. Since the total number of permutations of event sequences in any nontrivial GUI is extremely large, the GUI's hierarchical structure is exploited to identify the important event sequences to be tested. The GUI's hierarchy is decomposed into GUI components each of which is used as a basic unit of testing. A new representation of a GUI component, called an event-flow graph, identifies the interaction of events within a component and intra-component criteria are used to evaluate the adequacy of tests on these events. The hierarchical relationship among components is represented by an integration tree and inter-component coverage criteria are used to evaluate the adequacy of test sequences that cross components. Algorithms are described to construct event-flow graphs and an integration tree for a given GUI, and to evaluate the coverage of a given test suite with respect to the new coverage criteria. A case study illustrates an important correlation between event-based coverage of a GUI and statement coverage of the software's underlying code. Partially supported by the Andrew Mellon Predoctoral Fellowship. Effective Aug 1, 2001: Department of Computer Science, University of Maryland. atif@cs.um...
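The idea of event-sequence coverage over an event-flow graph can be sketched concretely: enumerate the length-k event sequences the graph permits, then measure what fraction a test suite exercises. This is a toy model of the intra-component criteria only (the event names and graph below are invented for illustration; the paper's criteria and algorithms are richer).

```python
def event_sequences(flow, length):
    # Enumerate all length-k event sequences permitted by an event-flow
    # graph, given as a dict: event -> list of events that may follow it.
    seqs = [[e] for e in flow]
    for _ in range(length - 1):
        seqs = [s + [nxt] for s in seqs for nxt in flow[s[-1]]]
    return {tuple(s) for s in seqs}

def sequence_coverage(flow, length, tests):
    # Fraction of required length-k sequences covered by the test suite,
    # where each test is a sequence of executed events.
    required = event_sequences(flow, length)
    covered = set()
    for t in tests:
        for i in range(len(t) - length + 1):
            window = tuple(t[i:i + length])
            if window in required:
                covered.add(window)
    return len(covered) / len(required)

# Hypothetical component: "open" may be followed by "edit" or "close", etc.
flow = {"open": ["edit", "close"], "edit": ["close"], "close": []}
tests = [("open", "edit", "close")]
print(sequence_coverage(flow, 2, tests))  # covers 2 of 3 required pairs
```

Because the number of required sequences grows combinatorially with k, restricting enumeration to one component at a time, as the paper's hierarchical decomposition does, is what keeps the criteria tractable.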