Results 1–10 of 52
An Extended Set of Fortran Basic Linear Algebra Subprograms
 ACM TRANSACTIONS ON MATHEMATICAL SOFTWARE
, 1986
Abstract

Cited by 523 (68 self)
This paper describes an extension to the set of Basic Linear Algebra Subprograms. The extensions are targeted at matrix-vector operations, which should provide for efficient and portable implementations of algorithms for high-performance computers.
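The matrix-vector kernels this extension standardizes (e.g. GEMV, y ← αAx + βy) can be sketched as follows. This is an illustrative NumPy transcription of what the subprogram computes, assuming NumPy is available; it is not the Fortran reference implementation.

```python
import numpy as np

# Level-2 BLAS-style operation y <- alpha*A@x + beta*y (the GEMV kernel),
# written out in NumPy purely to illustrate the matrix-vector operation.
def gemv(alpha, A, x, beta, y):
    return alpha * (A @ x) + beta * y

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([1.0, 1.0])
y = np.array([0.5, 0.5])

result = gemv(2.0, A, x, 1.0, y)   # 2*A@x + y
```

An optimized library would perform this in a single pass over A; the point of standardizing the interface is that such tuned implementations stay interchangeable across machines.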
Affine Arithmetic and its Applications to Computer Graphics
, 1993
Abstract

Cited by 79 (6 self)
We describe a new method for numeric computations, which we call affine arithmetic (AA). This model is similar to standard interval arithmetic, to the extent that it automatically keeps track of rounding and truncation errors for each computed value. However, by taking into account correlations between operands and subformulas, AA is able to provide much tighter bounds for the computed quantities, with errors that are approximately quadratic in the uncertainty of the input variables. We also describe two applications of AA to computer graphics problems, where this feature is particularly valuable: namely, ray tracing and the construction of octrees for implicit surfaces.
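A minimal sketch of an affine form x0 + Σ xi·εi with εi ∈ [−1, 1], supporting only addition and negation (the paper's AA also handles multiplication and nonlinear operations). Shared noise symbols εi are what encode the correlations the abstract mentions: x − x cancels to exactly zero, where plain interval arithmetic would return a wide interval.

```python
# Illustrative affine-arithmetic sketch, not the paper's implementation.
# A quantity is center + sum_i coeff_i * eps_i, with each eps_i in [-1, 1];
# two quantities sharing a noise symbol eps_i are correlated.
class Affine:
    def __init__(self, center, terms=None):
        self.center = center
        self.terms = dict(terms or {})   # noise-symbol index -> coefficient

    def __add__(self, other):
        t = dict(self.terms)
        for k, v in other.terms.items():
            t[k] = t.get(k, 0.0) + v     # shared symbols combine (and may cancel)
        return Affine(self.center + other.center, t)

    def __neg__(self):
        return Affine(-self.center, {k: -v for k, v in self.terms.items()})

    def __sub__(self, other):
        return self + (-other)

    def interval(self):
        rad = sum(abs(v) for v in self.terms.values())
        return (self.center - rad, self.center + rad)

x = Affine(10.0, {1: 2.0})   # x in [8, 12], uncertainty tied to noise symbol 1
xlo, xhi = x.interval()
lo, hi = (x - x).interval()  # AA: exactly [0, 0]; intervals would give [-4, 4]
```

This cancellation of correlated terms is what yields the approximately quadratic error behavior claimed in the abstract.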
Homotopy hyperbolic 3-manifolds are hyperbolic
 Ann. of Math
, 2003
Abstract

Cited by 76 (5 self)
This paper introduces a rigorous computer-assisted procedure for analyzing hyperbolic 3-manifolds. This procedure is used to complete the proof of several long-standing rigidity conjectures in 3-manifold theory as well as to
Random Fibonacci sequences and the number 1.13198824...
 MATHEMATICS OF COMPUTATION
, 1999
Abstract

Cited by 36 (2 self)
For the familiar Fibonacci sequence (defined by f1 = f2 = 1, and fn = fn−1 + fn−2 for n > 2), fn increases exponentially with n at a rate given by the golden ratio (1 + √5)/2 = 1.61803398.... But for a simple modification with both additions and subtractions — the random Fibonacci sequences defined by t1 = t2 = 1, and for n > 2, tn = ±tn−1 ± tn−2, where each ± sign is independent and either + or − with probability 1/2 — it is not even obvious whether |tn| should increase with n. Our main result is that |tn|^(1/n) → 1.13198824... as n → ∞ with probability 1.
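The recurrence is straightforward to simulate. The sketch below (an illustrative Monte Carlo estimate with arbitrarily chosen run length and seed, tracking magnitude in log space to avoid overflow) approximates the growth rate |tn|^(1/n), which should approach 1.13198824....

```python
import math
import random

def random_fib_growth(n, seed=0):
    # t1 = t2 = 1; t_n = +/- t_{n-1} +/- t_{n-2} with independent fair signs.
    rng = random.Random(seed)
    a, b = 1.0, 1.0                      # t_{n-2}, t_{n-1}
    log_scale = 0.0                      # magnitude carried in log space
    for _ in range(n - 2):
        a, b = b, rng.choice((1.0, -1.0)) * b + rng.choice((1.0, -1.0)) * a
        m = max(abs(a), abs(b), 1e-300)  # rescale so the pair stays bounded
        a, b, log_scale = a / m, b / m, log_scale + math.log(m)
    return math.exp((log_scale + math.log(max(abs(b), 1e-300))) / n)

rate = random_fib_growth(200_000)        # slowly approaches 1.13198824...
```

Convergence is slow (the fluctuations of log|tn|/n shrink like 1/√n), so a single run only pins down a couple of digits.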
The generalized triangular decomposition
 Mathematics of Computation
, 2006
Abstract

Cited by 32 (4 self)
Abstract. Given a complex matrix H, we consider the decomposition H = QRP∗, where R is upper triangular and Q and P have orthonormal columns. Special instances of this decomposition include (a) the singular value decomposition (SVD), where R is a diagonal matrix containing the singular values on the diagonal, (b) the Schur decomposition, where R is an upper triangular matrix with the eigenvalues of H on the diagonal, and (c) the geometric mean decomposition (GMD) [The Geometric Mean Decomposition, Y. Jiang, W. W. Hager, and J. Li, December 7, 2003], where the diagonal of R is the geometric mean of the positive singular values. We show that any diagonal for R can be achieved that satisfies Weyl's multiplicative majorization conditions: ∏_{i=1}^{k} |ri| ≤ ∏_{i=1}^{k} σi for 1 ≤ k < K, and ∏_{i=1}^{K} |ri| = ∏_{i=1}^{K} σi, where K is the rank of H, σi is the ith largest singular value of H, and ri is the ith largest (in magnitude) diagonal element of R. We call the decomposition H = QRP∗, where the diagonal of R satisfies Weyl's conditions, the generalized triangular decomposition (GTD). The existence of the GTD is established using a result of Horn [On the eigenvalues of a matrix with prescribed singular values, Proc. Amer. Math. Soc., 5 (1954), pp. 4–7]. In addition, we present a direct (nonrecursive) algorithm that starts with the SVD and applies a series of permutations and Givens rotations to obtain the GTD. The GMD has application to signal processing and the design of multiple-input multiple-output (MIMO) systems; the lossless filters Q and P minimize the maximum error rate of the network. The GTD is more flexible than the GMD since the diagonal elements of R need not be identical. With this additional freedom, the performance of a communication channel can be optimized while taking into account differences in priority or differences in quality-of-service requirements for subchannels.
Another application of the GTD is to inverse eigenvalue problems where the goal is to construct matrices with prescribed eigenvalues and singular values. Key words. Generalized triangular decomposition, geometric mean decomposition, matrix factorization, unitary factorization, singular value decomposition, Schur decomposition, MIMO systems, inverse eigenvalue problems
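Weyl's multiplicative majorization conditions are easy to check numerically for one special case the abstract names, the Schur decomposition, where the diagonal of R holds the eigenvalues of H. The sketch below (assuming NumPy, with a hypothetical random test matrix) verifies the k partial-product inequalities and the full-rank product equality.

```python
import numpy as np

# Check Weyl's conditions for the Schur case: sorted eigenvalue magnitudes
# |r_i| of H must be multiplicatively majorized by the singular values sigma_i.
rng = np.random.default_rng(0)
H = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

lam = np.sort(np.abs(np.linalg.eigvals(H)))[::-1]   # |r_i|, descending
sig = np.linalg.svd(H, compute_uv=False)            # sigma_i, descending

# prod |r_i| <= prod sigma_i for each k < K (tiny slack for roundoff) ...
partial_ok = all(np.prod(lam[:k]) <= np.prod(sig[:k]) * (1 + 1e-10)
                 for k in range(1, 5))
# ... and equality at k = K, since both products equal |det(H)|.
full_eq = bool(np.isclose(np.prod(lam), np.prod(sig)))
```

The GTD theorem says these conditions are not just necessary but sufficient: any diagonal satisfying them is achievable by some H = QRP∗.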
A distillation algorithm for floating-point summation
 SIAM J. Sci. Comput
, 1999
Abstract

Cited by 11 (0 self)
Abstract. The addition of two or more floating-point numbers is fundamental to numerical computations. This paper describes an efficient “distillation”-style algorithm which produces a precise sum by exploiting the natural accuracy of compensated cancellation. The algorithm is applicable to all sets of data but is particularly appropriate for ill-conditioned data, where standard methods fail due to the accumulation of rounding error and its subsequent exposure by cancellation. The method uses only standard floating-point arithmetic and does not rely on the radix used by the arithmetic model, the architecture of specific machines, or the use of accumulators.
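The flavor of distillation can be sketched with the classic error-free transformation TwoSum. This is a generic sketch, not the paper's specific algorithm: each sweep replaces adjacent pairs by (error, sum) without changing the exact total, so cancellation in ill-conditioned data is resolved before the final recombination.

```python
def two_sum(a, b):
    # Error-free transformation: s + e equals a + b exactly in IEEE arithmetic.
    s = a + b
    bv = s - a           # the part of b actually absorbed into s
    av = s - bv          # the part of a actually absorbed into s
    return s, (a - av) + (b - bv)

def distill_sum(xs, passes=3):
    xs = list(xs)
    for _ in range(passes):
        for i in range(len(xs) - 1):
            s, e = two_sum(xs[i], xs[i + 1])
            xs[i], xs[i + 1] = e, s      # push the running sum right, keep errors
    return sum(xs)                       # recombine the distilled parts

naive = sum([1e16, 1.0, -1e16])          # massive cancellation loses the 1.0
precise = distill_sum([1e16, 1.0, -1e16])
```

On this ill-conditioned triple the naive left-to-right sum returns 0.0, while the distilled sum recovers the exact value 1.0, because every two_sum step preserves the true total exactly.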
Automatic Linear Correction Of Rounding Errors
, 1999
Abstract

Cited by 11 (2 self)
A new automatic method to correct the first-order effect of floating-point rounding errors on the result of a numerical algorithm is presented. A correcting term and a confidence threshold are computed using automatic differentiation, computation of elementary rounding errors, and running error analysis. Algorithms for which the accuracy of the result is not affected by higher-order terms are identified. The correction is applied to the final result or to sensitive intermediate results. The properties and the efficiency of the method are illustrated with a sample numerical example.
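One ingredient named in the abstract, running error analysis, can be sketched for plain summation. This is an illustrative fragment only; the paper couples such bounds with automatic differentiation of the whole algorithm to produce its correcting term.

```python
import sys
from fractions import Fraction

u = sys.float_info.epsilon / 2           # unit roundoff of round-to-nearest doubles

def sum_with_running_bound(xs):
    # Each fl(s + x) commits a rounding error of at most u*|fl(s + x)|, so
    # accumulating |s| after every addition yields a first-order error bound u*mu.
    s, mu = 0.0, 0.0
    for x in xs:
        s = s + x
        mu = mu + abs(s)
    return s, u * mu

s, bound = sum_with_running_bound([0.1] * 10)
exact = 10 * Fraction(0.1)               # exact real sum of ten copies of fl(0.1)
within_bound = abs(Fraction(s) - exact) <= Fraction(bound)
```

The bound travels alongside the value at negligible cost, which is what makes it usable as a per-operation building block inside a larger automatic scheme.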
C.F.: A decimal floating-point specification
 Proceedings of the 15th IEEE Symposium on Computer Arithmetic
Abstract

Cited by 11 (1 self)
Even though decimal arithmetic is pervasive in financial and commercial transactions, computers are still implementing almost all arithmetic calculations using binary arithmetic. As chip real estate becomes cheaper, it is becoming likely that more computer manufacturers will provide processors with decimal arithmetic engines. Programming languages and databases are expanding the decimal data types available, while there has been little change in the base hardware. As a result, each language and application is defining a different arithmetic, and few have considered the efficiency of hardware implementations when setting requirements. In this paper, we propose a decimal format which meets the requirements of existing standards for decimal arithmetic and is efficient for hardware implementation. We propose this specification in the hope that designers will consider providing decimal arithmetic in future microprocessors and that future decimal software specifications will consider hardware efficiencies.
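The gap the paper addresses is visible at the language level today: binary floating point cannot represent 0.1 exactly, while software decimal types can. As a purely illustrative contrast, using Python's decimal module (one of the software data types the abstract alludes to, not the proposed hardware format):

```python
from decimal import Decimal

binary_total = 0.1 + 0.1 + 0.1                        # binary doubles
decimal_total = Decimal("0.1") * 3                    # software decimal arithmetic

exact_in_binary = (binary_total == 0.3)               # False: accumulated error
exact_in_decimal = (decimal_total == Decimal("0.3"))  # True: 0.1 is exact in decimal
```

Today such decimal types are emulated in software; the paper's proposal is a format that would let hardware perform this arithmetic directly.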
Lyapunov Exponents From Random Fibonacci Sequences To The Lorenz Equations
 Department of Computer Science, Cornell University
, 1998
Abstract

Cited by 11 (1 self)
this paper (Mathematical Reviews: 29 #648) with the words "This is a profound memoir." As I will show in Chapter 3, there are simple algorithms for bounding the Lyapunov exponents in this setting. The advanced state of the theory for random matrix products is a peculiar situation, because deterministic matrix products that govern sensitive dependence on initial conditions are barely understood; it is as if the strong law of large numbers were well understood without a satisfactory theory of convergence of infinite series. The elements of the theory of random matrix products are carefully explained in the beautiful monograph by Bougerol [16]. The basic result about Lyapunov exponents, lim
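The random Fibonacci sequence is the simplest instance of such a random matrix product: each step multiplies by [[0, 1], [±1, ±1]], and the top Lyapunov exponent λ satisfies e^λ = 1.13198824.... An illustrative Monte Carlo sketch (hypothetical run length and seed, renormalizing the vector each step and averaging the log growth):

```python
import math
import random

def lyapunov_estimate(n, seed=1):
    # Estimate the top Lyapunov exponent of products of random matrices
    # [[0, 1], [b, a]] with a, b = +/-1, acting on (t_{n-2}, t_{n-1}).
    rng = random.Random(seed)
    v = (1.0, 1.0)
    total = 0.0
    for _ in range(n):
        a = rng.choice((1.0, -1.0))
        b = rng.choice((1.0, -1.0))
        v = (v[1], b * v[0] + a * v[1])   # multiply by [[0, 1], [b, a]]
        norm = math.hypot(v[0], v[1])     # never zero: the matrix is invertible
        total += math.log(norm)           # accumulate log growth of the vector
        v = (v[0] / norm, v[1] / norm)    # renormalize to stay bounded
    return total / n

lam = lyapunov_estimate(100_000)
growth = math.exp(lam)                    # should approach 1.13198824...
```

By Oseledets' theorem the time-averaged log growth of the renormalized vector converges almost surely to λ, which is why this simple renormalize-and-average loop works at all.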