An Extended Set of Fortran Basic Linear Algebra Subprograms
ACM Transactions on Mathematical Software, 1986
Cited by 447 (69 self)
Abstract:
This paper describes an extension to the set of Basic Linear Algebra Subprograms. The extensions are targeted at matrix-vector operations, which should provide for efficient and portable implementations of algorithms for high-performance computers.
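The matrix-vector operations this extension standardizes have the shape y ← αAx + βy (the xGEMV family). A minimal Python sketch of those semantics, purely illustrative (function name and argument conventions are hypothetical here; real BLAS implementations are blocked, vectorized Fortran/assembly):

```python
def gemv(alpha, A, x, beta, y):
    """Reference model of the Level-2 operation y <- alpha*A*x + beta*y.

    A is a list of rows (m x n), x has length n, y has length m.
    This sketch only shows the interface semantics; tuned BLAS code
    blocks for cache and uses vector hardware.
    """
    m, n = len(A), len(x)
    out = []
    for i in range(m):
        acc = 0.0
        for j in range(n):          # inner product of row i with x
            acc += A[i][j] * x[j]
        out.append(alpha * acc + beta * y[i])
    return out
```

Fixing the interface at this level (rather than at dot products and axpys) is what lets implementers exploit data reuse across a whole matrix-vector product.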
Affine Arithmetic and its Applications to Computer Graphics
1993
Cited by 67 (6 self)
Abstract:
We describe a new method for numeric computations, which we call affine arithmetic (AA). This model is similar to standard interval arithmetic, to the extent that it automatically keeps track of rounding and truncation errors for each computed value. However, by taking into account correlations between operands and subformulas, AA is able to provide much tighter bounds for the computed quantities, with errors that are approximately quadratic in the uncertainty of the input variables. We also describe two applications of AA to computer graphics problems, where this feature is particularly valuable: namely, ray tracing and the construction of octrees for implicit surfaces.
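The correlation-tracking idea can be shown with a toy affine form (a sketch only; the paper's full AA model also attaches a fresh rounding-error noise term to every operation, omitted here, and the class below is hypothetical). Because correlated quantities share noise symbols, x − x cancels exactly, whereas interval arithmetic would return an interval of twice the original width:

```python
class Affine:
    """Toy affine form: c + sum(v_i * eps_i), with each eps_i in [-1, 1].

    Correlated operands share noise symbols (the dict keys), which is
    what gives AA its tighter bounds compared to interval arithmetic.
    """
    _next = 0                                  # fresh noise-symbol counter

    def __init__(self, center, noise=None):
        self.c = center
        self.noise = dict(noise or {})

    @classmethod
    def from_interval(cls, lo, hi):
        k = cls._next
        cls._next += 1
        return cls((lo + hi) / 2, {k: (hi - lo) / 2})

    def __sub__(self, other):
        noise = dict(self.noise)
        for k, v in other.noise.items():       # shared symbols cancel
            noise[k] = noise.get(k, 0.0) - v
        return Affine(self.c - other.c, noise)

    def range(self):
        r = sum(abs(v) for v in self.noise.values())
        return (self.c - r, self.c + r)
```

With `x = Affine.from_interval(1, 3)`, the difference `x - x` has range `(0, 0)`, while two independently created forms over the same interval subtract to the full `(-2, 2)`, exactly as interval arithmetic would.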
Homotopy hyperbolic 3-manifolds are hyperbolic
Ann. of Math., 2003
Cited by 55 (4 self)
Abstract:
This paper introduces a rigorous computer-assisted procedure for analyzing hyperbolic 3-manifolds. This procedure is used to complete the proof of several long-standing rigidity conjectures in 3-manifold theory as well as to ...
Random Fibonacci sequences and the number 1.13198824...
Mathematics of Computation, 1999
Cited by 24 (2 self)
Abstract:
For the familiar Fibonacci sequence (defined by f_1 = f_2 = 1, and f_n = f_{n-1} + f_{n-2} for n > 2), f_n increases exponentially with n at a rate given by the golden ratio (1 + √5)/2 = 1.61803398.... But for a simple modification with both additions and subtractions (the random Fibonacci sequences defined by t_1 = t_2 = 1 and, for n > 2, t_n = ±t_{n-1} ± t_{n-2}, where each ± sign is independently + or − with probability 1/2), it is not even obvious whether |t_n| should increase with n. Our main result is that |t_n|^(1/n) → 1.13198824... as n → ∞ with probability 1.
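The limit can be observed empirically with a short simulation (a Monte Carlo sketch with hypothetical function and parameter names; the paper itself establishes the constant by rigorous analysis, not by sampling). Exact integer arithmetic sidesteps overflow, and the n-th root is taken in log space:

```python
import math
import random

def random_fib_growth(n=20000, seed=12345):
    """Single-run estimate of lim |t_n|^(1/n) for the random recurrence
    t_n = +-t_{n-1} +- t_{n-2}, t_1 = t_2 = 1.

    Python's arbitrary-precision ints keep the recurrence exact even
    though |t_n| grows to hundreds of digits.  The estimate is typically
    close to 1.13198824... for large n (convergence is slow)."""
    rng = random.Random(seed)
    a, b = 1, 1                                   # t_1, t_2
    for _ in range(n - 2):
        a, b = b, rng.choice((1, -1)) * b + rng.choice((1, -1)) * a
    return math.exp(math.log(abs(b) or 1) / n)    # |t_n|^(1/n)
```

A single trajectory suffices because the growth rate holds almost surely; averaging over runs only reduces the (already O(1/√n)) fluctuation of the exponent.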
The generalized triangular decomposition
Mathematics of Computation, 2006
Cited by 16 (4 self)
Abstract:
Given a complex matrix H, we consider the decomposition H = QRP*, where R is upper triangular and Q and P have orthonormal columns. Special instances of this decomposition include (a) the singular value decomposition (SVD), where R is a diagonal matrix containing the singular values on the diagonal, (b) the Schur decomposition, where R is an upper triangular matrix with the eigenvalues of H on the diagonal, and (c) the geometric mean decomposition (GMD) [The Geometric Mean Decomposition, Y. Jiang, W. W. Hager, and J. Li, December 7, 2003], where the diagonal of R is the geometric mean of the positive singular values. We show that any diagonal for R can be achieved that satisfies Weyl's multiplicative majorization conditions: ∏_{i=1}^{k} |r_i| ≤ ∏_{i=1}^{k} σ_i for 1 ≤ k < K, and ∏_{i=1}^{K} |r_i| = ∏_{i=1}^{K} σ_i, where K is the rank of H, σ_i is the ith largest singular value of H, and r_i is the ith largest (in magnitude) diagonal element of R. We call the decomposition H = QRP*, where the diagonal of R satisfies Weyl's conditions, the generalized triangular decomposition (GTD). The existence of the GTD is established using a result of Horn [On the eigenvalues of a matrix with prescribed singular values, Proc. Amer. Math. Soc., 5 (1954), pp. 4–7]. In addition, we present a direct (nonrecursive) algorithm that starts with the SVD and applies a series of permutations and Givens rotations to obtain the GTD. The GMD has applications in signal processing and the design of multiple-input multiple-output (MIMO) systems; the lossless filters Q and P minimize the maximum error rate of the network. The GTD is more flexible than the GMD, since the diagonal elements of R need not be identical. With this additional freedom, the performance of a communication channel can be optimized while taking into account differences in priority or quality-of-service requirements among subchannels.
Another application of the GTD is to inverse eigenvalue problems, where the goal is to construct matrices with prescribed eigenvalues and singular values. Key words: generalized triangular decomposition, geometric mean decomposition, matrix factorization, unitary factorization, singular value decomposition, Schur decomposition, MIMO systems, inverse eigenvalue problems.
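Weyl's conditions are easy to check numerically in the Schur instance, where the diagonal of R holds the eigenvalues. A self-contained sketch for a real 2x2 triangular matrix (function name is illustrative; eigenvalues and singular values come from quadratic formulas, so no linear algebra library is needed): the leading eigenvalue magnitude is bounded by σ_1, while the full products agree because both equal |det H|.

```python
import math

def weyl_check_2x2(H):
    """For a real 2x2 matrix H with real eigenvalues (e.g. triangular),
    return the partial products of |eigenvalues| and of singular values:
        (|l1|, |l1*l2|), (s1, s1*s2)
    Weyl's conditions say |l1| <= s1 and |l1*l2| == s1*s2 (= |det H|)."""
    (a, b), (c, d) = H
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    l1, l2 = sorted(((tr + disc) / 2, (tr - disc) / 2),
                    key=abs, reverse=True)
    # Gram matrix H^T H: its eigenvalues are the squared singular values.
    g11, g12, g22 = a * a + c * c, a * b + c * d, b * b + d * d
    gt, gd = g11 + g22, g11 * g22 - g12 * g12
    gdisc = math.sqrt(max(gt * gt - 4 * gd, 0.0))
    s1 = math.sqrt((gt + gdisc) / 2)
    s2 = math.sqrt(max((gt - gdisc) / 2, 0.0))
    return (abs(l1), abs(l1 * l2)), (s1, s1 * s2)
```

For H = [[1, 2], [0, 3]] the eigenvalue magnitudes are (3, 1) while the singular values are roughly (3.65, 0.82): the leading product is strictly majorized, and both full products equal |det H| = 3.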
The sensitivity of computational control problems
IEEE Control Syst. Mag., 2004
Cited by 13 (3 self)
Abstract:
What factors contribute to the accurate and efficient numerical solution of problems in control systems analysis and design? Although numerical methods have been used for many centuries to solve problems in science and engineering, the importance of computation grew tremendously with the advent of digital computers. It became immediately clear that many classical analytical and numerical methods and algorithms, although well suited for hand computation, could not be implemented directly as computer codes. What was the reason? When doing computations by hand, a person can choose the accuracy of each elementary calculation and then estimate, based on intuition and experience, its influence on the final result. In contrast, when computations are done automatically, intuitive error control is usually not possible, and the effect of errors on intermediate calculations must be estimated in a more systematic way. Due to this observation, starting ...
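A classic illustration of the sensitivity issue the abstract alludes to (a sketch with hypothetical numbers, not an example from the paper): eigenvalues, which pervade control analysis, can react to a perturbation far more strongly than its size suggests. Here a change of 10⁻⁶ in one matrix entry moves both eigenvalues by 1:

```python
import math

def eigvals_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the characteristic
    polynomial (real-eigenvalue case)."""
    tr, det = a + d, a * d - b * c
    r = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return (tr - r) / 2, (tr + r) / 2

# Both eigenvalues of [[1, 1e6], [0, 1]] are exactly 1, but perturbing
# the zero entry by only 1e-6 shifts them by sqrt(1e6 * 1e-6) = 1.
unperturbed = eigvals_2x2(1.0, 1e6, 0.0, 1.0)   # (1.0, 1.0)
perturbed = eigvals_2x2(1.0, 1e6, 1e-6, 1.0)    # approximately (0.0, 2.0)
```

The amplification factor √(10⁶) comes from the large off-diagonal entry; this is exactly the kind of conditioning information that systematic error analysis, rather than intuition, must supply.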
A distillation algorithm for floating-point summation
SIAM J. Sci. Comput., 1999
Cited by 10 (0 self)
Abstract:
The addition of two or more floating-point numbers is fundamental to numerical computations. This paper describes an efficient "distillation"-style algorithm which produces a precise sum by exploiting the natural accuracy of compensated cancellation. The algorithm is applicable to all sets of data but is particularly appropriate for ill-conditioned data, where standard methods fail due to the accumulation of rounding error and its subsequent exposure by cancellation. The method uses only standard floating-point arithmetic and does not rely on the radix of the arithmetic model, the architecture of specific machines, or the use of accumulators.
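The distillation idea can be sketched with the standard TwoSum error-free transformation (a generic illustration of the technique, not the paper's specific algorithm; names and the termination policy are assumptions). Repeated sweeps replace adjacent pairs by their rounded sum and exact error, pushing the dominant partial sum to the end of the array until the leftover errors vanish:

```python
def two_sum(a, b):
    """Error-free transformation (Knuth's branch-free TwoSum):
    returns (s, e) with s = fl(a + b) and a + b = s + e exactly."""
    s = a + b
    bb = s - a
    return s, (a - (s - bb)) + (b - bb)

def distill(xs, max_passes=100):
    """Distillation-style summation sketch: sweep TwoSum across the
    array, leaving each rounding error behind and carrying the rounded
    partial sum right, until every element but the last is zero."""
    xs = list(xs)
    for _ in range(max_passes):
        for i in range(len(xs) - 1):
            s, e = two_sum(xs[i], xs[i + 1])
            xs[i], xs[i + 1] = e, s
        if all(v == 0.0 for v in xs[:-1]):
            break                  # fully distilled: xs[-1] is exact
    return xs[-1]
```

On an ill-conditioned input like [1e16, 1.0, -1e16], naive left-to-right summation returns 0.0 because the 1.0 is absorbed and then exposed by cancellation, while distillation recovers the exact sum 1.0.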
Lyapunov Exponents From Random Fibonacci Sequences To The Lorenz Equations
Department of Computer Science, Cornell University, 1998
Cited by 10 (1 self)
Abstract:
... this paper (Mathematical Reviews: 29 #648) with the words "This is a profound memoir." As will be shown in Chapter 3, there are simple algorithms for bounding the Lyapunov exponents in this setting. The advanced state of the theory for random matrix products is a peculiar situation, because deterministic matrix products that govern sensitive dependence on initial conditions are barely understood; it is as if the strong law of large numbers were well understood without a satisfactory theory of convergence of infinite series. The elements of the theory of random matrix products are carefully explained in the beautiful monograph by Bougerol [16]. The basic result about Lyapunov exponents, lim ...
Automatic Linear Correction Of Rounding Errors
1999
Cited by 7 (2 self)
Abstract:
A new automatic method to correct the first-order effect of floating-point rounding errors on the result of a numerical algorithm is presented. A correcting term and a confidence threshold are computed using automatic differentiation, computation of elementary rounding errors, and running error analysis. Algorithms for which the accuracy of the result is not affected by higher-order terms are identified. The correction is applied to the final result or to sensitive intermediate results. The properties and the efficiency of the method are illustrated with a numerical example.
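The linear-correction idea can be illustrated in a degenerate special case (a sketch, not the paper's algorithm, which differentiates through arbitrary computations): for a plain summation, the derivative of the result with respect to every intermediate partial sum is 1, so the first-order correcting term reduces to the sum of the elementary rounding errors, each of which TwoSum yields exactly.

```python
def two_sum(a, b):
    """Error-free transformation: returns (s, e) with s = fl(a + b)
    and a + b = s + e exactly."""
    s = a + b
    bb = s - a
    return s, (a - (s - bb)) + (b - bb)

def corrected_sum(xs):
    """First-order corrected summation: run the naive loop, record each
    elementary rounding error, and add the correcting term at the end.
    Since d(result)/d(partial sum) == 1 for every addition, the linear
    correction is just the accumulated elementary error."""
    s, corr = 0.0, 0.0
    for x in xs:
        s, e = two_sum(s, x)
        corr += e                  # derivative weight 1 for each error
    return s + corr
```

For nonlinear algorithms the derivative weights differ from 1 and must come from automatic differentiation, which is where the paper's confidence threshold and running error analysis enter.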