TENSOR RANK AND THE ILL-POSEDNESS OF THE BEST LOW-RANK APPROXIMATION PROBLEM
Abstract

Cited by 75 (10 self)
There has been continued interest in seeking a theorem describing optimal low-rank approximations to tensors of order 3 or higher that parallels the Eckart–Young theorem for matrices. In this paper, we argue that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have best rank-r approximations. The phenomenon is much more widespread than one might suspect: examples of this failure can be constructed over a wide range of dimensions, orders and ranks, regardless of the choice of norm (or even Bregman divergence). Moreover, we show that in many instances these counterexamples have positive volume: they cannot be regarded as isolated phenomena. In one extreme case, we exhibit a tensor space in which no rank-3 tensor has an optimal rank-2 approximation. The notable exceptions to this misbehavior are rank-1 tensors and order-2 tensors (i.e. matrices). In a more positive spirit, we propose a natural way of overcoming the ill-posedness of the low-rank approximation problem, by using weak solutions when true solutions do not exist. For this to work, it is necessary to characterize the set of weak solutions, and we do this in the case of rank 2, order 3 (in arbitrary dimensions). In our work we emphasize the importance of closely studying concrete low-dimensional examples as a first step towards more general results. To this end, we present a detailed analysis of equivalence classes of 2 × 2 × 2 tensors, and we develop methods for extending results upwards to higher orders and dimensions. Finally, we link our work to existing studies of tensors from an algebraic geometric point of view. The rank of a tensor can in theory be given a semialgebraic description; in other words, it can be determined by a system of polynomial inequalities. We study some of these polynomials in cases of interest to us; in particular we make extensive use of the hyperdeterminant ∆ on R^{2×2×2}.
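The failure of the best rank-2 approximation can be reproduced numerically. The sketch below (Python with NumPy; all names are illustrative) uses the standard construction the abstract alludes to: a rank-3 tensor that is the limit of rank-2 tensors, so the distance from the rank-2 set to it is zero yet never attained.

```python
import numpy as np

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

def outer3(x, y, z):
    # rank-1 order-3 tensor x (x) y (x) z
    return np.einsum('i,j,k->ijk', x, y, z)

# T = a(x)a(x)b + a(x)b(x)a + b(x)a(x)a has rank 3 ...
T = outer3(a, a, b) + outer3(a, b, a) + outer3(b, a, a)

def rank2_approx(n):
    # ... yet these rank-2 tensors come arbitrarily close to it as n grows
    return n * outer3(a + b / n, a + b / n, a + b / n) - n * outer3(a, a, a)

for n in (10, 100, 1000):
    print(n, np.linalg.norm(T - rank2_approx(n)))   # error shrinks like 1/n
```

Expanding the product shows the mismatch is (1/n)(a⊗b⊗b + b⊗a⊗b + b⊗b⊗a) + (1/n²) b⊗b⊗b, which vanishes as n → ∞ even though each term of the sequence has rank 2.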
Complexity of Bézout's Theorem IV: Probability of Success, Extensions
 SIAM J. Numer. Anal
, 1996
Abstract

Cited by 60 (9 self)
We estimate the probability that a given number of projective Newton steps applied to a linear homotopy of a system of n homogeneous polynomial equations in n + 1 complex variables of fixed degrees will find all the roots of the system. We also extend the framework of our analysis to cover the classical implicit function theorem and revisit the condition number in this context. Further complexity theory is developed. 1. Introduction. 1A. Bézout's Theorem Revisited. Let f: C^{n+1} → C^n be a system of homogeneous polynomials f = (f_1, ..., f_n), deg f_i = d_i, i = 1, ..., n. The linear space of such f is denoted by H_(d) where d = (d_1, ..., d_n). Consider the
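A toy affine illustration of the linear-homotopy-plus-Newton idea (not the projective framework of the paper; function names and step counts are invented for the sketch): the target x² − 2 = 0 is deformed from a start system x² − 1 = 0 with known roots, each path tracked by a few Newton corrector steps per increment of t. The Bézout count, here the degree 2, bounds the number of paths.

```python
def newton_step(x, t):
    # H(x, t) = (1 - t)*(x**2 - 1) + t*(x**2 - 2) = x**2 - (1 + t)
    return x - (x * x - (1.0 + t)) / (2.0 * x)

def track(x0, steps=20):
    # follow one root of H(., t) = 0 from t = 0 (start) to t = 1 (target)
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(3):          # a few Newton corrector iterations
            x = newton_step(x, t)
    return x

# two paths, one per root of the degree-2 start system
roots = [track(1.0), track(-1.0)]
```

The paper's question is precisely how many such predictor/corrector steps suffice, with what probability, when the homotopy is chosen at random.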
Algorithms for Intersecting Parametric and Algebraic Curves I: Simple Intersections
 ACM Transactions on Graphics
, 1995
Abstract

Cited by 59 (18 self)
The problem of computing the intersection of parametric and algebraic curves arises in many applications of computer graphics and geometric and solid modeling. Previous algorithms are based on techniques from elimination theory or subdivision and iteration. The former, however, is restricted to low degree curves. This is mainly due to issues of efficiency and numerical stability. In this paper we use elimination theory and express the resultant of the equations of intersection as a matrix determinant. The matrix itself, rather than its symbolic determinant (a polynomial), is used as the representation. The problem of intersection is reduced to computing the eigenvalues and eigenvectors of a numeric matrix. The main advantage of this approach lies in its efficiency and robustness. Moreover, the numerical accuracy of these operations is well understood. For almost all cases we are able to compute accurate answers in 64-bit IEEE floating point arithmetic. Keywords: Intersection, curves, a...
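The reduction the authors describe, replacing a symbolic determinant by a numeric matrix whose eigenvalues carry the solutions, can be shown in miniature with a companion matrix (a hypothetical stand-in for the paper's resultant matrices): the roots of a monic polynomial are exactly the eigenvalues of its companion matrix.

```python
import numpy as np

# p(x) = (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6
a = np.array([-6.0, 11.0, -6.0])      # coefficients a0, a1, a2 of monic p

n = len(a)
C = np.zeros((n, n))
C[1:, :-1] = np.eye(n - 1)            # ones on the subdiagonal
C[:, -1] = -a                         # last column encodes the polynomial

# eigenvalue extraction replaces symbolic root finding
roots = np.sort(np.linalg.eigvals(C).real)
```

As in the paper, no polynomial is ever expanded: the matrix is the representation, and robust eigenvalue software does the rest.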
Backward Error and Condition of Structured Linear Systems
 SIMAX
, 1992
"... Reports available from: ..."
A New Pivoting Strategy For Gaussian Elimination
, 1996
Abstract

Cited by 37 (2 self)
This paper discusses a method for determining a good pivoting sequence for Gaussian elimination, based on an algorithm for solving assignment problems. The worst case complexity is O(n^3), while in practice O(n^{2.25}) operations are sufficient. 1991 MSC Classification: 65F05 (secondary 90B80) Keywords: Gaussian elimination, scaled partial pivoting, I-matrix, dominant transversal, assignment problem, bipartite weighted matching. 1 Introduction For a system of linear equations Ax = b with a square nonsingular coefficient matrix A, the most important solution algorithm is the systematic elimination method of Gauss. The basic idea of Gaussian elimination is the factorization of A as the product LU of a lower triangular matrix L with ones on its diagonal and an upper triangular matrix U, the diagonal entries of which are called the pivot elements. In general, the numerical stability of triangularization is guaranteed only if the matrix A is symmetric positive definite, diagonal...
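The core idea, choosing a pivot order by solving an assignment problem so that a dominant transversal lands on the diagonal, can be sketched as follows. This toy uses brute-force enumeration of permutations in place of the O(n³) assignment algorithm the paper relies on; the matrix is an invented example:

```python
import itertools
import numpy as np

A = np.array([[1e-8, 1.0,  2.0],
              [1.0,  1e-8, 1.0],
              [2.0,  1.0,  1e-8]])

# dominant transversal: a permutation p maximizing prod_i |A[i, p[i]]|
n = A.shape[0]
best = max(itertools.permutations(range(n)),
           key=lambda p: np.prod([abs(A[i, p[i]]) for i in range(n)]))

# permuting the columns moves the chosen transversal onto the diagonal,
# so elimination meets large pivots instead of the 1e-8 entries
B = A[:, list(best)]
```

Maximizing the product of absolute values is equivalent to minimizing the sum of −log|a_ij|, which is exactly a bipartite weighted matching problem.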
Symbolic-numeric sparse interpolation of multivariate polynomials
In Proc. Ninth Rhine Workshop on Computer Algebra (RWCA'04), University of Nijmegen, the Netherlands (2004)
, 2006
Abstract

Cited by 34 (6 self)
We consider the problem of sparse interpolation of an approximate multivariate black-box polynomial in floating-point arithmetic. That is, both the inputs and outputs of the black-box polynomial have some error, and all numbers are represented in standard, fixed-precision, floating-point arithmetic. By interpolating the black box evaluated at random primitive roots of unity, we give efficient and numerically robust solutions. We note the similarity between the exact Ben-Or/Tiwari sparse interpolation algorithm and the classical Prony's method for interpolating a sum of exponential functions, and exploit the generalized eigenvalue reformulation of Prony's method. We analyze the numerical stability of our algorithms and the sensitivity of the solutions, as well as the expected conditioning achieved through randomization. Finally, we demonstrate the effectiveness of our techniques in practice through numerical experiments and applications.
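The Prony/generalized-eigenvalue mechanism the abstract refers to can be demonstrated on a tiny exact instance (the two-term sum and sample points are invented for illustration): from 2t evaluations of a t-term exponential sum, Hankel matrices built from the samples yield the term bases as generalized eigenvalues, and the coefficients follow from a Vandermonde solve.

```python
import numpy as np

# hidden sparse function: f(k) = 3*2^k + 1*5^k, a t = 2 term sum
samples = np.array([3 * 2**k + 5**k for k in range(4)], dtype=float)

# Hankel matrices built from the 2t samples
H0 = np.array([[samples[0], samples[1]],
               [samples[1], samples[2]]])
H1 = np.array([[samples[1], samples[2]],
               [samples[2], samples[3]]])

# generalized eigenvalues of the pencil (H1, H0) are the term bases
bases = np.sort(np.linalg.eigvals(np.linalg.solve(H0, H1)).real)

# coefficients from a transposed Vandermonde solve against the first samples
V = np.vander(bases, 2, increasing=True).T
coeffs = np.linalg.solve(V, samples[:2])
```

The paper's contribution is the analysis of this pipeline when the samples are noisy floating-point values and the evaluation points are randomized roots of unity.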
Solving systems of linear equations on the CELL processor using Cholesky factorization
 Trans. Parallel Distrib. Syst
, 2007
Abstract

Cited by 31 (26 self)
The CELL processor introduces pioneering solutions in processor architecture. At the same time it presents new challenges for the development of numerical algorithms. One is effective exploitation of the differential between the speed of single and double precision arithmetic; the other is efficient parallelization between the short vector SIMD cores. In this work, the first challenge is addressed by utilizing a mixed-precision algorithm for the solution of a dense symmetric positive definite system of linear equations, which delivers double precision accuracy, while performing the bulk of the work in single precision. The second challenge is approached by introducing much finer granularity of parallelization than has been used for other architectures and using a lightweight decentralized synchronization. The implementation of the computationally intensive sections gets within 90 percent of peak floating point performance, while the implementation of the memory intensive sections reaches within 90 percent of peak memory bandwidth. On a single CELL processor, the algorithm achieves over 170 Gflop/s when solving a symmetric positive definite system of linear equations in single precision and over 150 Gflop/s when delivering the result in double precision accuracy.
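The mixed-precision scheme, factor and solve in cheap single precision then recover full accuracy through iterative refinement with residuals computed in double, can be sketched in NumPy (generic solves stand in for the tuned CELL triangular kernels; the test matrix is an invented well-conditioned SPD example):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)            # well-conditioned SPD test matrix
b = rng.standard_normal(n)

# factor once in "fast" single precision
L = np.linalg.cholesky(A.astype(np.float32))

def solve_single(r):
    # forward/back substitution in float32 (generic solves stand in here)
    y = np.linalg.solve(L, r.astype(np.float32))
    return np.linalg.solve(L.T, y).astype(np.float64)

x = solve_single(b)
for _ in range(5):                     # iterative refinement
    r = b - A @ x                      # residual in double precision
    x = x + solve_single(r)            # correction via the cheap factorization
```

The O(n³) factorization runs entirely in the fast precision; only O(n²) residual and update work is done in double, which is why the approach pays off on hardware with a large single/double speed gap.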
Self-Validated Numerical Methods and Applications
, 1997
Abstract

Cited by 30 (0 self)
... numerical methods. We apologize to the reader for the length and verbosity of these notes but, like Pascal, we didn't have the time to make them shorter. ("Je n'ai fait celle-ci plus longue que parce que je n'ai pas eu le loisir de la faire plus courte." Blaise Pascal, Lettres Provinciales, XVI (1657).) Acknowledgements We thank the Organizing Committee of the 21st Brazilian Mathematics Colloquium for the opportunity to present this course. We wish to thank João Comba, who helped implement a prototype affine arithmetic package in Modula-3, and Marcus Vinicius Andrade, who helped debug the C version and wrote an implicit surface ray tracer based on it. Ronald van Iwaarden contributed an independent implementation of AA, and investigated its performance on branch-and-bound global optimization algorithms. Douglas Priest and Helmut Jarausch provided code and advice for rounding mode control. W
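A minimal version of the self-validated idea is interval arithmetic, the simpler relative of the affine arithmetic (AA) developed in these notes. The sketch below omits outward rounding, so it is not truly validated; the class and names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # lower end pairs with the other operand's upper end, and vice versa
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # the product range is spanned by the four endpoint products
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

x = Interval(1.0, 2.0)
y = x * x - x          # enclosure of x^2 - x over [1, 2]
```

Here y comes out as [-1, 3], a guaranteed enclosure of the true range [0, 2] but a loose one, because the two occurrences of x are treated as independent (the "dependency problem"); affine arithmetic tracks such correlations to obtain tighter enclosures.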
Science, Computational Science and Computer Science: At a Crossroads
 Comm. ACM
, 1993
Abstract

Cited by 25 (2 self)
We describe computational science as an interdisciplinary approach to doing science on computers. Our purpose is to introduce computational science as a legitimate interest of computer scientists. We present a foundation for computational science based on the need to incorporate computation at the scientific level; i.e., computational aspects must be considered when a model is formulated. We next present some obstacles to computer scientists' participation in computational science, including a cultural bias in computer science that inhibits participation. Finally, we look at some areas of conventional computer science and indicate areas of mutual interest between computational science and computer science. Keywords: education, computational science. 1 What is Computational Science? In December 1991, the U.S. Congress passed the High Performance Computing and Communications Act, commonly known as the HPCC. This act focuses on several aspects of computing technology, but two have...