Results 1–10 of 169
TENSOR RANK AND THE ILL-POSEDNESS OF THE BEST LOW-RANK APPROXIMATION PROBLEM
"... There has been continued interest in seeking a theorem describing optimal lowrank approximations to tensors of order 3 or higher, that parallels the Eckart–Young theorem for matrices. In this paper, we argue that the naive approach to this problem is doomed to failure because, unlike matrices, te ..."
Abstract

Cited by 181 (13 self)
 Add to MetaCart
There has been continued interest in seeking a theorem describing optimal low-rank approximations to tensors of order 3 or higher that parallels the Eckart–Young theorem for matrices. In this paper, we argue that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have best rank-r approximations. The phenomenon is much more widespread than one might suspect: examples of this failure can be constructed over a wide range of dimensions, orders and ranks, regardless of the choice of norm (or even Brègman divergence). Moreover, we show that in many instances these counterexamples have positive volume: they cannot be regarded as isolated phenomena. In one extreme case, we exhibit a tensor space in which no rank-3 tensor has an optimal rank-2 approximation. The notable exceptions to this misbehavior are rank-1 tensors and order-2 tensors (i.e. matrices). In a more positive spirit, we propose a natural way of overcoming the ill-posedness of the low-rank approximation problem, by using weak solutions when true solutions do not exist. For this to work, it is necessary to characterize the set of weak solutions, and we do this in the case of rank 2, order 3 (in arbitrary dimensions). In our work we emphasize the importance of closely studying concrete low-dimensional examples as a first step towards more general results. To this end, we present a detailed analysis of equivalence classes of 2 × 2 × 2 tensors, and we develop methods for extending results upwards to higher orders and dimensions. Finally, we link our work to existing studies of tensors from an algebraic-geometric point of view. The rank of a tensor can in theory be given a semialgebraic description; in other words, it can be determined by a system of polynomial inequalities. We study some of these polynomials in cases of interest to us; in particular we make extensive use of the hyperdeterminant Δ on ℝ^{2×2×2}.
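The failure the abstract describes can be seen concretely in a small NumPy sketch (an illustration of the phenomenon, not code from the paper): a rank-3 tensor that is approached arbitrarily well by rank-2 tensors, so the infimum distance 0 is never attained by any rank-2 tensor.

```python
import numpy as np

def rank1(u, v, w):
    # Order-3 rank-1 tensor u ⊗ v ⊗ w.
    return np.einsum('i,j,k->ijk', u, v, w)

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# T has rank 3, yet it is a limit of rank-2 tensors ("border rank" 2),
# so the best rank-2 approximation problem for T has no solution.
T = rank1(e1, e1, e2) + rank1(e1, e2, e1) + rank1(e2, e1, e1)

for n in (1, 10, 100, 1000):
    # A rank-2 tensor whose distance to T shrinks like O(1/n).
    A = n * rank1(e1 + e2 / n, e1 + e2 / n, e1 + e2 / n) \
        - n * rank1(e1, e1, e1)
    print(n, np.linalg.norm(A - T))   # errors decrease toward 0
```

Expanding the outer products shows A − T = (1/n)(e2⊗e2⊗e1 + e2⊗e1⊗e2 + e1⊗e2⊗e2) + (1/n²) e2⊗e2⊗e2, which vanishes as n grows even though T itself has rank 3.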
Algorithms for Intersecting Parametric and Algebraic Curves I: Simple Intersections
 ACM Transactions on Graphics
, 1995
"... : The problem of computing the intersection of parametric and algebraic curves arises in many applications of computer graphics and geometric and solid modeling. Previous algorithms are based on techniques from elimination theory or subdivision and iteration. The former is however, restricted to low ..."
Abstract

Cited by 69 (19 self)
 Add to MetaCart
The problem of computing the intersection of parametric and algebraic curves arises in many applications of computer graphics and geometric and solid modeling. Previous algorithms are based on techniques from elimination theory or on subdivision and iteration. The former, however, is restricted to low-degree curves, mainly due to issues of efficiency and numerical stability. In this paper we use elimination theory and express the resultant of the equations of intersection as a matrix determinant. The matrix itself, rather than its symbolic determinant (a polynomial), is used as the representation. The problem of intersection is reduced to computing the eigenvalues and eigenvectors of a numeric matrix. The main advantage of this approach lies in its efficiency and robustness. Moreover, the numerical accuracy of these operations is well understood. For almost all cases we are able to compute accurate answers in 64-bit IEEE floating-point arithmetic. Keywords: Intersection, curves, a...
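The core reduction (root finding recast as an eigenvalue problem) can be illustrated with a toy case; the curves and the companion-matrix construction below are illustrative assumptions, not the paper's resultant formulation:

```python
import numpy as np

# Parametric curve: (x(t), y(t)) = (t, t); algebraic curve: x^2 + y^2 - 1 = 0.
# Substituting the parametrization gives 2 t^2 - 1 = 0 in the parameter t.
coeffs = np.array([2.0, 0.0, -1.0])      # 2 t^2 + 0 t - 1

# Build the companion matrix of the monic polynomial and take its
# eigenvalues: the same reduction (roots -> eigenvalues) used by np.roots.
monic = coeffs / coeffs[0]               # t^2 + a1 t + a0, a1 = 0, a0 = -1/2
C = np.array([[0.0, -monic[2]],
              [1.0, -monic[1]]])
t = np.linalg.eigvals(C)                 # parameter values of the intersections
points = np.array([(ti, ti) for ti in np.sort(t.real)])
print(points)                            # near (±1/√2, ±1/√2)
```

The eigenvalue formulation inherits the well-understood backward stability of dense eigensolvers, which is the robustness argument the abstract makes.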
Complexity of Bézout’s Theorem IV: Probability of Success, Extensions
 SIAM J. Numer. Anal
, 1996
"... � � � We estimate the probability that a given number of projective Newton steps applied to a linear homotopy of a system of n homogeneous polynomial equations in n +1 complex variables of fixed degrees will find all the roots of the system. We also extend the framework of our analysis to cover the ..."
Abstract

Cited by 65 (10 self)
 Add to MetaCart
We estimate the probability that a given number of projective Newton steps applied to a linear homotopy of a system of n homogeneous polynomial equations in n+1 complex variables of fixed degrees will find all the roots of the system. We also extend the framework of our analysis to cover the classical implicit function theorem and revisit the condition number in this context. Further complexity theory is developed.
1. Introduction. 1A. Bézout's Theorem Revisited. Let f: ℂ^{n+1} → ℂ^n be a system of homogeneous polynomials f = (f_1, ..., f_n), deg f_i = d_i, i = 1, ..., n. The linear space of such f is denoted by H_(d), where d = (d_1, ..., d_n). Consider the ...
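The linear homotopy with Newton steps can be sketched in the univariate, affine case (a simplification of the projective setting analyzed here; the start system g, target f, and step counts are made-up examples):

```python
# Linear homotopy H(x, t) = (1 - t) g(x) + t f(x) with
# g(x) = x^2 - 1 (known roots ±1) and f(x) = x^2 - 4 (target roots ±2),
# which simplifies to H(x, t) = x^2 - (1 + 3 t).
def H(x, t):  return x * x - (1.0 + 3.0 * t)
def Hx(x, t): return 2.0 * x      # dH/dx
def Ht(x, t): return -3.0         # dH/dt

def track(x, steps=100, newton_iters=3):
    dt = 1.0 / steps
    t = 0.0
    for _ in range(steps):
        x = x - (Ht(x, t) / Hx(x, t)) * dt   # Euler predictor along the path
        t += dt
        for _ in range(newton_iters):        # Newton corrector back onto H = 0
            x -= H(x, t) / Hx(x, t)
    return x

print(track(1.0), track(-1.0))    # approaches the roots of f: 2 and -2
```

The paper's question is probabilistic: how many such Newton steps suffice, with high probability, to track every root without path failure.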
Backward Error and Condition of Structured Linear Systems
 SIMAX
, 1992
"... Reports available from: ..."
(Show Context)
A New Pivoting Strategy For Gaussian Elimination
, 1996
"... . This paper discusses a method for determining a good pivoting sequence for Gaussian elimination, based on an algorithm for solving assignment problems. The worst case complexity is O(n 3 ), while in practice O(n 2:25 ) operations are sufficient. 1991 MSC Classification: 65F05 (secondary 90B80 ..."
Abstract

Cited by 44 (2 self)
 Add to MetaCart
This paper discusses a method for determining a good pivoting sequence for Gaussian elimination, based on an algorithm for solving assignment problems. The worst-case complexity is O(n^3), while in practice O(n^2.25) operations are sufficient. 1991 MSC Classification: 65F05 (secondary 90B80). Keywords: Gaussian elimination, scaled partial pivoting, I-matrix, dominant transversal, assignment problem, bipartite weighted matching.
1 Introduction
For a system of linear equations Ax = b with a square nonsingular coefficient matrix A, the most important solution algorithm is the systematic elimination method of Gauss. The basic idea of Gaussian elimination is the factorization of A as the product LU of a lower triangular matrix L with ones on its diagonal and an upper triangular matrix U, the diagonal entries of which are called the pivot elements. In general, the numerical stability of triangularization is guaranteed only if the matrix A is symmetric positive definite, diagonal...
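A sketch of the underlying idea, under the assumption that the assignment problem in question maximizes the product of pivot magnitudes (a dominant transversal); SciPy's Hungarian-method solver stands in for the paper's own algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def dominant_transversal(A):
    # Maximize prod_i |A[sigma(i), i]|, i.e. minimize sum_i -log|A[sigma(i), i]|.
    with np.errstate(divide='ignore'):
        cost = -np.log(np.abs(A))
    cost[np.isinf(cost)] = 1e30          # forbid zero pivots with a large penalty
    rows, cols = linear_sum_assignment(cost)
    perm = np.empty_like(rows)
    perm[cols] = rows
    return perm                          # row perm[i] supplies the pivot for column i

A = np.array([[0.0, 2.0, 1.0],
              [4.0, 1.0, 0.5],
              [1.0, 0.1, 3.0]])
p = dominant_transversal(A)
print(p, np.abs(A[p, np.arange(3)]))     # pivots 4, 2, 3 land on the diagonal
```

Permuting the rows by `p` before factorization places large entries on the diagonal, which is the pivoting goal the abstract describes.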
The Numerical Reliability of Econometric Software
, 1999
"... Numerical software is central to our computerized society; it is used... to analyze future options for financial markets and the economy. It is essential that it be of high ..."
Abstract

Cited by 37 (10 self)
 Add to MetaCart
Numerical software is central to our computerized society; it is used ... to analyze future options for financial markets and the economy. It is essential that it be of high ...
Self-Validated Numerical Methods and Applications
, 1997
"... erical methods. We apologize to the reader for the length and verbosity of these notes but, like Pascal, 1 we didn't have the time to make them shorter. 1 "Je n'ai fait celleci plus longue que parce que je n'ai pas eu le loisir de la faire plus courte." Blaise Pascal, ..."
Abstract

Cited by 37 (0 self)
 Add to MetaCart
... numerical methods. We apologize to the reader for the length and verbosity of these notes but, like Pascal, we didn't have the time to make them shorter. ("Je n'ai fait celle-ci plus longue que parce que je n'ai pas eu le loisir de la faire plus courte." ["I have made this one longer only because I have not had the leisure to make it shorter."] Blaise Pascal, Lettres Provinciales, XVI, 1657.)
Acknowledgements. We thank the Organizing Committee of the 21st Brazilian Mathematics Colloquium for the opportunity to present this course. We wish to thank João Comba, who helped implement a prototype affine arithmetic package in Modula-3, and Marcus Vinicius Andrade, who helped debug the C version and wrote an implicit-surface ray tracer based on it. Ronald van Iwaarden contributed an independent implementation of AA and investigated its performance on branch-and-bound global optimization algorithms. Douglas Priest and Helmut Jarausch provided code and advice for rounding-mode control. ...
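The flavor of self-validated computation can be shown with a tiny interval-arithmetic class; this is plain interval arithmetic, not the affine arithmetic these notes develop, and it ignores directed rounding for brevity:

```python
class Interval:
    # A self-validating number: the true value is guaranteed to lie in [lo, hi].
    # (A rigorous version would round lo down and hi up at every operation.)
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, hi if hi is not None else lo

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        ps = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(0.0, 1.0)
one = Interval(1.0)
print(x * (one - x))   # encloses the true range [0, 0.25] of x(1-x) on [0, 1],
                       # but overestimates it as [0, 1]: interval arithmetic
                       # ignores the correlation between the two copies of x
```

The overestimation in the last line is exactly the weakness that affine arithmetic, which tracks first-order correlations between quantities, was designed to fix.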
Solving systems of linear equations on the CELL processor using Cholesky factorization
 Trans. Parallel Distrib. Syst
, 2007
"... pioneering solutions in processor architecture. At the same time it presents new challenges for the development of numerical algorithms. One is effective exploitation of the differential between the speed of single and double precision arithmetic; the other is efficient parallelization between the s ..."
Abstract

Cited by 35 (27 self)
 Add to MetaCart
(Show Context)
The CELL processor introduces pioneering solutions in processor architecture. At the same time it presents new challenges for the development of numerical algorithms. One is effective exploitation of the differential between the speed of single- and double-precision arithmetic; the other is efficient parallelization among the short-vector SIMD cores. In this work, the first challenge is addressed by utilizing a mixed-precision algorithm for the solution of a dense symmetric positive definite system of linear equations, which delivers double-precision accuracy while performing the bulk of the work in single precision. The second challenge is approached by introducing much finer granularity of parallelization than has been used for other architectures, together with lightweight decentralized synchronization. The implementation of the computationally intensive sections gets within 90 percent of peak floating-point performance, while the implementation of the memory-intensive sections reaches within 90 percent of peak memory bandwidth. On a single CELL processor, the algorithm achieves over 170 Gflop/s when solving a symmetric positive definite system of linear equations in single precision and over 150 Gflop/s when delivering the result in double-precision accuracy.
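The mixed-precision scheme can be sketched with NumPy; this is a simplified stand-in (generic solves instead of tuned triangular kernels, and none of the CELL-specific parallelization), showing only the factor-in-single, refine-in-double structure:

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    # Factor and solve in float32 (the fast precision on hardware like CELL),
    # then recover double-precision accuracy by iterative refinement in float64.
    L = np.linalg.cholesky(A.astype(np.float32))      # bulk of the flops: single
    def solve32(r):
        y = np.linalg.solve(L, r.astype(np.float32))  # forward substitution (sketch)
        return np.linalg.solve(L.T, y).astype(np.float64)
    x = solve32(b)
    for _ in range(iters):
        r = b - A @ x                                 # residual in double precision
        x += solve32(r)                               # cheap single-precision correction
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = M @ M.T + 200 * np.eye(200)                       # symmetric positive definite
b = rng.standard_normal(200)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))  # near double-precision level
```

For well-conditioned systems each refinement step multiplies the error by roughly the condition number times single-precision epsilon, so a handful of iterations reaches double-precision accuracy.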
ADAPTIVE MULTIPRECISION PATH TRACKING
"... This article treats numerical methods for tracking an implicitly defined path. The numerical precision required to successfully track such a path is difficult to predict a priori, and indeed, it may change dramatically through the course of the path. In current practice, one must either choose a con ..."
Abstract

Cited by 33 (22 self)
 Add to MetaCart
(Show Context)
This article treats numerical methods for tracking an implicitly defined path. The numerical precision required to successfully track such a path is difficult to predict a priori, and indeed, it may change dramatically through the course of the path. In current practice, one must either choose a conservatively large numerical precision at the outset or rerun paths multiple times in successively higher precision until success is achieved. To avoid unnecessary computational cost, it would be preferable to adaptively adjust the precision as the tracking proceeds in response to the local conditioning of the path. We present an algorithm that can be set to either reactively adjust precision in response to step failure or proactively set the precision using error estimates. We then test the relative merits of reactive and proactive adaptation on several examples arising as homotopies for solving systems of polynomial equations.
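A toy version of proactive precision adaptation using mpmath; the path, the digit-budget rule, and all names below are illustrative assumptions, not the authors' algorithm:

```python
import mpmath as mp

# Toy path: H(x, t) = x^2 - (1 - t).  The root x(t) = sqrt(1 - t) becomes
# ill-conditioned as t -> 1, since dH/dx = 2x -> 0 there.
def track_proactive(t_end='0.999999', steps=200):
    x = mp.mpf(1)
    for k in range(1, steps + 1):
        t = mp.mpf(t_end) * k / steps
        # Proactive rule (simplified): the Newton solve with dH/dx = 2x loses
        # roughly -log10|2x| digits, so budget that many on top of a base of 15.
        mp.mp.dps = 15 + max(0, int(-mp.log10(2 * abs(x))))
        for _ in range(30):                # Newton corrector at that precision
            x -= (x * x - (1 - t)) / (2 * x)
    return x

x = track_proactive()
print(x)   # close to sqrt(1 - 0.999999) = 0.001
```

A reactive variant would instead keep a fixed precision, detect corrector failure (the Newton step refusing to shrink), and rerun the step at higher precision; the article compares the two policies.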