Results 1–10 of 15
Numerical solution of saddle point problems
Acta Numerica
, 2005
Cited by 306 (25 self)
Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has been a surge of interest in saddle point problems, and numerous solution techniques have been proposed for systems of this type. The aim of this paper is to present and discuss a large selection of solution methods for linear systems in saddle point form, with an emphasis on iterative methods for large and sparse problems.
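As a concrete illustration of the kind of system this survey covers, the sketch below (my own toy example, not taken from the paper) assembles a small symmetric indefinite saddle point matrix K = [[A, Bᵀ], [B, 0]] and solves it with MINRES, one of the Krylov methods applicable to such indefinite systems. The block sizes and entries are arbitrary choices for the demo.

```python
import numpy as np
from scipy.sparse import bmat, csr_matrix, eye
from scipy.sparse.linalg import minres

n, m = 4, 1
A = eye(n, format="csr") * 2.0       # SPD (1,1) block
B = csr_matrix(np.ones((m, n)))      # constraint block (full row rank)

# Symmetric indefinite saddle point matrix K = [[A, B^T], [B, 0]]
K = bmat([[A, B.T], [B, None]], format="csr")

rhs = np.ones(n + m)
x, info = minres(K, rhs)             # info == 0 signals convergence
```

MINRES (rather than CG) is used because K is symmetric but indefinite; the survey discusses many more specialized options, such as block preconditioners and Uzawa-type iterations.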
On the solution of equality constrained quadratic programming problems arising . . .
, 1998
Predicting Structure In Sparse Matrix Computations
SIAM J. Matrix Anal. Appl.
, 1994
Cited by 51 (5 self)
Many sparse matrix algorithms (for example, solving a sparse system of linear equations) begin by predicting the nonzero structure of the output of a matrix computation from the nonzero structure of its input. This paper is a catalog of ways to predict nonzero structure. It contains known results for problems including various matrix factorizations, and new results for problems including some eigenvector computations.
Key words. sparse matrix algorithms, graph theory, matrix factorization, systems of linear equations, eigenvectors
AMS(MOS) subject classifications. 15A18, 15A23, 65F50, 68R10
1. Introduction. A sparse matrix algorithm is an algorithm that performs a matrix computation in such a way as to take advantage of the zero/nonzero structure of the matrices involved. Usually this means not explicitly storing or manipulating some or all of the zero elements; sometimes sparsity can also be exploited to work on different parts of a matrix problem in parallel. Large sparse matr...
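A classic instance of the structure prediction this catalog covers is symbolic elimination: the fill of a Cholesky factor can be predicted from the graph of the matrix alone, with no numerical values. A minimal sketch (generic textbook "elimination game", not code from the paper): eliminating a vertex pairwise connects its higher-numbered neighbors, and each edge so created is a predicted fill entry.

```python
def symbolic_cholesky_fill(n, edges):
    """Predict fill edges of the Cholesky factor of an n x n symmetric
    matrix whose off-diagonal nonzeros are the given (i, j) pairs, i < j.
    Returns the set of fill positions created by elimination in order 0..n-1."""
    adj = {v: set() for v in range(n)}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    fill = set()
    for k in range(n):
        # Higher-numbered neighbors of k (in the current, fill-augmented graph)
        higher = {v for v in adj[k] if v > k}
        for u in higher:
            for w in higher:
                if u < w and w not in adj[u]:
                    adj[u].add(w)
                    adj[w].add(u)
                    fill.add((u, w))
    return fill
```

For a star graph with center 0 eliminated first, the leaves become a clique, so all leaf-leaf positions are predicted fill; this is why ordering heuristics matter so much in sparse factorization.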
Most tensor problems are NP-hard
CoRR
, 2009
Cited by 42 (6 self)
The idea that one might extend numerical linear algebra, the collection of matrix computational methods that form the workhorse of scientific and engineering computing, to numerical multilinear algebra, an analogous collection of tools involving hypermatrices/tensors, appears very promising and has attracted a lot of attention recently. We examine here the computational tractability of some core problems in numerical multilinear algebra. We show that tensor analogues of several standard problems that are readily computable in the matrix (i.e. 2-tensor) case are NP-hard. Our list here includes: determining the feasibility of a system of bilinear equations; determining an eigenvalue, a singular value, or the spectral norm of a 3-tensor; determining a best rank-1 approximation to a 3-tensor; and determining the rank of a 3-tensor over R or C. Hence making tensor computations feasible is likely to be a challenge.
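For contrast with these hardness results: in practice the best rank-1 approximation problem is attacked heuristically, for example by the higher-order power method (alternating rank-1 updates). The sketch below is a generic illustration of that heuristic, not code from the paper; it can stall at local optima in general, which is consistent with the problem being NP-hard.

```python
import numpy as np

def best_rank1_hopm(T, iters=200, seed=0):
    """Higher-order power method: heuristic best rank-1 approximation
    sigma * (u x v x w) of a 3-tensor T, with unit vectors u, v, w."""
    rng = np.random.default_rng(seed)
    u, v, w = (rng.standard_normal(d) for d in T.shape)
    u, v, w = (x / np.linalg.norm(x) for x in (u, v, w))
    for _ in range(iters):
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    sigma = np.einsum('ijk,i,j,k->', T, u, v, w)
    return sigma, u, v, w

# Demo on an exactly rank-1 tensor: T = a x b x c with |a||b||c| = 60
a, b, c = np.array([3., 0.]), np.array([0., 4.]), np.array([5., 0.])
T = np.einsum('i,j,k->ijk', a, b, c)
sigma, u, v, w = best_rank1_hopm(T)
```

On an exactly rank-1 input the iteration recovers the tensor; on generic tensors it only guarantees a stationary point.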
ON THE COMPUTATION OF NULL SPACES OF SPARSE RECTANGULAR MATRICES
Cited by 12 (0 self)
Computing the null space of a sparse matrix, sometimes a rectangular sparse matrix, is an important part of some computations, such as embeddings and parametrization of meshes. We propose an efficient and reliable method to compute an orthonormal basis of the null space of a sparse square or rectangular matrix (usually with more rows than columns). The main computational component in our method is a sparse LU factorization with partial pivoting of the input matrix; this factorization is significantly cheaper than the QR factorization used in previous methods. The paper analyzes important theoretical aspects of the new method and demonstrates experimentally that it is efficient and reliable.
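The identity that lets an LU factorization carry the expensive work can be seen in a dense toy example: with A = PLU, P a permutation and L of full column rank, null(A) = null(U). The sketch below only demonstrates that identity with SciPy; the paper's actual method operates on a sparse LU and does not fall back on the SVD used here for the demo.

```python
import numpy as np
from scipy.linalg import lu, null_space

# Rectangular matrix (more rows than columns) with a 1-dimensional null space
A = np.array([[1., 2.],
              [2., 4.],
              [3., 6.]])

P, L, U = lu(A)          # A = P @ L @ U; P is a permutation, L has full column rank
# Hence null(A) = null(U): only the small triangular factor matters.
N = null_space(U)        # orthonormal basis (SVD-based, for the demo only)
```

Since U here is just 2 x 2, extracting the null space from it is trivially cheap compared to working on A directly, which is the point of the paper's approach at scale.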
Supersparse black box rational function interpolation
 Manuscript
, 2011
Cited by 5 (2 self)
We present a method for interpolating a supersparse black-box rational function with rational coefficients, for example, a ratio of binomials or trinomials with very high degree. We input a black-box rational function, as well as an upper bound on the number of nonzero terms and an upper bound on the degree. The result is found by interpolating the rational function modulo a small prime p, and then applying an effective version of Dirichlet’s Theorem on primes in an arithmetic progression to progressively lift the result to larger primes. Eventually we reach a prime number that is larger than the input degree bound and can recover the original function exactly. In a variant, the initial prime p is large, but the exponents of the terms are known modulo larger and larger factors of p − 1. The algorithm, as presented, is conjectured to be polylogarithmic in the degree, but exponential in the number of terms. It is therefore very effective for rational functions with a small number of nonzero terms, such as a ratio of binomials, but quickly becomes ineffective for a high number of terms. The algorithm is oblivious to whether the numerator and denominator have a common factor: it recovers the sparse form of the rational function rather than the reduced form, which could be dense. We have experimentally tested the algorithm in the case of under 10 terms in numerator and denominator combined and observed its conjectured high efficiency.
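The core phenomenon (evaluations over Z_p only determine exponents modulo p − 1) is easiest to see for a single-term black box. The toy sketch below, with made-up parameters and a brute-force discrete log that only works at toy sizes, recovers the exponent of a monomial modulo p − 1; lifting such residues through larger primes to the true exponent is the job of the full algorithm.

```python
def recover_monomial_exponent(f, p, g):
    """f: black box computing c * x**e over Z_p (c nonzero mod p).
    g: a primitive root mod p. Returns e mod (p - 1) via discrete log.
    Brute-force log is for illustration only; real sizes need better."""
    c = f(1) % p                          # f(1) = c
    target = f(g) * pow(c, -1, p) % p     # g**e mod p
    for e in range(p - 1):
        if pow(g, e, p) == target:
            return e
    raise ValueError("no exponent found; is g a primitive root mod p?")

# Hypothetical black box f(x) = 7 * x**9001, degree far above the prime
p, g = 101, 2                             # 2 is a primitive root mod 101
f = lambda x: 7 * pow(x, 9001, p) % p
e_mod = recover_monomial_exponent(f, p, g)
```

Here only 9001 mod 100 is recoverable from arithmetic mod 101; with a prime exceeding the degree bound, the residue pins down the exponent exactly.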
Incremental methods for simple problems in time series: algorithms and experiments
In Ninth International Database Engineering and Applications Symposium (IDEAS 2005)
, 2005
Cited by 3 (3 self)
A time series (or equivalently a data stream) consists of data arriving in time order. Single or multiple data streams arise in fields including physics, finance, medicine, and music, to name a few. Often the data comes from sensors (in physics and medicine, for example) whose data rates continue to improve dramatically as sensor technology improves and as the number of sensors increases. Fast algorithms therefore become ever more critical in order to distill knowledge from the data. This paper presents our recent work regarding the incremental computation of various primitives: windowed correlation, matching pursuit, sparse null space discovery, and elastic burst detection. The incremental idea reflects the fact that recent data is more important than older data. Our StatStream system contains an implementation of these algorithms, permitting us to do empirical studies on both simulated and real data.
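The flavor of these incremental primitives can be seen for windowed correlation: by maintaining running sums, the Pearson correlation over the last w points is updated in O(1) per arriving pair rather than recomputed in O(w). This is a generic textbook sketch, not StatStream's actual implementation:

```python
from collections import deque
from math import sqrt

class SlidingCorrelation:
    """Pearson correlation over the last w (x, y) pairs, maintained
    incrementally from running sums: O(1) work per new pair."""
    def __init__(self, w):
        self.w = w
        self.xs, self.ys = deque(), deque()
        self.sx = self.sy = self.sxx = self.syy = self.sxy = 0.0

    def push(self, x, y):
        self.xs.append(x); self.ys.append(y)
        self.sx += x; self.sy += y
        self.sxx += x * x; self.syy += y * y; self.sxy += x * y
        if len(self.xs) > self.w:           # evict the oldest pair
            ox, oy = self.xs.popleft(), self.ys.popleft()
            self.sx -= ox; self.sy -= oy
            self.sxx -= ox * ox; self.syy -= oy * oy; self.sxy -= ox * oy

    def corr(self):
        n = len(self.xs)
        num = n * self.sxy - self.sx * self.sy
        den = sqrt(n * self.sxx - self.sx ** 2) * sqrt(n * self.syy - self.sy ** 2)
        return num / den
```

In floating point, long-running sum maintenance accumulates rounding error; production systems periodically recompute the sums from scratch, a detail omitted here.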
Algorithm 933: Reliable Calculation of Numerical Rank, Null Space Bases, Pseudoinverse Solutions, and Basic Solutions using SuiteSparseQR
The SPQR_RANK package contains routines that calculate the numerical rank of large, sparse, numerically rank-deficient matrices. The routines can also calculate orthonormal bases for numerical null spaces, approximate pseudoinverse solutions to least squares problems involving rank-deficient matrices, and basic solutions to these problems. The algorithms are based on SPQR from SuiteSparseQR (ACM Transactions on Mathematical Software 38, Article 8, 2011). SPQR is a high-performance routine for forming QR factorizations of large, sparse matrices. It returns an estimate for the numerical rank that is usually, but not always, correct. The new routines improve the accuracy of the numerical rank calculated by SPQR and reliably determine the numerical rank in the sense that, based on extensive testing with matrices from applications, the numerical rank is almost always accurately determined when our methods report that the numerical rank should be correct. Reliable determination of numerical rank is critical to the other calculations in the package. The routines work well for matrices with either small or large null space dimensions.
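The basic idea of estimating numerical rank from a QR factorization can be sketched with dense, column-pivoted QR: count the diagonal entries of R above a tolerance. This illustration is far cruder than SPQR_RANK, which adds verification passes precisely because a single factorization's estimate is "usually, but not always, correct".

```python
import numpy as np
from scipy.linalg import qr

def numerical_rank_qr(A, tol=None):
    """Estimate numerical rank from a column-pivoted QR factorization:
    with A P = Q R, count diagonal entries of R exceeding a tolerance.
    Illustration only; no verification of the kind SPQR_RANK performs."""
    R = qr(A, mode='r', pivoting=True)[0]
    d = np.abs(np.diag(R))               # nonincreasing under column pivoting
    if tol is None:
        tol = max(A.shape) * np.finfo(A.dtype).eps * (d[0] if d.size else 0.0)
    return int(np.count_nonzero(d > tol))

# Rank-deficient example: the third column is the sum of the first two
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.],
              [2., 1., 3.]])
```

The tolerance choice is the delicate part: too tight and rounding noise inflates the rank, too loose and genuinely small singular directions are discarded, which is why the package invests in reliability checks.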