Tensor Decompositions and Applications
SIAM Review, 2009
Cited by 723 (18 self)
Abstract
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, etc. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal components analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox and Tensor Toolbox, both for MATLAB, and the Multilinear Engine are examples of software packages for working with tensors.
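The defining property of CP — a tensor written as a sum of rank-one terms — is easy to make concrete. The NumPy sketch below (function name and layout are our own; the Tensor Toolbox and N-way Toolbox mentioned in the abstract provide production implementations in MATLAB) rebuilds a tensor from its CP factor matrices:

```python
import numpy as np

def cp_reconstruct(factors, weights=None):
    """Rebuild a tensor from CP factor matrices.

    factors[i] has shape (n_i, R); the r-th rank-one term is the outer
    product of column r of every factor, optionally scaled by weights[r].
    """
    R = factors[0].shape[1]
    if weights is None:
        weights = np.ones(R)
    T = np.zeros(tuple(f.shape[0] for f in factors))
    for r in range(R):
        term = weights[r]
        for f in factors:
            term = np.multiply.outer(term, f[:, r])  # grow one mode at a time
        T += term
    return T
```

For a rank-1 example, `cp_reconstruct([a[:, None], b[:, None], c[:, None]])` equals `np.einsum('i,j,k->ijk', a, b, c)`.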
Hierarchical singular value decomposition of tensors
 SIAM Journal on Matrix Analysis and Applications
Cited by 178 (11 self)
Abstract
We define the hierarchical singular value decomposition (SVD) for tensors of order d ≥ 2. This hierarchical SVD has properties like the matrix SVD (and collapses to the SVD for d = 2), and we prove these. In particular, one can find low-rank (almost) best approximations in a hierarchical format (H-Tucker) which requires only O((d − 1)k^3 + dnk) parameters, where d is the order of the tensor, n the size of the modes, and k the (hierarchical) rank. The H-Tucker format is a specialization of the Tucker format and contains as a special case all (canonical) rank-k tensors. Based on this new concept of a hierarchical SVD we present algorithms for hierarchical tensor calculations allowing for a rigorous error analysis. The complexity of the truncation (finding lower-rank approximations to hierarchical rank-k tensors) is in O((d − 1)k^4 + dnk^2) and the attainable accuracy is just 2–3 digits less than machine precision.
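To see why the O((d − 1)k^3 + dnk) parameter count matters, compare it with dense storage for some hypothetical sizes (the numbers below are illustrative, not from the paper):

```python
# Illustrative sizes only: order d, mode size n, hierarchical rank k.
d, n, k = 10, 100, 10

dense = n ** d                          # full tensor: 100**10 = 10**20 entries
htucker = (d - 1) * k**3 + d * n * k    # (d-1)k^3 + dnk = 19,000 parameters

print(f"dense: {dense:.1e}, H-Tucker: {htucker}")
```

A 10^20-entry tensor cannot even be stored, while its H-Tucker representation at rank 10 fits in a few kilobytes.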
Algorithms for numerical analysis in high dimensions
SIAM J. Sci. Comput., 2005
Cited by 90 (11 self)
Abstract
Nearly every numerical analysis algorithm has computational complexity that scales exponentially in the underlying physical dimension. The separated representation, introduced previously, allows many operations to be performed with scaling that is formally linear in the dimension. In this paper we further develop this representation by: (i) discussing the variety of mechanisms that allow it to be surprisingly efficient; (ii) addressing the issue of conditioning; (iii) presenting algorithms for solving linear systems within this framework; and (iv) demonstrating methods for dealing with antisymmetric functions, as arise in the multiparticle Schrödinger equation in quantum mechanics. Numerical examples are given.
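The "formally linear in dimension" scaling comes from keeping everything as sums of products of one-dimensional pieces. As a minimal sketch (the function name and storage layout are ours, not the paper's), here is a d-dimensional inner product computed direction by direction:

```python
import numpy as np

def sep_inner(F, G):
    """Inner product of two grid functions stored in separated form.

    F and G are lists of d arrays of shape (r, n) and (r2, n): term l of f
    is the product over directions i of the 1-D vector F[i][l]. The cost is
    O(d * r * r2 * n) instead of the O(n**d) of forming the full tensors.
    """
    acc = np.ones((F[0].shape[0], G[0].shape[0]))
    for Fi, Gi in zip(F, G):
        acc *= Fi @ Gi.T        # all pairwise 1-D inner products, this direction
    return acc.sum()
```

For separable functions this agrees exactly with flattening the full tensors and taking their dot product, but the full tensors are never formed.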
Hierarchical Kronecker tensor-product approximations
Mathematik in den Naturwissenschaften, Leipzig, Preprint No., 2003
Cited by 42 (23 self)
Abstract
The goal of this work is the presentation of some new formats which are useful for the approximation of (large and dense) matrices related to certain classes of functions and nonlocal (integral, integro-differential) operators, especially for high-dimensional problems. These new formats elaborate on a sum of few terms of Kronecker products of smaller-sized matrices (cf. [34, 35]). In addition we require that the Kronecker factors possess a certain data-sparse structure. Depending on the construction of the Kronecker factors we are led to so-called “profile-low-rank matrices” or hierarchical matrices (cf. [17, 18]). We give a proof for the existence of such formats and expound a gainful combination of the Kronecker-tensor-product structure and the arithmetic for hierarchical matrices.
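For the one-term case, the nearest Kronecker product can be computed with the classical Van Loan–Pitsianis rearrangement, which turns the problem into a rank-one matrix approximation. The sketch below is our code, not the paper's, and omits the data-sparse structure imposed on the Kronecker factors:

```python
import numpy as np

def nearest_kron(A, m1, n1, m2, n2):
    """Best single Kronecker product B (m1 x n1) kron C (m2 x n2) fitting A.

    Van Loan-Pitsianis: rearrange A so that ||A - kron(B, C)||_F equals the
    distance from a rearranged matrix to the rank-one matrix vec(B) vec(C)^T,
    then take the leading SVD term of the rearrangement.
    """
    R = np.empty((m1 * n1, m2 * n2))
    for j in range(n1):                 # enumerate blocks in vec(B) order
        for i in range(m1):
            block = A[i * m2:(i + 1) * m2, j * n2:(j + 1) * n2]
            R[j * m1 + i] = block.reshape(-1, order="F")   # vec of the block
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    B = (np.sqrt(s[0]) * U[:, 0]).reshape(m1, n1, order="F")
    C = (np.sqrt(s[0]) * Vt[0]).reshape(m2, n2, order="F")
    return B, C
```

If A is exactly a Kronecker product, the rearranged matrix has rank one, so `np.kron(B, C)` reproduces A to machine precision; summing further SVD terms gives the "sum of few terms" format the abstract describes.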
On Approximation of Functions by Exponential Sums
2005
Cited by 40 (7 self)
Abstract
We introduce a new approach, and associated algorithms, for the efficient approximation of functions and sequences by short linear combinations of exponential functions with complex-valued exponents and coefficients. These approximations are obtained for a finite but arbitrary accuracy and typically have significantly fewer terms than Fourier representations. We present several examples of these approximations and discuss applications to fast algorithms. In particular, we show how to obtain a short separated representation (sum of products of one-dimensional functions) of certain multidimensional Green’s functions.
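The paper's algorithms for choosing near-optimal exponents are more involved, but the representation itself can be illustrated with the classical Prony method (a sketch, not the paper's approach): given 2M or more samples h[k] = Σ_j c_j z_j^k, the nodes z_j are roots of a linear-prediction polynomial and the coefficients follow from a Vandermonde least-squares solve.

```python
import numpy as np

def prony(h, M):
    """Classical Prony recovery of nodes z_j and weights c_j from samples
    h[k] = sum_j c_j * z_j**k, k = 0..len(h)-1, assuming M terms."""
    N = len(h)
    rows = N - M
    # Linear prediction: h[k+M] = -sum_l p[l] * h[k+l]
    H = np.column_stack([h[l:l + rows] for l in range(M)])
    p, *_ = np.linalg.lstsq(H, -h[M:M + rows], rcond=None)
    z = np.roots(np.concatenate(([1.0], p[::-1])))      # characteristic roots
    V = np.vander(z, N, increasing=True).T              # V[k, j] = z_j**k
    c, *_ = np.linalg.lstsq(V, h.astype(complex), rcond=None)
    return z, c
```

For example, from ten samples of h[k] = 2·0.9^k + 0.5^k, `prony(h, 2)` recovers the nodes 0.9 and 0.5 and the weights 2 and 1.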
Tensor-CUR Decompositions for Tensor-Based Data
SIAM J. Matrix Anal. Appl., 2008
Cited by 36 (10 self)
Abstract
Motivated by numerous applications in which the data may be modeled by a variable subscripted by three or more indices, we develop a tensor-based extension of the matrix CUR decomposition. The tensor-CUR decomposition is most relevant as a data analysis tool when the data consist of one mode that is qualitatively different from the others. In this case, the tensor-CUR decomposition approximately expresses the original data tensor in terms of a basis consisting of underlying subtensors that are actual data elements and thus have a natural interpretation in terms of the processes generating the data. Assume the data may be modeled as a (2+1)-tensor, i.e., an m×n×p tensor A in which the first two modes are similar and the third is qualitatively different. We refer to each of the p different m×n matrices as “slabs” and each of the mn different p-vectors as “fibers.” In this case, the tensor-CUR algorithm computes an approximation to the data tensor A that is of the form CUR, where C is an m×n×c tensor consisting of a small number c of the slabs, R is an r×p matrix consisting of a small number r of the fibers, and U is an appropriately defined and easily computed c×r encoding matrix. Both C and R may be chosen by randomly sampling either slabs or fibers according to a judiciously chosen and data-dependent probability distribution, and both c and r depend on a rank parameter k, an error parameter ɛ, and a failure probability δ. …
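A stripped-down version of this construction fits in a few lines of NumPy if slabs and fibers are sampled uniformly rather than with the paper's data-dependent probabilities (so this is a sketch of the shape of the algorithm, not the analyzed method): unfold A into an (mn)×p matrix whose columns are vectorized slabs and whose rows are fibers, then take the pseudoinverse of the sampled intersection as the encoding matrix U.

```python
import numpy as np

def tensor_cur(A, c, r, rng):
    """Uniform-sampling sketch of a tensor-CUR approximation of an
    m x n x p tensor whose third mode is the 'different' one."""
    m, n, p = A.shape
    M = A.reshape(m * n, p)      # column k = vectorized slab k; rows = fibers
    S = rng.choice(p, size=c, replace=False)        # sampled slab indices
    F = rng.choice(m * n, size=r, replace=False)    # sampled fiber indices
    C = M[:, S]                  # the c sampled slabs, (mn) x c
    R = M[F, :]                  # the r sampled fibers, r x p
    U = np.linalg.pinv(R[:, S])  # c x r encoding matrix from the intersection
    return (C @ U @ R).reshape(m, n, p)
```

When the mode-3 unfolding has low rank and enough slabs and fibers are sampled, this reconstruction is exact; the paper's contribution is the sampling distribution and error analysis for the general case.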
A randomized algorithm for a tensor-based generalization of the singular value decomposition
2007
Wave propagation using bases for bandlimited functions
Wave Motion, 2005
Cited by 28 (4 self)
Abstract
We develop a two-dimensional solver for the acoustic wave equation with spatially varying coefficients. In what is a new approach, we use a basis of approximate prolate spheroidal wavefunctions and construct derivative operators that incorporate boundary and interface conditions. Writing the wave equation as a first-order system, we evolve the equation in time using the matrix exponential. Computation of the matrix exponential requires efficient representation of operators in two dimensions, and for this purpose we use short sums of one-dimensional operators. We also use a partitioned low-rank representation in one dimension to further speed up the algorithm. We demonstrate that the method significantly reduces numerical dispersion and computational time when compared with a fourth-order finite difference scheme in space and an explicit fourth-order Runge–Kutta solver in time.
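The time-stepping idea — write the wave equation as a first-order system and advance it with the matrix exponential — can be sketched in one spatial dimension, with a plain finite-difference Laplacian standing in for the paper's basis of approximate prolate spheroidal wavefunctions (all sizes below are illustrative):

```python
import numpy as np
from scipy.linalg import expm

# u_tt = c^2 u_xx as the first-order system w' = A w for w = (u, u_t),
# advanced one step at a time by the propagator P = exp(dt * A).
n, c, dx, dt = 50, 1.0, 0.02, 0.005
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx**2            # Dirichlet Laplacian
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [c**2 * L,         np.zeros((n, n))]])
P = expm(dt * A)                                       # one-step propagator

def energy(w):
    # Discrete energy ||u_t||^2 + c^2 u^T (-L) u, conserved exactly by the
    # exponential of the semi-discrete system.
    return w[n:] @ w[n:] - c**2 * (w[:n] @ (L @ w[:n]))

x = np.linspace(0.0, 1.0, n)
w = np.concatenate([np.exp(-100.0 * (x - 0.5)**2), np.zeros(n)])
e0 = energy(w)
for _ in range(10):
    w = P @ w                                          # advance ten steps
```

Because the propagator is the exact exponential of the semi-discrete system, the discrete energy is conserved to roundoff step after step, which is one reason the approach avoids the dispersion of explicit Runge–Kutta stepping.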