Results 1-10 of 42
Tensor Decompositions and Applications
 SIAM REVIEW
, 2009
"... This survey provides an overview of higherorder tensor decompositions, their applications, and available software. A tensor is a multidimensional or N way array. Decompositions of higherorder tensors (i.e., N way arrays with N â¥ 3) have applications in psychometrics, chemometrics, signal proce ..."
Abstract

Cited by 705 (17 self)
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, etc. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal components analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox and Tensor Toolbox, both for MATLAB, and the Multilinear Engine are examples of software packages for working with tensors.
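The CP model described in this abstract can be sketched in a few lines of NumPy; the `cp_reconstruct` helper and the factor shapes below are illustrative and are not part of the toolboxes named above.

```python
import numpy as np

def cp_reconstruct(A, B, C):
    """Reconstruct a 3-way tensor from CP factors A (IxR), B (JxR), C (KxR):
    the sum over r of the rank-one outer products a_r o b_r o c_r."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

# Sanity check: a rank-one CP model is a single outer product of three vectors.
a, b, c = np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])
T = cp_reconstruct(a[:, None], b[:, None], c[:, None])
assert np.allclose(T, np.einsum('i,j,k->ijk', a, b, c))
```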
Data-sparse approximation to a class of operator-valued functions
 Math. Comp. 74
"... ..."
(Show Context)
Low-rank Kronecker-product approximation to multidimensional nonlocal operators
 Part II. HKT representation of certain operators, Computing, 2006
"... The Kronecker tensorproduct approximation combined with the Hmatrix techniques provides an efficient tool to represent integral operators as well as certain functions F (A) of a discrete elliptic operator A in Rd with a high spatial dimension d. In particular, we approximate the functions A−1 and ..."
Abstract

Cited by 20 (6 self)
The Kronecker tensor-product approximation combined with the H-matrix techniques provides an efficient tool to represent integral operators as well as certain functions F(A) of a discrete elliptic operator A in R^d with a high spatial dimension d. In particular, we approximate the functions A^{-1} and sign(A) of a finite difference discretisation A ∈ R^{N×N} with a rather general location of the spectrum. The asymptotic complexity of our data-sparse representations can be estimated by O(n^p log^q n), p = 1, 2, with q independent of d, where n = N^{1/d} is the dimension of the discrete problem in one space direction. In this paper (Part I), we discuss several methods of separable approximation of multivariate functions. Such approximations provide the base for a tensor-product representation of operators. We discuss the asymptotically optimal sinc quadratures and sinc interpolation methods as well as the best approximations by exponential sums. These tools will be applied in Part II, continuing this paper, to the problems mentioned above.
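As a toy illustration of the exponential-sum idea mentioned in this abstract (not the paper's optimized sinc rules), one can approximate 1/x by a trapezoidal rule after the substitution t = e^s in 1/x = ∫_0^∞ e^{-xt} dt; the parameters M and h below are ad-hoc choices, not the analyzed ones.

```python
import numpy as np

def inv_exp_sum(M=60, h=0.2):
    """Nodes t_k and weights w_k such that 1/x ≈ sum_k w_k * exp(-t_k * x).

    Sinc-type trapezoidal rule for 1/x = ∫ exp(s - x e^s) ds on the real
    line; M (number of nodes per side) and h (step) are illustrative.
    """
    s = h * np.arange(-M, M + 1)
    t = np.exp(s)          # quadrature nodes t_k
    w = h * t              # weights, absorbing the Jacobian dt = e^s ds
    return t, w

t, w = inv_exp_sum()
x = np.linspace(1.0, 10.0, 50)
approx = (w[None, :] * np.exp(-np.outer(x, t))).sum(axis=1)
print(np.max(np.abs(approx - 1.0 / x)))  # small uniform error on [1, 10]
```

Because exp(-(x_1 + ... + x_d) t) factorizes over the x_i, such a sum immediately yields a separable (low Kronecker-rank) approximation of 1/(x_1 + ... + x_d), which is the mechanism behind the tensor-product operator representations surveyed here.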
Hierarchical tensor-product approximation to the inverse and related operators for high-dimensional elliptic problems
 Computing
"... The class of Hmatrices allows an approximate matrix arithmetic with almost linear complexity. In the present paper, we apply the Hmatrix technique combined with the Kronecker tensorproduct approximation (cf. [2, 20]) to represent the inverse of a discrete elliptic operator in a hypercube (0, 1)d ..."
Abstract

Cited by 19 (7 self)
The class of H-matrices allows an approximate matrix arithmetic with almost linear complexity. In the present paper, we apply the H-matrix technique combined with the Kronecker tensor-product approximation (cf. [2, 20]) to represent the inverse of a discrete elliptic operator in a hypercube (0, 1)^d ⊂ R^d in the case of a high spatial dimension d. In this data-sparse format, we also represent the operator exponential, the fractional power of an elliptic operator, as well as the solution operator of the matrix Lyapunov-Sylvester equation. The complexity of our approximations can be estimated by O(d n log^q n), where N = n^d is the discrete problem size.
Approximate Iterations for Structured Matrices
, 2005
"... Important matrixvalued functions f(A) are, e.g., the inverse A −1, the square root √ A, the sign function and the exponent. Their evaluation for large matrices arising from pdes is not an easy task and needs techniques exploiting appropriate structures of the matrices A and f(A) (often f(A) possess ..."
Abstract

Cited by 19 (10 self)
Important matrix-valued functions f(A) are, e.g., the inverse A^{-1}, the square root A^{1/2}, the sign function, and the exponential. Their evaluation for large matrices arising from PDEs is not an easy task and needs techniques that exploit appropriate structures of the matrices A and f(A) (often f(A) possesses this structure only approximately). However, intermediate matrices arising during the evaluation may lose the structure of the initial matrix, which would make the computations inefficient and even infeasible. The main result of this paper is that an iterative fixed-point-like process for the evaluation of f(A) can be transformed, under certain general assumptions, into another process which preserves the convergence rate and benefits from the underlying structure. It is shown how this result applies to matrices in a tensor format with bounded tensor rank and to the structure of the hierarchical matrix technique. We demonstrate our results by verifying all requirements in the case of the iterative computation of A^{-1} and A^{1/2}.
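A minimal example of such a fixed-point iteration is the Newton-Schulz method for A^{-1}; the `truncate` hook below stands in for the rank/format truncation that would be interleaved with each step in structured arithmetic (here it is the identity, so this sketch omits the structured part that is the paper's actual subject).

```python
import numpy as np

def newton_schulz_inverse(A, steps=30, truncate=lambda X: X):
    """Fixed-point iteration X_{k+1} = X_k (2I - A X_k) converging to A^{-1}.

    `truncate` is a placeholder for a structure-preserving truncation
    (e.g. back to bounded tensor rank) applied after every step.
    """
    n = A.shape[0]
    # Standard convergent starting guess: X_0 = A^T / (||A||_1 ||A||_inf).
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I2 = 2.0 * np.eye(n)
    for _ in range(steps):
        X = truncate(X @ (I2 - A @ X))
    return X

A = np.array([[4.0, 1.0], [1.0, 3.0]])
X = newton_schulz_inverse(A)
print(np.linalg.norm(X @ A - np.eye(2)))  # close to machine precision
```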
Tensor-structured numerical methods in scientific computing: Survey on recent advances
 Chemometrics and Intelligent Laboratory Systems
, 2011
"... In the present paper, we give a survey of the recent results and outline future prospects of the tensorstructured numerical methods in applications to multidimensional problems in scientific computing. The guiding principle of the tensor methods is an approximation of multivariate functions and ope ..."
Abstract

Cited by 18 (7 self)
In the present paper, we give a survey of the recent results and outline future prospects of the tensor-structured numerical methods in applications to multidimensional problems in scientific computing. The guiding principle of the tensor methods is an approximation of multivariate functions and operators relying on a certain separation of variables. Along with the traditional canonical and Tucker models, we focus on the recent quantics-TT tensor approximation method that allows one to represent N^d tensors with log-volume complexity, O(d log N). We outline how these methods can be applied in the framework of tensor truncated iteration for the solution of high-dimensional elliptic/parabolic equations and parametric PDEs. Numerical examples demonstrate that the tensor-structured methods have proved their value in application to various computational problems arising in quantum chemistry and in multidimensional/parametric FEM/BEM modeling: the tool apparently works and gives promise for future use in challenging high-dimensional applications.
AMS Subject Classification: 65F30, 65F50, 65N35, 65F10
Key words: High-dimensional problems, rank-structured tensor approximation, quantics folding of vectors, matrix-valued functions, FEM/BEM, computational quantum chemistry, stochastic PDEs.
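The quantics folding mentioned above can be seen on a toy example: reshape a length-2^d vector into a 2×···×2 tensor; for an exponential vector every unfolding then has rank one, so O(d) parameters suffice instead of 2^d. The check below is hand-rolled and uses no actual TT machinery.

```python
import numpy as np

d = 10
q = 0.9
v = q ** np.arange(2 ** d)    # exponential vector of length 2^d = 1024
T = v.reshape([2] * d)        # quantics folding into a 2 x ... x 2 tensor

# v_i = q^i separates over the binary digits of i, so every TT-style
# unfolding of the folded tensor is a rank-one matrix.
ranks = [np.linalg.matrix_rank(T.reshape(2 ** (k + 1), -1)) for k in range(d - 1)]
print(ranks)  # all equal to 1
```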
Swamp reducing technique for tensor decompositions, submitted for publication
, 2008
"... There are numerous applications of tensor analysis in signal processing, such as, blind multiuser separationequalizationdetection and blind identification. As the applicability of tensor analysis widens, the numerical techniques must improve to accommodate new data. We present a new numerical me ..."
Abstract

Cited by 17 (5 self)
There are numerous applications of tensor analysis in signal processing, such as blind multi-user separation-equalization-detection and blind identification. As the applicability of tensor analysis widens, the numerical techniques must improve to accommodate new data. We present a new numerical method for tensor analysis. The method is based on iterated Tikhonov regularization and a parameter choice rule. Together these elements dramatically accelerate the well-known Alternating Least-Squares method.
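A bare-bones version of ALS with a fixed Tikhonov term gives the flavor (the paper's iterated regularization and parameter-choice rule are more elaborate); `cp_als_tikhonov`, `khatri_rao`, and their defaults are illustrative names, not the paper's code.

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product of B (J x R) and C (K x R)."""
    return np.stack([np.kron(B[:, r], C[:, r]) for r in range(B.shape[1])], axis=1)

def cp_als_tikhonov(T, R, lam=1e-6, sweeps=50, seed=0):
    """ALS for a rank-R CP model of a 3-way tensor T; each least-squares
    subproblem is regularized by lam * I to tame ill-conditioned steps."""
    I, J, K = T.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))
    for _ in range(sweeps):
        G = (B.T @ B) * (C.T @ C) + lam * np.eye(R)
        A = T.reshape(I, J * K) @ khatri_rao(B, C) @ np.linalg.inv(G)
        G = (A.T @ A) * (C.T @ C) + lam * np.eye(R)
        B = np.transpose(T, (1, 0, 2)).reshape(J, I * K) @ khatri_rao(A, C) @ np.linalg.inv(G)
        G = (A.T @ A) * (B.T @ B) + lam * np.eye(R)
        C = np.transpose(T, (2, 0, 1)).reshape(K, I * J) @ khatri_rao(A, B) @ np.linalg.inv(G)
    return A, B, C
```

Each update solves a regularized normal equation; the Hadamard product of Gram matrices is the Gram matrix of the Khatri-Rao factor, which keeps every step at O(R^2) extra cost.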
Low-rank Kronecker-product approximation to multidimensional nonlocal operators. Part I. Separable approximation of multivariate functions
 Preprint 29, Max-Planck-Institut für Mathematik in den Naturwissenschaften, Leipzig
, 2005
"... This article is the second part continuing Part I [16]. We apply theHmatrix techniques combined with the Kronecker tensorproduct approximation to represent integral operators as well as certain functions F (A) of a discrete elliptic operator A in a hypercube (0, 1)d ∈ Rd in the case of a high spat ..."
Abstract

Cited by 17 (13 self)
This article is the second part, continuing Part I [16]. We apply the H-matrix techniques combined with the Kronecker tensor-product approximation to represent integral operators as well as certain functions F(A) of a discrete elliptic operator A in a hypercube (0, 1)^d ⊂ R^d in the case of a high spatial dimension d. We focus on the approximation of the operator-valued functions A^{-σ}, σ > 0, and sign(A) for a class of finite difference discretisations A ∈ R^{N×N}. The asymptotic complexity of our data-sparse representations can be estimated by O(n^p log^q n), p = 1, 2, with q independent of d, where n = N^{1/d} is the dimension of the discrete problem in one space direction.
Multigrid accelerated tensor approximation of function-related multidimensional arrays
 SIAM J. Sci. Comput
"... Abstract. In this paper, we describe and analyze a novel tensor approximation method for discretized multidimensional functions and operators in R d, based on the idea of multigrid acceleration. The approach stands on successive reiterations of the orthogonal Tucker tensor approximation on a sequenc ..."
Abstract

Cited by 16 (7 self)
In this paper, we describe and analyze a novel tensor approximation method for discretized multidimensional functions and operators in R^d, based on the idea of multigrid acceleration. The approach stands on successive reiterations of the orthogonal Tucker tensor approximation on a sequence of nested refined grids. On the one hand, it provides a good initial guess for the nonlinear iterations to find the approximating subspaces on finer grids; on the other hand, it allows us to transfer from the coarse to the fine grids the important data-structure information on the location of the so-called most important fibers in directional unfolding matrices. The method exhibits linear complexity with respect to the size of the data representing the input tensor. In particular, if the target tensor is given in the rank-R canonical model, then our approximation method is proved to have linear scaling in the univariate grid size n and in the input rank R. The method is tested by three-dimensional (3D) electronic structure calculations. For the multigrid accelerated low Tucker-rank approximation of all-electron densities having strong nuclear cusps, we obtain high resolution of their 3D convolution product with the Newton potential. An accuracy of order 10^{-6} in the max-norm is achieved on large n × n × n grids up to n = 1.6 · 10^4, with a time scale of several minutes.
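The single-grid building block of this scheme, an orthogonal Tucker approximation via the truncated higher-order SVD, can be sketched as follows; the multigrid nesting and the most-important-fiber heuristics of the paper are not shown, and `hosvd`/`tucker_reconstruct` are illustrative helpers.

```python
import numpy as np

def hosvd(T, ranks):
    """Truncated HOSVD of a 3-way tensor: orthogonal factors U_k and a
    core G with T ≈ G x_1 U_1 x_2 U_2 x_3 U_3."""
    U = []
    for mode, r in enumerate(ranks):
        # SVD of the mode-k unfolding gives the dominant mode-k subspace.
        M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        u, _, _ = np.linalg.svd(M, full_matrices=False)
        U.append(u[:, :r])
    G = T
    for u in U:
        G = np.tensordot(G, u, axes=([0], [0]))  # contract leading mode, cycling axes
    return G, U

def tucker_reconstruct(G, U):
    T = G
    for u in U:
        T = np.tensordot(T, u, axes=([0], [1]))  # re-expand each mode in turn
    return T

# With full ranks the decomposition is exact.
T = np.random.default_rng(0).standard_normal((3, 4, 5))
G, U = hosvd(T, (3, 4, 5))
assert np.allclose(tucker_reconstruct(G, U), T)
```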