Results 11–20 of 100
DECOMPOSITION OF HOMOGENEOUS POLYNOMIALS WITH LOW RANK
Mathematische Zeitschrift, 2011
"... Let F be a homogeneous polynomial of degree d in m + 1 variables defined over an algebraically closed field of characteristic 0 and suppose that F belongs to the sth secant variety of the duple Veronese embedding of Pm into P (m+d d)−1 but that its minimal decomposition as a sum of dth powers of ..."
Abstract

Cited by 19 (13 self)
 Add to MetaCart
Let F be a homogeneous polynomial of degree d in m + 1 variables defined over an algebraically closed field of characteristic 0, and suppose that F belongs to the s-th secant variety of the d-uple Veronese embedding of P^m into P^(binom(m+d, d) − 1), but that its minimal decomposition as a sum of d-th powers of linear forms M_1, ..., M_r is F = M_1^d + ··· + M_r^d with r > s. We show that if s + r ≤ 2d + 1, then such a decomposition of F can be split into two parts: one of them consists of linear forms that can be written using only two variables; the other part is uniquely determined once the first part is fixed. We also obtain a uniqueness theorem for the minimal decomposition of F if r is at most d and a mild condition is satisfied.
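A concrete illustration of such a decomposition (a standard textbook example, not taken from the paper): with d = 2, r = 2, m = 1, the binary quadric xy is a sum of squares of linear forms in two variables over an algebraically closed field:

```latex
xy \;=\; \Bigl(\tfrac{x+y}{2}\Bigr)^{2} + \Bigl(\tfrac{i\,(x-y)}{2}\Bigr)^{2},
\qquad i^{2} = -1 .
```

Expanding verifies it: ((x+y)/2)^2 = (x^2 + 2xy + y^2)/4 and (i(x−y)/2)^2 = −(x^2 − 2xy + y^2)/4, so the sum is 4xy/4 = xy.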
Quasi-Newton methods on Grassmannians and multilinear approximations of tensors
, 2009
"... Abstract. In this paper we proposed quasiNewton and limited memory quasiNewton methods for objective functions defined on Grassmann manifolds or a product of Grassmann manifolds. Specifically we defined bfgs and lbfgs updates in local and global coordinates on Grassmann manifolds or a product of ..."
Abstract

Cited by 17 (3 self)
 Add to MetaCart
(Show Context)
Abstract. In this paper we propose quasi-Newton and limited-memory quasi-Newton methods for objective functions defined on Grassmann manifolds or a product of Grassmann manifolds. Specifically, we define BFGS and L-BFGS updates in local and global coordinates on Grassmann manifolds or a product of these. We prove that, when local coordinates are used, our BFGS updates on Grassmann manifolds share the same optimality property as the usual BFGS updates on Euclidean spaces. When applied to the best multilinear rank approximation problem for general and symmetric tensors, our approach yields fast, robust, and accurate algorithms that exploit the special Grassmannian structure of the respective problems, and which work on tensors of large dimensions and arbitrarily high order. Extensive numerical experiments are included to substantiate our claims.
Key words. Grassmann manifold, Grassmannian, product of Grassmannians, Grassmann quasi-Newton, Grassmann BFGS, Grassmann L-BFGS, multilinear rank, symmetric multilinear rank, tensor, symmetric tensor, approximations
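The target problem of this abstract, best multilinear rank-(r1, r2, r3) approximation, can be sketched with the classical higher-order orthogonal iteration (HOOI) — a simpler alternating scheme, not the paper's Grassmann BFGS method; all names below are illustrative:

```python
import numpy as np

def unfold(T, mode):
    # Mode-n matricization: mode-n fibers become rows.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    # Mode-n product T x_n M: contract M against the mode-n axis.
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hooi(T, ranks, iters=50):
    """Best multilinear rank-(r1, r2, r3) approximation of a 3-way
    tensor via higher-order orthogonal iteration (HOOI)."""
    # Initialize with a truncated HOSVD: leading left singular
    # vectors of each mode unfolding.
    U = [np.linalg.svd(unfold(T, n))[0][:, :r] for n, r in enumerate(ranks)]
    for _ in range(iters):
        for n in range(3):
            # Project onto the other two subspaces, then refit mode n.
            Y = T
            for m in range(3):
                if m != n:
                    Y = mode_mult(Y, U[m].T, m)
            U[n] = np.linalg.svd(unfold(Y, n))[0][:, :ranks[n]]
    # Core tensor G = T x_1 U1^T x_2 U2^T x_3 U3^T.
    G = T
    for n in range(3):
        G = mode_mult(G, U[n].T, n)
    return U, G
```

The approximation is reconstructed as G multiplied back by each U_n; the factors U_n live on Grassmannians exactly as in the abstract, since only their column spans matter.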
RANKS OF TENSORS AND A GENERALIZATION OF SECANT VARIETIES
"... Abstract. We investigate differences between Xrank and Xborder rank, focusing on the cases of tensors and partially symmetric tensors. As an aid to our study, and as an object of interest in its own right, we define notions of Xrank and border rank for a linear subspace. Results include determini ..."
Abstract

Cited by 17 (3 self)
 Add to MetaCart
(Show Context)
Abstract. We investigate differences between X-rank and X-border rank, focusing on the cases of tensors and partially symmetric tensors. As an aid to our study, and as an object of interest in its own right, we define notions of X-rank and border rank for a linear subspace. Results include determining and bounding the maximum X-rank of points in several cases of interest.
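A standard illustration of the rank/border-rank gap in the symmetric case (a well-known example, not specific to this paper): the binary cubic x^2 y has symmetric rank 3 but border rank 2, since

```latex
x^{2}y \;=\; \lim_{\varepsilon \to 0}
\frac{(x+\varepsilon y)^{3} - x^{3}}{3\varepsilon},
```

so x^2 y is a limit of rank-2 forms (sums of two cubes of linear forms) yet does not itself admit a decomposition into fewer than three such cubes.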
SUBTRACTING A BEST RANK-1 APPROXIMATION MAY INCREASE TENSOR RANK
"... Is has been shown that a best rankR approximation of an orderk tensor may not exist when R ≥ 2 and k ≥ 3. This poses a serious problem to data analysts using Candecomp/Parafac and related models. It has been observed numerically that, generally, this issue cannot be solved by consecutively computi ..."
Abstract

Cited by 17 (0 self)
 Add to MetaCart
(Show Context)
It has been shown that a best rank-R approximation of an order-k tensor may not exist when R ≥ 2 and k ≥ 3. This poses a serious problem to data analysts using Candecomp/Parafac and related models. It has been observed numerically that, generally, this issue cannot be solved by consecutively computing and subtracting best rank-1 approximations. The reason is that subtracting a best rank-1 approximation generally does not decrease tensor rank. In this paper, we provide a mathematical treatment of this property for real-valued 2 × 2 × 2 tensors, with symmetric tensors as a special case. Regardless of the symmetry, we show that for generic 2 × 2 × 2 tensors (which have rank 2 or 3), subtracting a best rank-1 approximation results in a tensor that has rank 3 and lies on the boundary between the rank-2 and rank-3 sets. Hence, for a typical tensor of rank 2, subtracting a best rank-1 approximation increases the tensor rank.
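The deflation step the abstract analyzes can be sketched numerically: compute a best rank-1 approximation of a 2 × 2 × 2 tensor by alternating (higher-order power) iteration and subtract it. This is a generic illustration of the operation, not the paper's proof; the iteration scheme and names are illustrative.

```python
import numpy as np

def best_rank1(T, iters=500, seed=0):
    """Rank-1 fit  sigma * a (x) b (x) c  of a 2x2x2 tensor via
    alternating (higher-order power) iteration."""
    rng = np.random.default_rng(seed)
    a, b, c = (v / np.linalg.norm(v) for v in rng.standard_normal((3, 2)))
    for _ in range(iters):
        # Each update is the exact minimizer with the other two
        # unit vectors held fixed.
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    sigma = np.einsum('ijk,i,j,k->', T, a, b, c)
    return sigma, a, b, c

T = np.random.default_rng(42).standard_normal((2, 2, 2))
sigma, a, b, c = best_rank1(T)
residual = T - sigma * np.einsum('i,j,k->ijk', a, b, c)
# The paper's point: for a generic rank-2 tensor T, this residual
# has rank 3, so deflation does not reduce tensor rank.
```

Note that alternating iteration finds a stationary rank-1 fit; for generic small tensors it converges to the best one, but that is not guaranteed in general.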
Nonnegative tensor factorization, completely positive tensors and a hierarchical elimination algorithm
, 2013
"... ar ..."
(Show Context)
Multiarray Signal Processing: Tensor decomposition meets compressed sensing
, 2009
"... We discuss how recently discovered techniques and tools from compressed sensing can be used in tensor decompositions, with a view towards modeling signals from multiple arrays of multiple sensors. We show that with appropriate bounds on coherence, one could always guarantee the existence and uniquen ..."
Abstract

Cited by 16 (4 self)
 Add to MetaCart
(Show Context)
We discuss how recently discovered techniques and tools from compressed sensing can be used in tensor decompositions, with a view towards modeling signals from multiple arrays of multiple sensors. We show that with appropriate bounds on coherence, one can always guarantee the existence and uniqueness of a best rank-r approximation of a tensor. In particular, we obtain a computationally feasible variant of Kruskal's uniqueness condition, with coherence as a proxy for k-rank. We treat sparsest-recovery and lowest-rank recovery problems in a uniform fashion by considering Schatten and nuclear norms of tensors of arbitrary order and dictionaries that comprise a continuum of uncountably many atoms.
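The two quantities the abstract relates, coherence and k-rank (Kruskal rank), can be sketched with their standard definitions; the exact coherence bounds of the paper are not reproduced here, only the classical relationship:

```python
import numpy as np
from itertools import combinations

def coherence(A):
    # Mutual coherence: largest |inner product| between distinct
    # normalized columns ("atoms") of the dictionary A.
    An = A / np.linalg.norm(A, axis=0)
    G = np.abs(An.T @ An)
    np.fill_diagonal(G, 0.0)
    return G.max()

def krank(A):
    # Kruskal rank: largest k such that EVERY set of k columns is
    # linearly independent (brute force; fine for small examples).
    n = A.shape[1]
    for k in range(n, 0, -1):
        if all(np.linalg.matrix_rank(A[:, list(cols)]) == k
               for cols in combinations(range(n), k)):
            return k
    return 0
```

Kruskal's classical condition certifies uniqueness of a rank-r CP decomposition with factor matrices A, B, C when krank(A) + krank(B) + krank(C) ≥ 2r + 2. Since computing k-rank is combinatorial, coherence serves as a cheap proxy: for unit-norm columns, krank(A) ≥ 1/coherence(A) (the classical spark bound).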
ON THE TENSOR SVD AND OPTIMAL LOW RANK ORTHOGONAL APPROXIMATIONS OF TENSORS
"... Abstract. It is known that a high order tensor does not necessarily have an optimal low rank approximation, and that a tensor might not be orthogonally decomposable (i.e., admit a tensor SVD). We provide several sufficient conditions which lead to the failure of the tensor SVD, and characterize the ..."
Abstract

Cited by 13 (0 self)
 Add to MetaCart
(Show Context)
Abstract. It is known that a high-order tensor does not necessarily have an optimal low rank approximation, and that a tensor might not be orthogonally decomposable (i.e., admit a tensor SVD). We provide several sufficient conditions which lead to the failure of the tensor SVD, and characterize the existence of the tensor SVD with respect to the Higher Order SVD (HOSVD) of a tensor. In the face of these difficulties in generalizing standard results known in the matrix case to tensors, we consider low rank orthogonal approximations of tensors. The existence of an optimal approximation is theoretically guaranteed under certain conditions, and this optimal approximation yields a tensor decomposition where the diagonal of the core is maximized. We present an algorithm to compute this approximation and analyze its convergence behavior.
Key words. multilinear algebra, singular value decomposition, tensor decomposition, low rank approximation
AMS subject classifications. 15A69, 15A18
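The HOSVD that the abstract uses as its reference decomposition is straightforward to compute: one SVD per mode unfolding, with the core obtained by multiplying the transposed factors back in. A minimal sketch (illustrative names, 3-way case only):

```python
import numpy as np

def hosvd(T):
    """Higher-Order SVD of a 3-way tensor: U_n are the left singular
    vectors of the mode-n unfolding; core S = T x_1 U1^T x_2 U2^T x_3 U3^T."""
    U, S = [], T
    for n in range(3):
        # Mode-n unfolding and its SVD.
        Tn = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)
        Un = np.linalg.svd(Tn, full_matrices=False)[0]
        U.append(Un)
        # Multiply U_n^T into mode n of the running core.
        S = np.moveaxis(np.tensordot(Un.T, np.moveaxis(S, n, 0), axes=1), 0, n)
    return U, S
```

Unlike the matrix SVD, the core S is in general dense rather than diagonal; it is only "all-orthogonal" (slices along each mode are mutually orthogonal), which is exactly the gap between HOSVD and a true tensor SVD that the abstract discusses.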
Stratification of the fourth secant variety of Veronese variety via the symmetric rank
 Adv. Pure Appl. Math
"... ar ..."
(Show Context)
Tensors: a Brief Introduction
, 2014
"... Tensor decompositions are at the core of many Blind Source Separation (BSS) algorithms, either explicitly or implicitly. In particular, the Canonical Polyadic (CP) tensor ..."
Abstract

Cited by 10 (3 self)
 Add to MetaCart
Tensor decompositions are at the core of many Blind Source Separation (BSS) algorithms, either explicitly or implicitly. In particular, the Canonical Polyadic (CP) tensor …