Results 1–10 of 27
Statistical Performance of Convex Tensor Decomposition
Cited by 36 (5 self)
We analyze the statistical performance of a recently proposed convex tensor decomposition algorithm. Conventionally, tensor decomposition has been formulated as a non-convex optimization problem, which has hindered the analysis of its performance. We show, under some conditions, that the mean squared error of the convex method scales linearly with a quantity we call the normalized rank of the true tensor. The current analysis naturally extends the analysis of convex low-rank matrix estimation to tensors. Furthermore, we show through numerical experiments that our theory can precisely predict the scaling behaviour in practice.
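The convex formulation analyzed in this line of work is typically built on the overlapped trace norm: the sum of the nuclear norms of the tensor's mode-k unfoldings. A minimal NumPy sketch of that regularizer (function names are my own, not the paper's):

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: bring axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def overlapped_trace_norm(T):
    """Sum of nuclear norms of all mode-k unfoldings -- the convex
    surrogate for low multilinear rank used in overlapped-type methods."""
    return sum(np.linalg.norm(unfold(T, k), 'nuc') for k in range(T.ndim))

# For a rank-1 tensor a (x) b (x) c, every unfolding has rank 1, so the
# regularizer equals 3 * ||a|| * ||b|| * ||c||.
rng = np.random.default_rng(0)
a, b, c = rng.standard_normal(4), rng.standard_normal(5), rng.standard_normal(6)
T = np.einsum('i,j,k->ijk', a, b, c)
```

Penalizing this norm in a least-squares fit is what makes the decomposition a convex program, which in turn is what enables the mean-squared-error analysis the abstract describes.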
Square deal: Lower bounds and improved relaxations for tensor recovery
CoRR
Cited by 22 (0 self)
Recovering a low-rank tensor from incomplete information is a recurring problem in signal processing and machine learning. The most popular convex relaxation of this problem minimizes the sum of the nuclear norms of the unfoldings of the tensor. We show that this approach can be substantially suboptimal: reliably recovering a K-way tensor of length n and Tucker rank r from Gaussian measurements requires Ω(rn^{K−1}) observations. In contrast, a certain (intractable) non-convex formulation needs only O(r^K + nrK) observations. We introduce a very simple, new convex relaxation which partially bridges this gap. Our new formulation succeeds with O(r^{⌊K/2⌋} n^{⌈K/2⌉}) observations. While these results pertain to Gaussian measurements, simulations strongly suggest that the new norm also outperforms the sum of nuclear norms for tensor completion from a random subset of entries. Our lower bound for the sum-of-nuclear-norms model follows from a new result on recovering signals with multiple sparse structures (e.g. sparse, low-rank), which, perhaps surprisingly, demonstrates the significant suboptimality of the commonly used recovery approach of minimizing the sum of individual sparsity-inducing norms (e.g. ℓ1, nuclear norm). Our new formulation for low-rank tensor recovery, however, opens the possibility of reducing the sample complexity by exploiting several structures jointly.
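The "square" relaxation the abstract refers to replaces the K individual mode unfoldings with a single balanced matricization: group roughly half the modes as rows and the rest as columns, then take one nuclear norm. A sketch under that reading (names are mine):

```python
import numpy as np

def square_matricize(T):
    """Group the first floor(K/2) modes as rows and the remaining
    ceil(K/2) modes as columns -- a balanced, 'as square as possible'
    matricization of a K-way tensor."""
    half = T.ndim // 2
    rows = int(np.prod(T.shape[:half]))
    return T.reshape(rows, -1)

def square_norm(T):
    """Nuclear norm of the balanced matricization."""
    return np.linalg.norm(square_matricize(T), 'nuc')

# 4-way example: a 3x4x5x6 tensor matricizes to 12 x 30, far more balanced
# than the 3 x 360 mode-1 unfolding used by the sum-of-nuclear-norms model.
rng = np.random.default_rng(0)
T = rng.standard_normal((3, 4, 5, 6))
```

The balanced shape is what drives the improved O(r^{⌊K/2⌋} n^{⌈K/2⌉}) sample complexity: nuclear-norm recovery of an m×m-ish matrix needs on the order of (rank)·m measurements, and squaring the matricization minimizes the longer side.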
Convex tensor decomposition via structured Schatten norm regularization
In Advances in NIPS 26, 2013
Cited by 15 (2 self)
We study a new class of structured Schatten norms for tensors that includes two recently proposed norms (“overlapped” and “latent”) for convex-optimization-based tensor decomposition. We analyze the performance of the “latent” approach to tensor decomposition, which was empirically found to perform better than the “overlapped” approach in some settings. We show theoretically that this is indeed the case. In particular, when the unknown true tensor is low-rank in a specific unknown mode, this approach performs as well as knowing the mode with the smallest rank. Along the way, we show a novel duality result for structured Schatten norms, which is also interesting in the general context of structured sparsity. We confirm through numerical simulations that our theory can precisely predict the scaling behaviour of the mean squared error.
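One concrete way to see the duality result mentioned above: the dual of the latent trace norm evaluates to the maximum, over modes, of the spectral norm of the mode-k unfolding. A small sketch of that dual norm (my own function names, assuming this form of the result):

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: bring axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def latent_norm_dual(T):
    """Dual of the latent trace norm: the maximum spectral norm over all
    mode-k unfoldings."""
    return max(np.linalg.norm(unfold(T, k), 2) for k in range(T.ndim))

# For a rank-1 tensor a (x) b (x) c, every unfolding has the single
# singular value ||a||*||b||*||c||, so the max is that same product.
rng = np.random.default_rng(0)
a, b, c = rng.standard_normal(3), rng.standard_normal(4), rng.standard_normal(5)
T = np.einsum('i,j,k->ijk', a, b, c)
```

The max-over-modes structure of the dual is what lets the latent approach adapt to the single (unknown) mode in which the true tensor is low-rank.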
Rank regularization and Bayesian inference for tensor completion and extrapolation. arXiv preprint arXiv:1301.7619, 2013
Cited by 6 (4 self)
… factors capturing the tensor's rank is proposed in this paper, as the key enabler for completion of three-way data arrays with missing entries. Set in a Bayesian framework, the tensor completion method incorporates prior information to enhance its smoothing and prediction capabilities. This probabilistic approach can naturally accommodate general models for the data distribution, lending itself to various fitting criteria that yield optimum estimates in the maximum-a-posteriori sense. In particular, two algorithms are devised for Gaussian- and Poisson-distributed data, which minimize the rank-regularized least-squares error and Kullback-Leibler divergence, respectively. The proposed technique is able to recover the “ground-truth” tensor rank when tested on synthetic data, and to complete brain imaging and yeast gene expression datasets with 50% and 15% of missing entries respectively, resulting in recovery errors of … and …. Index Terms—Bayesian inference, low-rank, missing data, Poisson process, tensor.
A convergent gradient descent algorithm for rank minimization and semidefinite programming from random linear measurements. arXiv preprint arXiv:1506.06081, 2015
Cited by 4 (0 self)
We propose a simple, scalable, and fast gradient descent algorithm to optimize a non-convex objective for the rank minimization problem and a closely related family of semidefinite programs. With O(r³κ²n log n) random measurements of a positive semidefinite n×n matrix of rank r and condition number κ, our method is guaranteed to converge linearly to the global optimum.
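The style of algorithm described here factors the unknown PSD matrix as X = UUᵀ and runs plain gradient descent on the squared measurement residuals. A hedged NumPy sketch under noiseless Gaussian measurements (not the authors' code; for simplicity it initializes near the truth, where the paper uses a spectral initialization):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 8, 2, 200                       # dimension, rank, #measurements

# Ground truth: a rank-r PSD matrix X* = U* U*^T.
U_star = rng.standard_normal((n, r))
X_star = U_star @ U_star.T

# Noiseless random linear measurements y_i = <A_i, X*>.
A = rng.standard_normal((m, n, n))
y = np.einsum('mij,ij->m', A, X_star)

# Gradient descent on f(U) = (1/2m) * sum_i (<A_i, U U^T> - y_i)^2.
U = U_star + 0.1 * rng.standard_normal((n, r))   # stand-in for spectral init
eta = 0.25 / np.linalg.norm(X_star, 2)           # step ~ 1 / top eigenvalue
for _ in range(500):
    resid = np.einsum('mij,ij->m', A, U @ U.T) - y
    # d/dU of <A_i, U U^T> is (A_i + A_i^T) U, hence the symmetrization.
    grad = np.einsum('m,mij->ij', resid, A + A.transpose(0, 2, 1)) @ U / m
    U -= eta * grad

rel_err = np.linalg.norm(U @ U.T - X_star) / np.linalg.norm(X_star)
```

Working in the n×r factor U keeps each iteration cheap and the iterate automatically rank-r and PSD; the price is a non-convex objective, which is exactly why the linear-convergence guarantee quoted in the abstract is notable.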
Tensor-based formulation and nuclear norm regularization for multi-energy computed tomography, 2014
Tensor completion based on nuclear norm minimization for 5D seismic data reconstruction
Multitask learning meets tensor factorization: task imputation via convex optimization
Cited by 1 (0 self)
We study a multi-task learning problem in which each task is parametrized by a weight vector and indexed by a pair of indices, which can be, e.g., (consumer, time). The weight vectors can be collected into a tensor, and the (multilinear) rank of the tensor controls the amount of information sharing among tasks. Two types of convex relaxations have recently been proposed for the tensor multilinear rank. However, we argue that neither of them is optimal in the context of multi-task learning, in which the dimensions or multilinear rank are typically heterogeneous. We propose a new norm, which we call the scaled latent trace norm, and analyze the excess risk of all three norms. The results apply to various settings including matrix and tensor completion, multi-task learning, and multilinear multi-task learning. Both the theory and experiments support the advantage of the new norm when the tensor is not equal-sized and we do not know a priori which mode is low-rank.