Results 1 – 10 of 217
Generalized Low-Rank Approximations, 2003
Abstract: "We study the frequent problem of approximating a target matrix with a matrix of lower rank. We provide a simple and efficient (EM) algorithm for solving weighted low-rank approximation problems, which, unlike simple matrix factorization problems, do not admit a closed-form solution in general. …"
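The EM scheme this abstract alludes to alternates between imputing the target under the weights and projecting back onto rank-k matrices. A minimal numpy sketch, assuming weights in [0, 1] (the function name and iteration count are illustrative, not from the paper):

```python
import numpy as np

def weighted_lra_em(A, W, k, iters=50):
    """EM-style weighted low-rank approximation (sketch).

    E-step: impute the target with the current estimate where weights are low.
    M-step: take the best rank-k approximation (truncated SVD) of the imputed matrix.
    Assumes entries of W lie in [0, 1]."""
    X = np.zeros_like(A)
    for _ in range(iters):
        filled = W * A + (1.0 - W) * X           # E-step: convex combination
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        X = (U[:, :k] * s[:k]) @ Vt[:k]          # M-step: truncated SVD
    return X

# With all-ones weights a single iteration reduces to the plain rank-k SVD.
A = np.arange(12.0).reshape(3, 4)
X = weighted_lra_em(A, np.ones_like(A), k=1, iters=1)
```

With non-uniform weights the truncated SVD of the weighted matrix is no longer optimal, which is why the iteration (rather than a closed form) is needed.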
Speeding up Distributed Low-Rank Matrix Factorization
Abstract: "Distributed solutions for low-rank matrix factorization (LMF), an important problem in recommender systems, have recently been studied extensively in order to better handle exploding data volumes in the Big Data context. Stochastic gradient descent is a general technique to solve a …"
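As a sketch of the stochastic gradient descent the abstract refers to, here is plain single-machine SGD for the factorization objective (sum of squared errors on observed entries); distributed variants partition the ratings into disjoint blocks so workers never update the same factor rows concurrently. All names and hyperparameters below are illustrative:

```python
import numpy as np

def sgd_mf(ratings, n_users, n_items, k=2, lr=0.02, reg=0.0, epochs=1500, seed=0):
    """Plain SGD for low-rank matrix factorization (sketch).

    ratings: list of (user, item, value) triples, the observed entries.
    Each step updates one user row and one item row from one rating's error."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(epochs):
        for i, j, r in ratings:
            e = r - U[i] @ V[j]                   # prediction error for this entry
            U[i] += lr * (e * V[j] - reg * U[i])  # gradient step on the user factor
            V[j] += lr * (e * U[i] - reg * V[j])  # gradient step on the item factor
    return U, V

# Fit a small fully observed rank-1 matrix r_ij = (i+1)(j+1).
ratings = [(i, j, float((i + 1) * (j + 1))) for i in range(3) for j in range(3)]
U, V = sgd_mf(ratings, n_users=3, n_items=3, k=2)
```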
Reconstruction of a low-rank matrix in the presence of Gaussian noise
J. Multivariate Anal., 2013
Cited by 10 (0 self)
Abstract: "In this paper we study the problem of reconstruction of a low-rank matrix observed with additive Gaussian noise. First we show that under mild assumptions (about the prior distribution of the signal matrix) we can restrict our attention to reconstruction methods that are based on the singular value …"
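A minimal illustration of such SVD-based reconstruction, assuming the signal rank is known (the paper's actual estimators choose the shrinkage of singular values more carefully):

```python
import numpy as np

def svd_denoise(Y, keep):
    """Reconstruct a low-rank signal by truncating the SVD of the noisy matrix (sketch).
    `keep` is the assumed signal rank; a practical rule would instead threshold
    singular values against the estimated noise level."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s[keep:] = 0.0                       # discard noise-dominated directions
    return (U * s) @ Vt

rng = np.random.default_rng(1)
X = np.outer(rng.standard_normal(50), rng.standard_normal(40))  # rank-1 signal
Y = X + 0.1 * rng.standard_normal(X.shape)                      # additive Gaussian noise
X_hat = svd_denoise(Y, keep=1)
```

Truncation helps because the noise spreads its energy across all singular directions while the signal concentrates in the top few.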
Nuclear norm penalization and optimal rates for noisy low-rank matrix completion
Annals of Statistics, 2011
Cited by 107 (7 self)
Abstract: "This paper deals with the trace regression model where n entries or linear combinations of entries of an unknown m1 × m2 matrix A0 corrupted by noise are observed. We propose a new nuclear norm penalized estimator of A0 and establish a general sharp oracle inequality for this estimator for …"
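The computational core of nuclear norm penalization is soft-thresholding of singular values, the proximal operator of the scaled nuclear norm; a sketch of that step (not the paper's estimator itself):

```python
import numpy as np

def svt(Y, lam):
    """Singular value soft-thresholding (sketch): the proximal operator of
    lam * nuclear norm, the workhorse step inside nuclear norm penalized
    estimation algorithms."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s = np.maximum(s - lam, 0.0)   # shrink every singular value toward zero
    return (U * s) @ Vt
```

Shrinking singular values (rather than matrix entries) is what makes the penalty promote low rank: small singular values are set exactly to zero.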
Diagonal and Low-Rank Decompositions and Fitting Ellipsoids to Random Points
Abstract: "Identifying a subspace containing signals of interest in additive noise is a basic system identification problem. Under natural assumptions, this problem is known as the Frisch scheme and can be cast as decomposing an n × n positive definite matrix as the sum of an unknown diagonal matrix (the noise covariance) and an unknown low-rank matrix (the signal covariance). Our focus in this paper is a natural class of random instances, where the low-rank matrix has a uniformly distributed random column space. In this setting we analyze the behavior of a well-known convex optimization …"
Robust matrix factorization with unknown noise
In ICCV, 2013
Cited by 2 (1 self)
Abstract: "Many problems in computer vision can be posed as recovering a low-dimensional subspace from high-dimensional visual data. Factorization approaches to low-rank subspace estimation minimize a loss function between an observed measurement matrix and a bilinear factorization. The most popular loss functions include the L2 and L1 losses. L2 is optimal for Gaussian noise, while L1 is for Laplacian distributed noise. However, real data is often corrupted by an unknown noise distribution, which is unlikely to be purely Gaussian or Laplacian. To address this problem, this paper proposes a low-rank matrix …"
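The L2-versus-L1 contrast the abstract draws is visible already in the scalar case: the mean is the L2-optimal constant fit and the median the L1-optimal one, so a single gross outlier drags the mean far more than the median. A toy illustration (the data values are made up):

```python
import numpy as np

# Scalar analogue of the L2 vs L1 loss trade-off (sketch): one outlier at 100
# among values near 1 wrecks the L2 fit but barely moves the L1 fit.
data = np.array([0.9, 1.0, 1.0, 1.1, 100.0])
l2_fit = data.mean()       # minimizes sum of squared residuals
l1_fit = np.median(data)   # minimizes sum of absolute residuals
```

The same effect carries over to matrix factorization, which is why L1-type losses are preferred when the data contain gross corruptions.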
Coil sensitivity encoding for fast MRI
In Proceedings of the ISMRM 6th Annual Meeting, 1998
Cited by 193 (3 self)
Abstract: "New theoretical and practical concepts are presented for considerably enhancing the performance of magnetic resonance imaging (MRI) by means of arrays of multiple receiver coils. Sensitivity encoding (SENSE) is based on the fact that receiver sensitivity generally has an encoding effect complementary …"
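At the heart of SENSE reconstruction is solving, per aliased pixel, a small linear system that separates the superimposed voxel values using the known coil sensitivities. A toy sketch assuming unit noise covariance and noiseless data (dimensions and names are illustrative; the paper weights the solve by the inverse noise matrix):

```python
import numpy as np

# Toy SENSE unfolding (sketch): with reduction factor R, each aliased pixel is a
# superposition of R true voxel values, weighted per coil by its sensitivity.
# Stacking the coil equations gives S x = a, solved here by least squares.
rng = np.random.default_rng(0)
x_true = rng.standard_normal(2)                 # R = 2 superimposed voxel values
S = rng.standard_normal((4, 2))                 # sensitivities: 4 coils x 2 voxels
a = S @ x_true                                  # aliased coil signals (noiseless)
x_hat, *_ = np.linalg.lstsq(S, a, rcond=None)   # unfold the R voxel values
```

With more coils than superimposed voxels the system is overdetermined, which is what lets the scan be undersampled in the first place.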
A Bayesian Approach for Noisy Matrix Completion: Optimal Rate under General Sampling Distribution, 2014
Abstract: "Bayesian methods for low-rank matrix completion with noise have been shown to be very efficient computationally [3, 17, 18, 23, 26]. While the behaviour of penalized minimization methods is well understood both from the theoretical and computational points of view (see [7, 9, 16, 22] among others) …"
Low Rank Matrix Completion with Exponential Family Noise
JMLR: Workshop and Conference Proceedings vol 40:1–20, 2015
Abstract: "The matrix completion problem consists in reconstructing a matrix from a sample of entries, possibly observed with noise. A popular class of estimators, known as nuclear norm penalized estimators, is based on minimizing the sum of a data fitting term and a nuclear norm penalization. Here, we investigate the case where the noise distribution belongs to the exponential family and is sub-exponential. Our framework allows for a general sampling scheme. We first consider an estimator defined as the minimizer of the sum of a log-likelihood term and a nuclear norm penalization and prove an upper bound …"