Error Bounds for Maximum Likelihood Matrix Completion Under Sparse Factor Models
"... Abstract—This paper examines a general class of matrix completion tasks where entry wise observations of the matrix are subject to random noise or corruption. Our particular focus here is on settings where the matrix to be estimated follows a sparse factor model, in the sense that it may be expresse ..."
Abstract
- Add to MetaCart
Abstract—This paper examines a general class of matrix completion tasks where entrywise observations of the matrix are subject to random noise or corruption. Our particular focus here is on settings where the matrix to be estimated follows a sparse factor model, in the sense that it may be expressed as the product of two matrices, one of which is sparse. We analyze the performance of a sparsity-penalized maximum likelihood approach to such problems to provide a general-purpose estimation result applicable to any of a number of noise/corruption models, and describe its implications in two stylized scenarios: one characterized by additive Gaussian noise, and the other by highly quantized one-bit observations. We also provide some supporting empirical evidence to validate our theoretical claims in the Gaussian setting.
Index Terms—Complexity regularization, matrix completion, maximum likelihood, sparse estimation
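The abstract states the model and objective but no algorithm, so here is a minimal illustrative sketch of the penalized maximum likelihood idea in the Gaussian-noise scenario, where the negative log-likelihood over observed entries reduces to a squared error. The function name, step size, penalty weight, and the alternating proximal-gradient scheme are assumptions for illustration, not the estimator analyzed in the paper.

```python
import numpy as np

def sparse_factor_completion(Y, mask, rank, lam=0.1, step=1e-2, iters=500):
    """Illustrative sketch (not the paper's estimator): fit X = D @ A with A
    sparse from the observed entries of Y. Under additive Gaussian noise the
    penalized negative log-likelihood reduces to
        0.5 * ||mask * (Y - D @ A)||_F^2 + lam * ||A||_1,
    minimized here by alternating gradient steps on D and proximal
    (soft-thresholding) steps on the sparse factor A."""
    m, n = Y.shape
    rng = np.random.default_rng(0)
    D = rng.standard_normal((m, rank))
    A = 0.1 * rng.standard_normal((rank, n))
    for _ in range(iters):
        R = mask * (D @ A - Y)                  # residual on observed entries
        D = D - step * (R @ A.T)                # gradient step on dense factor
        D = D / np.maximum(np.linalg.norm(D, axis=0), 1e-12)  # unit-norm columns fix scale ambiguity
        R = mask * (D @ A - Y)                  # refresh residual after D update
        A = A - step * (D.T @ R)                # gradient step on sparse factor
        A = np.sign(A) * np.maximum(np.abs(A) - step * lam, 0.0)  # soft-threshold
    return D, A

# Toy usage: a 50x60 matrix with a rank-5 sparse-factor structure, half observed.
rng = np.random.default_rng(1)
D0 = rng.standard_normal((50, 5))
A0 = rng.standard_normal((5, 60)) * (rng.random((5, 60)) < 0.3)
mask = (rng.random((50, 60)) < 0.5).astype(float)
Y = mask * (D0 @ A0 + 0.01 * rng.standard_normal((50, 60)))
D_hat, A_hat = sparse_factor_completion(Y, mask, rank=5)
```

The unit-norm constraint on the dense factor's columns is a standard device in factor models; without it the pair (cD, A/c) has the same fit for any c, and the l1 penalty would push all scale into D.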
Structured Dictionary Learning for Classification
"... Abstract—Sparsity driven signal processing has gained tremen-dous popularity in the last decade. At its core, the assumption is that the signal of interest is sparse with respect to either a fixed transformation or a signal dependent dictionary. To better cap-ture the data characteristics, various d ..."
Abstract
- Add to MetaCart
(Show Context)
Abstract—Sparsity-driven signal processing has gained tremendous popularity in the last decade. At its core is the assumption that the signal of interest is sparse with respect to either a fixed transformation or a signal-dependent dictionary. To better capture the data characteristics, various dictionary learning methods have been proposed for both reconstruction and classification tasks. For classification in particular, most approaches proposed so far have focused on designing explicit constraints on the sparse code to improve classification accuracy, while simply adopting the l0-norm or l1-norm for sparsity regularization. Motivated by the success of structured sparsity in the area of compressed sensing, we propose a structured dictionary learning framework (StructDL) that incorporates structure information at both the group and task levels in the learning process. Its benefits are two-fold: (i) the label consistency between dictionary atoms and training data is implicitly enforced; and (ii) the classification performance is more robust than that of other techniques when the dictionary is small or training data are limited. Using the subspace model, we derive conditions under which StructDL guarantees performance, and show theoretically that StructDL is superior to l0-norm or l1-norm regularized dictionary learning for classification. Extensive experiments have been performed on both synthetic simulations and real-world applications, such as face recognition and object classification, to demonstrate the validity of the proposed DL framework.
Index Terms—dictionary learning, structured sparsity, sparse representation, compressed sensing, multitask
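The abstract does not spell out StructDL's optimization, so the following is a generic sketch of the group-structured sparse coding step that such a framework builds on: dictionary atoms are grouped (e.g., one group per class), and a group-lasso penalty encourages each signal to be represented by whole groups rather than scattered atoms, which is one way label consistency between atoms and data can be enforced implicitly. The function names, the ISTA solver, and the grouping are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def group_soft_threshold(A, groups, tau):
    """Proximal operator of the group-lasso penalty
    tau * sum over groups g and signals j of ||A[g, j]||_2:
    shrinks each per-signal group sub-vector, zeroing whole groups
    whose energy falls below tau."""
    A = A.copy()
    for g in groups:
        norms = np.linalg.norm(A[g, :], axis=0, keepdims=True)
        A[g, :] *= np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return A

def group_sparse_coding(X, D, groups, tau=0.1, iters=200):
    """Solve min_A 0.5 * ||X - D @ A||_F^2 + group-lasso(A) by proximal
    gradient descent (ISTA) with the block soft-threshold above."""
    L = np.linalg.norm(D, 2) ** 2             # Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(iters):
        grad = D.T @ (D @ A - X)              # gradient of the data-fit term
        A = group_soft_threshold(A - grad / L, groups, tau / L)
    return A

# Toy usage: a 10-atom dictionary split into two class groups of 5 atoms each.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 10))
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms, as is standard
groups = [np.arange(0, 5), np.arange(5, 10)]
X = D[:, :5] @ rng.standard_normal((5, 3))    # signals living in group 0's span
A = group_sparse_coding(X, D, groups, tau=0.5)
# With an adequate tau, rows 5-9 of A (the second group) are driven to zero,
# so the active group itself identifies the class.
```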