Results 1–2 of 2
Learning Compressible Models
Abstract

Cited by 5 (2 self)
Regularization is a principled way to control model complexity, prevent overfitting, and incorporate ancillary information into the learning process. As a convex relaxation of the ℓ0 norm, ℓ1-norm regularization is popular for learning in high-dimensional spaces, where a fundamental assumption is the sparsity of model parameters. However, model sparsity can be restrictive and not necessarily the most appropriate assumption in many problem domains. In this paper, we relax the sparsity assumption to compressibility and propose learning compressible models: a compression operation can be included into ℓ1-regularization, so that model parameters are compressed before being penalized. We concentrate on the design of different model compression transforms, which can encode various assumptions on model parameters, e.g., local smoothness, frequency-domain energy compaction, and correlation. Use of a compression transform inside the ℓ1 penalty term provides an opportunity to include information from domain knowledge, coding theories, unlabeled data, etc. We conduct extensive experiments on brain-computer interface, handwritten character recognition, and text classification. Empirical results show significant improvements
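The core idea above, penalizing the ℓ1 norm of *transformed* parameters rather than the parameters themselves, can be sketched in a few lines. This is a minimal illustration (function names and the choice of a first-difference transform are my own, not from the paper): a smooth-but-dense parameter vector becomes cheap under the penalty once a local-smoothness transform is applied, while a sparse-but-spiky one does not.

```python
import numpy as np

def difference_transform(theta):
    # First-order difference operator: encodes a local-smoothness
    # assumption, since adjacent coefficients are penalized for differing.
    return np.diff(theta)

def compressed_l1_penalty(theta, transform, lam=0.1):
    # l1 penalty applied to the *compressed* (transformed) parameters,
    # as described in the abstract: compress before penalizing.
    return lam * np.abs(transform(theta)).sum()

# A dense but smooth parameter vector is cheap under the difference
# transform, even though it is far from sparse in the original basis.
smooth = np.linspace(0.0, 1.0, 11)       # dense but smooth
spiky = np.zeros(11)
spiky[5] = 1.0                           # sparse but not smooth

print(compressed_l1_penalty(smooth, difference_transform))  # ~0.1
print(compressed_l1_penalty(spiky, difference_transform))   # ~0.2
```

Swapping `difference_transform` for, say, a DCT would instead encode the frequency-domain energy-compaction assumption the abstract mentions.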
Admixture of Poisson MRFs: A Topic Model with Word Dependencies
Abstract

Cited by 3 (1 self)
This paper introduces a new topic model based on an admixture of Poisson Markov Random Fields (APM), which can model dependencies between words, as opposed to previous independent topic models such as PLSA (Hofmann, 1999), LDA (Blei et al., 2003), or SAM (Reisinger et al., 2010). We propose a class of admixture models that generalizes previous topic models and show an equivalence between the conditional distribution of LDA and independent Poissons, suggesting that APM subsumes the modeling power of LDA. We present a tractable method for estimating the parameters of an APM based on the pseudo log-likelihood and demonstrate the benefits of APM over previous models through preliminary qualitative and quantitative experiments.
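The pseudo log-likelihood mentioned above replaces the intractable joint likelihood of an MRF with a sum of node-conditional log-probabilities; for a Poisson MRF, each word count's conditional given the other counts is Poisson with a log-rate that is linear in its neighbors. A minimal sketch of that objective (variable names are illustrative, not the paper's notation):

```python
import numpy as np
from scipy.special import gammaln

def poisson_mrf_pseudo_loglik(x, theta, Theta):
    """Pseudo log-likelihood of a count vector x under a Poisson MRF.

    Each node s has a conditional Poisson distribution with log-rate
    eta_s = theta_s + sum_{t != s} Theta[s, t] * x_t, and the pseudo
    log-likelihood sums the resulting conditional log-pmfs over nodes.
    """
    # Exclude self-edges from the linear term.
    eta = theta + Theta @ x - np.diag(Theta) * x
    # Poisson log-pmf: x*eta - exp(eta) - log(x!)
    return np.sum(x * eta - np.exp(eta) - gammaln(x + 1))
```

When the edge matrix `Theta` is zero, this reduces to a sum of independent Poisson log-likelihoods, matching the independence assumption of the earlier topic models the abstract contrasts with.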