A Unifying Review of Linear Gaussian Models (1999)

Download Links

  • [www.gatsby.ucl.ac.uk]
  • [www.cs.nyu.edu]
  • [www.cs.toronto.edu]
  • [www.iro.umontreal.ca]
  • [psych.stanford.edu]
  • [www.ee.nthu.edu.tw]
  • [mi.eng.cam.ac.uk]
  • [cs.nyu.edu]
  • [authors.library.caltech.edu]
  • [mlg.eng.cam.ac.uk]
  • [ftp.cs.toronto.edu]
  • [www.stat.columbia.edu]

  • Other Repositories/Bibliography

  • DBLP
by Sam Roweis, Zoubin Ghahramani
Citations: 350 (17 self)

BibTeX

@MISC{Roweis99aunifying,
    author = {Sam Roweis and Zoubin Ghahramani},
    title = {A Unifying Review of Linear Gaussian Models},
    year = {1999}
}


Abstract

Factor analysis, principal component analysis, mixtures of Gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observations and derivations made by many previous authors and introducing a new way of linking discrete and continuous state models using a simple nonlinearity. Through the use of other nonlinearities, we show how independent component analysis is also a variation of the same basic generative model. We show that factor analysis and mixtures of Gaussians can be implemented in autoencoder neural networks and learned using squared error plus the same regularization term. We introduce a new model for static data, known as sensible principal component analysis, as well as a novel concept of spatially adaptive observation noise. We also review some of the literature involving global and local mixtures of the basic models and provide pseudocode for inference and learning for all the basic models.
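
The abstract refers to a single basic generative model underlying all of these techniques and to pseudocode for inference and learning. As a rough illustration only, here is a minimal Python sketch of sampling from a linear Gaussian state-space model of that general form; the parameter names (A, C, Q, R), dimensions, and numerical values are illustrative assumptions following common state-space notation, not details taken from this page or the paper itself.

import numpy as np

def sample_lgm(A, C, Q, R, x0, T, rng=None):
    """Draw one trajectory from a basic linear Gaussian generative model:
        x[t+1] = A x[t] + w[t],  w[t] ~ N(0, Q)   (state dynamics)
        y[t]   = C x[t] + v[t],  v[t] ~ N(0, R)   (observations)
    Static models such as factor analysis correspond to dropping the
    dynamics (A = 0), so each latent x[t] is an independent Gaussian draw;
    Kalman filter models keep a general A.  (Illustrative sketch only.)
    """
    rng = np.random.default_rng() if rng is None else rng
    k, p = A.shape[0], C.shape[0]          # latent and observed dimensions
    x = np.zeros((T, k))
    y = np.zeros((T, p))
    x[0] = x0
    for t in range(T):
        y[t] = C @ x[t] + rng.multivariate_normal(np.zeros(p), R)
        if t + 1 < T:
            x[t + 1] = A @ x[t] + rng.multivariate_normal(np.zeros(k), Q)
    return x, y

# Example with assumed values: a 2-D latent state observed in 3 dimensions.
A = 0.9 * np.eye(2)                        # slow, stable dynamics
C = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
Q = 0.1 * np.eye(2)
R = 0.2 * np.eye(3)
states, observations = sample_lgm(A, C, Q, R, x0=np.zeros(2), T=100)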

Keyphrases

unifying review, linear gaussian model, basic model, factor analysis, disparate observation, static data, independent component analysis, kalman filter model, single basic generative model, principal component analysis, adaptive observation noise, regularization term, simple nonlinearity, continuous state model, gaussian cluster, sensible principal component analysis, many previous author, hidden markov model, new way, local mixture, vector quantization, autoencoder neural network, novel concept, basic generative model, new model
