Results 1 - 10 of 78,895

The Nature of Statistical Learning Theory

by Vladimir N. Vapnik , 1999
"... Statistical learning theory was introduced in the late 1960’s. Until the 1990’s it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990’s new types of learning algorithms (called support vector machines) based on the deve ..."
Abstract - Cited by 13236 (32 self)

Maximum likelihood from incomplete data via the EM algorithm

by A. P. Dempster, N. M. Laird, D. B. Rubin - JOURNAL OF THE ROYAL STATISTICAL SOCIETY, SERIES B , 1977
"... A broadly applicable algorithm for computing maximum likelihood estimates from incomplete data is presented at various levels of generality. Theory showing the monotone behaviour of the likelihood and convergence of the algorithm is derived. Many examples are sketched, including missing value situat ..."
Abstract - Cited by 11972 (17 self) - Add to MetaCart
situations, applications to grouped, censored or truncated data, finite mixture models, variance component estimation, hyperparameter estimation, iteratively reweighted least squares and factor analysis.
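
As a concrete illustration of the algorithm and its monotone likelihood property, here is a minimal sketch for a two-component univariate Gaussian mixture; the code and all names are mine, not the paper's.

```python
import numpy as np

def em_two_gaussians(x, iters=50, seed=0):
    """EM for a two-component 1-D Gaussian mixture.

    Illustrative sketch of the algorithm analysed by Dempster, Laird
    and Rubin; variable names are mine, not the paper's.
    """
    rng = np.random.default_rng(seed)
    # Initialise mixing weight, means, and variances.
    w, mu, var = 0.5, rng.choice(x, 2), np.array([x.var(), x.var()])
    prev_ll = -np.inf
    for _ in range(iters):
        # E step: posterior responsibility of component 0 for each point.
        p0 = w * np.exp(-0.5 * (x - mu[0])**2 / var[0]) / np.sqrt(var[0])
        p1 = (1 - w) * np.exp(-0.5 * (x - mu[1])**2 / var[1]) / np.sqrt(var[1])
        r = p0 / (p0 + p1)
        # M step: re-estimate parameters from the responsibilities.
        w = r.mean()
        mu = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
        var = np.array([np.average((x - mu[0])**2, weights=r),
                        np.average((x - mu[1])**2, weights=1 - r)])
        # Observed-data log-likelihood at the parameters used in the E step.
        ll = np.log((p0 + p1) / np.sqrt(2 * np.pi)).sum()
        # The paper's monotonicity result: the likelihood never decreases.
        assert ll >= prev_ll - 1e-9
        prev_ll = ll
    return w, mu, var
```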

A Comparison of New and Old Algorithms for A Mixture Estimation Problem

by David Helmbold, Robert E. Schapire, Yoram Singer, Manfred K. Warmuth - Machine Learning , 1995
"... . We investigate the problem of estimating the proportion vector which maximizes the likelihood of a given sample for a mixture of given densities. We adapt a framework developed for supervised learning and give simple derivations for many of the standard iterative algorithms like gradient projectio ..."
Abstract - Cited by 37 (11 self) - Add to MetaCart
. We investigate the problem of estimating the proportion vector which maximizes the likelihood of a given sample for a mixture of given densities. We adapt a framework developed for supervised learning and give simple derivations for many of the standard iterative algorithms like gradient
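
A sketch of the standard EM-style multiplicative update for this setting (proportions only, component densities fixed); the setup and names are my assumptions, not the paper's notation:

```python
import numpy as np

def em_proportions(P, iters=100):
    """Maximise the likelihood over the mixture proportions only.

    P is an (n_points, n_components) matrix of *fixed* density values
    p_j(x_i); only the proportion vector w is estimated.
    """
    n, k = P.shape
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # Responsibility of component j for point i, then average over points.
        r = (w * P) / (w * P).sum(axis=1, keepdims=True)
        w = r.mean(axis=0)
    return w
```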

Bayesian density estimation and inference using mixtures.

by Michael D Escobar , Mike West - J. Amer. Statist. Assoc. , 1995
"... JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about J ..."
Abstract - Cited by 653 (18 self) - Add to MetaCart
mixtures of normal distributions. Efficient simulation methods are used to approximate various prior, posterior, and predictive distributions. This allows for direct inference on a variety of practical issues, including problems of local versus global smoothing, uncertainty about density estimates
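
As a rough, much-simplified sketch of simulation-based inference for a normal mixture (a finite two-component stand-in for the paper's Dirichlet-process machinery; all modelling choices here are mine):

```python
import numpy as np

def gibbs_normal_mixture(x, iters=500, sigma2=1.0, tau2=10.0, seed=0):
    """Gibbs sampler for a two-component normal mixture.

    Known component variance sigma2, N(0, tau2) priors on the means,
    Beta(1, 1) prior on the weight. Purely illustrative.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    mu, w = rng.choice(x, 2), 0.5
    draws = []
    for _ in range(iters):
        # Sample component assignments given the current parameters.
        l0 = w * np.exp(-0.5 * (x - mu[0])**2 / sigma2)
        l1 = (1 - w) * np.exp(-0.5 * (x - mu[1])**2 / sigma2)
        z = rng.random(n) < l1 / (l0 + l1)          # True -> component 1
        # Conjugate updates for the weight and the two means.
        w = rng.beta(1 + (~z).sum(), 1 + z.sum())
        for j, mask in enumerate([~z, z]):
            m = x[mask].sum() / sigma2
            prec = mask.sum() / sigma2 + 1 / tau2
            mu[j] = rng.normal(m / prec, np.sqrt(1 / prec))
        draws.append((w, mu.copy()))
    return draws
```

Averaging the draws (after burn-in) approximates the posterior and predictive quantities the abstract refers to.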

A View of the EM Algorithm that Justifies Incremental, Sparse, and Other Variants

by Radford Neal, Geoffrey E. Hinton - Learning in Graphical Models , 1998
"... . The EM algorithm performs maximum likelihood estimation for data in which some variables are unobserved. We present a function that resembles negative free energy and show that the M step maximizes this function with respect to the model parameters and the E step maximizes it with respect to the d ..."
Abstract - Cited by 993 (18 self) - Add to MetaCart
estimation problem. A variant of the algorithm that exploits sparse conditional distributions is also described, and a wide range of other variant algorithms are also seen to be possible. 1. Introduction The Expectation-Maximization (EM) algorithm finds maximum likelihood parameter estimates in problems
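
Schematically, the paper's objective and the two coordinate-ascent steps can be written as follows (notation mine):

```latex
% F resembles a negative free energy: the expected complete-data
% log-likelihood plus the entropy of the distribution q over the
% unobserved variables z.
F(q, \theta) = \mathbb{E}_{q(z)}\!\left[\log p(x, z \mid \theta)\right] + H(q)

% E step: maximise F over q with \theta fixed
%   (attained at q(z) = p(z \mid x, \theta)).
% M step: maximise F over \theta with q fixed.
% Incremental and sparse variants follow by maximising F only
% partially, e.g. over one data point's q at a time.
```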

A gentle tutorial on the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models

by Jeff A. Bilmes , 1997
"... We describe the maximum-likelihood parameter estimation problem and how the Expectation-form of the EM algorithm as it is often given in the literature. We then develop the EM parameter estimation procedure for two applications: 1) finding the parameters of a mixture of Gaussian densities, and 2) fi ..."
Abstract - Cited by 693 (4 self) - Add to MetaCart
We describe the maximum-likelihood parameter estimation problem and how the Expectation-form of the EM algorithm as it is often given in the literature. We then develop the EM parameter estimation procedure for two applications: 1) finding the parameters of a mixture of Gaussian densities, and 2
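
For the Gaussian-mixture application, the tutorial's procedure reduces to the familiar EM updates; in standard notation (not necessarily the tutorial's):

```latex
% E step: responsibility of component j for data point x_i.
\gamma_{ij} = \frac{\pi_j \, \mathcal{N}(x_i \mid \mu_j, \Sigma_j)}
                   {\sum_k \pi_k \, \mathcal{N}(x_i \mid \mu_k, \Sigma_k)}

% M step: re-estimate weights, means, and covariances.
\pi_j = \frac{1}{N}\sum_i \gamma_{ij}, \qquad
\mu_j = \frac{\sum_i \gamma_{ij}\, x_i}{\sum_i \gamma_{ij}}, \qquad
\Sigma_j = \frac{\sum_i \gamma_{ij}\, (x_i - \mu_j)(x_i - \mu_j)^{\top}}
                {\sum_i \gamma_{ij}}
```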

Hierarchical mixtures of experts and the EM algorithm

by Michael I. Jordan, Robert A. Jacobs , 1993
"... We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM’s). Learning is treated as a max-imum likelihood ..."
Abstract - Cited by 885 (21 self) - Add to MetaCart
We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM’s). Learning is treated as a max-imum likelihood
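
A minimal sketch of the core idea at a single level of the hierarchy (a softmax gate blending linear experts; the deeper tree nests this blending recursively, and all names here are mine):

```python
import numpy as np

def moe_forward(x, gate_W, expert_W):
    """One-level mixture-of-experts forward pass (regression case).

    A simplified stand-in for the paper's full hierarchy: a softmax
    gating network blends the predictions of linear experts.
    """
    # Gating network: softmax over experts, conditioned on the input.
    scores = gate_W @ x
    g = np.exp(scores - scores.max())
    g /= g.sum()
    # Each expert is a generalized linear model; here, plain linear.
    expert_out = expert_W @ x            # (n_experts,) predictions
    # Output is the gate-weighted mixture of expert predictions.
    return g @ expert_out

# Example: 3 experts on a 2-D input.
rng = np.random.default_rng(1)
print(moe_forward(rng.normal(size=2),
                  rng.normal(size=(3, 2)),
                  rng.normal(size=(3, 2))))
```

In the paper, both the gates and the experts are GLIMs, and the whole tree is fit by an EM algorithm.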

Image denoising using a scale mixture of Gaussians in the wavelet domain

by Javier Portilla, Vasily Strela, Martin J. Wainwright, Eero P. Simoncelli - IEEE TRANS IMAGE PROCESSING , 2003
"... We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian vecto ..."
Abstract - Cited by 513 (17 self) - Add to MetaCart
vector and a hidden positive scalar multiplier. The latter modulates the local variance of the coefficients in the neighborhood, and is thus able to account for the empirically observed correlation between the coefficient amplitudes. Under this model, the Bayesian least squares estimate of each
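
Schematically, the resulting estimator averages local Wiener estimates over the hidden multiplier; in my notation, with C_u and C_w the signal and noise neighborhood covariances:

```latex
% GSM model for a neighborhood of coefficients: x = \sqrt{z}\, u,
% with u Gaussian and z a hidden positive scalar; observation y = x + w.
% Bayes least squares estimate of the clean coefficients:
\mathbb{E}[x \mid y] = \int p(z \mid y)\, \mathbb{E}[x \mid y, z]\, dz,
\qquad
\mathbb{E}[x \mid y, z] = z\, C_u \left(z\, C_u + C_w\right)^{-1} y
```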

Fitting a mixture model by expectation maximization to discover motifs in biopolymers.

by Timothy L Bailey , Charles Elkan - Proc Int Conf Intell Syst Mol Biol , 1994
"... Abstract The algorithm described in this paper discovers one or more motifs in a collection of DNA or protein sequences by using the technique of expect~tiou ma.,dmization to fit a two-component finite mixture model to the set of sequences. Multiple motifs are found by fitting a mixture model to th ..."
Abstract - Cited by 947 (5 self) - Add to MetaCart
Abstract The algorithm described in this paper discovers one or more motifs in a collection of DNA or protein sequences by using the technique of expect~tiou ma.,dmization to fit a two-component finite mixture model to the set of sequences. Multiple motifs are found by fitting a mixture model
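
A sketch of the E step of such a two-component mixture over fixed-width sequence windows (the encoding and names are mine, not MEME's):

```python
import numpy as np

def motif_posteriors(windows, motif_pwm, background, prior=0.01):
    """E step of a two-component mixture over sequence windows.

    Each fixed-width window is modelled as drawn either from a position
    weight matrix (motif) or from background letter frequencies.
    `windows` is an (n, W) integer array of symbol indices.
    """
    W = windows.shape[1]
    positions = np.arange(W)
    # Likelihood of each window under the motif model (PWM) ...
    lm = motif_pwm[positions, windows].prod(axis=1)
    # ... and under the background model (i.i.d. letter frequencies).
    lb = background[windows].prod(axis=1)
    # Posterior probability that each window is a motif occurrence.
    return prior * lm / (prior * lm + (1 - prior) * lb)
```

The matching M step would re-estimate the position-specific letter frequencies from posterior-weighted counts.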

Estimating the Support of a High-Dimensional Distribution

by Bernhard Schölkopf, John C. Platt, John Shawe-taylor, Alex J. Smola, Robert C. Williamson , 1999
"... Suppose you are given some dataset drawn from an underlying probability distribution P and you want to estimate a "simple" subset S of input space such that the probability that a test point drawn from P lies outside of S is bounded by some a priori specified between 0 and 1. We propo ..."
Abstract - Cited by 783 (29 self) - Add to MetaCart
propose a method to approach this problem by trying to estimate a function f which is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length
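
A usage sketch with scikit-learn's implementation of this estimator, where `nu` plays the role of the a priori specified bound (the toy data are mine):

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Toy data: training points from the underlying distribution P.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2))

# nu upper-bounds the fraction of training points treated as outliers.
clf = OneClassSVM(kernel="rbf", nu=0.1, gamma=0.5).fit(X_train)

# f is positive on the estimated support S, negative on its complement.
X_test = np.array([[0.0, 0.0], [4.0, 4.0]])
print(clf.decision_function(X_test))   # e.g. positive, then negative
print(clf.predict(X_test))             # +1 inside S, -1 outside
```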