Results 1–10 of 136,418
Unsupervised estimation for noisy-channel models
In Proceedings of the International Conference on Machine Learning (ICML'07), 2007
"... Shannon's noisy-channel model, which describes how a corrupted message might be reconstructed, has been the cornerstone for much work in statistical language and speech processing. The model factors into two components: a language model to characterize the original message and a channel model to de ..."
Cited by 1 (1 self)
Object class recognition by unsupervised scale-invariant learning
In CVPR, 2003
"... We present a method to learn and recognize object class models from unlabeled and unsegmented cluttered scenes in a scale-invariant manner. Objects are modeled as flexible constellations of parts. A probabilistic representation is used for all aspects of the object: shape, appearance, occlusion, and relative scale. An entropy-based feature detector is used to select regions and their scale within the image. In learning, the parameters of the scale-invariant object model are estimated. This is done using expectation-maximization in a maximum-likelihood setting. In recognition, this model is used in a ..."
Cited by 1124 (49 self)
Unsupervised learning of finite mixture models
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002
"... This paper proposes an unsupervised algorithm for learning a finite mixture model from multivariate data. The adjective "unsupervised" is justified by two properties of the algorithm: 1) it is capable of selecting the number of components and 2) unlike the standard expectation-maximization (EM) alg ..."
Cited by 415 (22 self)
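The component-selection idea described in this abstract can be sketched with scikit-learn, using BIC as a stand-in for the paper's own selection criterion (the toy data, the range of candidate k, and the use of `GaussianMixture` are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical 2-D data: two well-separated Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(200, 2)),
    rng.normal(loc=[5, 5], scale=0.5, size=(200, 2)),
])

# Fit mixtures with 1..6 components and keep the one with the lowest BIC,
# so the number of components is chosen from the data rather than fixed.
best = min(
    (GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 7)),
    key=lambda gm: gm.bic(X),
)
print(best.n_components)
```

For this clearly bimodal sample the BIC minimum lands at two components; the paper's contribution is a principled criterion and a modified EM that does this selection inside the fitting loop.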
Estimating the Support of a High-Dimensional Distribution
, 1999
"... Suppose you are given some dataset drawn from an underlying probability distribution P and you want to estimate a "simple" subset S of input space such that the probability that a test point drawn from P lies outside of S is bounded by some a priori specified ν between 0 and 1. We propo ..."
Cited by 766 (29 self)
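A minimal sketch of this kind of support estimation, using scikit-learn's `OneClassSVM` (the RBF kernel, `nu=0.1`, and the synthetic Gaussian sample are assumptions for illustration; `nu` plays the role of the a priori bound on the fraction of points falling outside the estimated region S):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))  # training sample drawn from P

# nu upper-bounds the fraction of training points outside the estimated region.
clf = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(X)

labels = clf.predict(X)              # +1 inside the region, -1 outside
frac_outside = np.mean(labels == -1)
print(frac_outside)
```

On the training sample the outlier fraction comes out close to (and bounded near) the chosen `nu`, which is the guarantee the abstract alludes to.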
Estimating the number of clusters in a dataset via the Gap statistic
, 2000
"... We propose a method (the "Gap statistic") for estimating the number of clusters (groups) in a set of data. The technique uses the output of any clustering algorithm (e.g. k-means or hierarchical), comparing the change in within-cluster dispersion to that expected under an appropriate reference ..."
"... principal components. Cluster analysis is an important tool for "unsupervised" learning: the problem of finding groups in data without the help of a response variable. A major challenge in cluster analysis is estimation of the optimal number of "clusters". Figure 1 (top right) shows ..."
Cited by 492 (1 self)
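A rough sketch of the Gap idea with k-means, assuming a uniform reference distribution over the data's bounding box and a simple `argmax` over the gap curve in place of the paper's one-standard-error selection rule (data and parameters are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

def gap_statistic(X, k_max=6, n_refs=10, seed=0):
    """Pick k by comparing log within-cluster dispersion on the data
    with its average under uniform reference samples."""
    rng = np.random.default_rng(seed)
    lo, hi = X.min(axis=0), X.max(axis=0)

    def log_wk(data, k):
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(data)
        return np.log(km.inertia_)  # within-cluster sum of squares

    gaps = []
    for k in range(1, k_max + 1):
        ref = np.mean([log_wk(rng.uniform(lo, hi, size=X.shape), k)
                       for _ in range(n_refs)])
        gaps.append(ref - log_wk(X, k))   # Gap(k)
    return int(np.argmax(gaps)) + 1

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, size=(100, 2))
               for c in ([0, 0], [4, 0], [2, 4])])
k_hat = gap_statistic(X)
print(k_hat)
```

With three tight, well-separated clusters the gap curve peaks at k = 3: the data's dispersion drops far faster than the uniform reference's up to the true number of clusters, and no faster afterward.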
Probabilistic Visual Learning for Object Representation
, 1996
"... We present an unsupervised technique for visual learning which is based on density estimation in high-dimensional spaces using an eigenspace decomposition. Two types of density estimates are derived for modeling the training data: a multivariate Gaussian (for unimodal distributions) and a Mixture-of- ..."
Cited by 705 (15 self)
Improved Statistical Alignment Models
In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, 2000
"... In this paper, we present and compare various single-word based alignment models for statistical machine translation. We discuss the five IBM alignment models, the Hidden Markov alignment model, smoothing techniques, and various modifications. ..."
Cited by 593 (13 self)
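The simplest of the IBM alignment models mentioned here, Model 1, can be sketched as an EM loop over word-translation probabilities t(f|e) (the toy English–French corpus and the fixed 20 iterations are illustrative assumptions, not the paper's experimental setup):

```python
from collections import defaultdict

# Toy parallel corpus: (English words, French words) sentence pairs.
corpus = [
    (["the", "house"], ["la", "maison"]),
    (["the", "book"], ["le", "livre"]),
    (["a", "book"], ["un", "livre"]),
]

e_vocab = {e for es, _ in corpus for e in es}
f_vocab = {f for _, fs in corpus for f in fs}
# Uniform initialization of translation probabilities t(f|e).
t = {(f, e): 1.0 / len(f_vocab) for e in e_vocab for f in f_vocab}

for _ in range(20):                    # EM iterations
    count = defaultdict(float)         # expected counts c(f, e)
    total = defaultdict(float)         # expected counts c(e)
    for es, fs in corpus:
        for f in fs:                   # E-step: fractional alignment counts
            z = sum(t[(f, e)] for e in es)
            for e in es:
                count[(f, e)] += t[(f, e)] / z
                total[e] += t[(f, e)] / z
    for (f, e) in t:                   # M-step: renormalize per English word
        t[(f, e)] = count[(f, e)] / total[e]

print(t[("livre", "book")])
```

Because "livre" co-occurs with "book" in every sentence containing "book" while "le" and "un" each appear only once, EM concentrates t(livre|book) near 1; the paper's HMM model replaces Model 1's flat alignment distribution with position-dependent transitions.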
Face Recognition Based on Fitting a 3D Morphable Model
IEEE Trans. Pattern Anal. Mach. Intell., 2003
"... This paper presents a method for face recognition across variations in pose, ranging from frontal to profile views, and across a wide range of illuminations, including cast shadows and specular reflections. To account for these variations, the algorithm simulates the process of image formation in 3D space, using computer graphics, and it estimates 3D shape and texture of faces from single images. The estimate is achieved by fitting a statistical, morphable model of 3D faces to images. The model is learned from a set of textured 3D scans of heads. We describe the construction ..."
Cited by 546 (19 self)
Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm
IEEE Transactions on Medical Imaging, 2001
"... The finite mixture (FM) model is the most commonly used model for statistical segmentation of brain magnetic resonance (MR) images because of its simple mathematical form and the piecewise-constant nature of ideal brain MR images. However, being a histogram-based model, the FM has an intrinsic limi ..."
"... -based methods produce unreliable results. In this paper, we propose a novel hidden Markov random field (HMRF) model, which is a stochastic process generated by an MRF whose state sequence cannot be observed directly but which can be indirectly estimated through observations. Mathematically, it can be shown ..."
Cited by 619 (14 self)
An introduction to variational methods for graphical models
In M. I. Jordan (Ed.), Learning in Graphical Models (to appear)