Results 1–10 of 39
Fitting a mixture model by expectation maximization to discover motifs in biopolymers
 Proceedings of the Second International Conference on Intelligent Systems for Molecular Biology
, 1994
"... ABSTRACT: The algorithm described in this paper discovers one or more motifs in a collection of DNA or protein sequences by using the technique of expectation maximization to fit a twocomponent finite mixture model to the set of sequences. Multiple motifs are found by fitting a twocomponent finite ..."
Abstract

Cited by 520 (4 self)
 Add to MetaCart
The algorithm described in this paper discovers one or more motifs in a collection of DNA or protein sequences by using the technique of expectation maximization to fit a two-component finite mixture model to the set of sequences. Multiple motifs are found by fitting a two-component finite mixture model to the data, probabilistically erasing the occurrences of the motif thus found, and repeating the process to find successive motifs. The algorithm requires only a set of sequences and a number specifying the width of the motifs as input. It returns a model of each motif and a threshold which together can be used as a Bayes-optimal classifier for searching for occurrences of the motif in other databases. The algorithm estimates how many times each motif occurs in the input dataset and outputs an alignment of the occurrences of the motif. The algorithm is capable of discovering several different motifs with differing numbers of occurrences in a single dataset. Motifs are discovered by treating the set of sequences as though they were created by a stochastic process which can be modelled as a mixture of two densities, one of which generated the occurrences of the motif, and the other the rest of the positions in the sequences. Expectation maximization is used to estimate the parameters of the two densities and the mixing ...
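The two-component EM alternation the abstract describes can be sketched on a toy one-dimensional problem: the paper fits motif and background densities over sequence positions, but the E-step/M-step structure is the same as for a two-component Gaussian mixture. All data and starting values here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic data: a "background" component near 0 and a "motif" component near 5
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(5.0, 1.0, 100)])

# hypothetical initial parameters (means, std devs, mixing weights)
mu = np.array([-1.0, 4.0])
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])

def normal_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

for _ in range(50):
    # E-step: posterior responsibility of each component for each data point
    r = pi * np.stack([normal_pdf(x, mu[k], sigma[k]) for k in range(2)], axis=1)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixing weights, means, and variances
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
```

After convergence the two recovered means sit near the generating values, and the mixing weight of the first component reflects its 3:1 prevalence in the data.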
Multiscale Bayesian Segmentation Using a Trainable Context Model
 IEEE Trans. on Image Processing
, 2001
"... In recent years, multiscale Bayesian approaches have attracted increasing attention for use in image segmentation. Generally, these methods tend to offer improved segmentation accuracy with reduced computational burden. Existing Bayesian segmentation methods use simple models of context designed to ..."
Abstract

Cited by 49 (1 self)
 Add to MetaCart
In recent years, multiscale Bayesian approaches have attracted increasing attention for use in image segmentation. Generally, these methods tend to offer improved segmentation accuracy with reduced computational burden. Existing Bayesian segmentation methods use simple models of context designed to encourage large uniformly classified regions. Consequently, these context models have a limited ability to capture the complex contextual dependencies that are important in applications such as document segmentation. In this paper, we propose a multiscale...
Segmentation Of Textured Images Using A Multiresolution Gaussian Autoregressive Model
"... We present a new algorithm for segmentation of textured images using a multiresolution Bayesian approach. The new algorithm uses a multiresolution Gaussian autoregressive (MGAR) model for the pyramid representation of the observed image, and assumes a multiscale Markov random field model for the cla ..."
Abstract

Cited by 42 (0 self)
 Add to MetaCart
We present a new algorithm for segmentation of textured images using a multiresolution Bayesian approach. The new algorithm uses a multiresolution Gaussian autoregressive (MGAR) model for the pyramid representation of the observed image, and assumes a multiscale Markov random field model for the class label pyramid. Unlike previously proposed Bayesian multiresolution segmentation approaches, which have either used a single-resolution representation of the observed image or implicitly assumed independence between different levels of a multiresolution representation of the observed image, the models used in this paper incorporate correlations between different levels of both the observed image pyramid and the class label pyramid. The criterion used for segmentation is the minimization of the expected value of the number of misclassified nodes in the multiresolution lattice. The estimate which satisfies this criterion is referred to as the "multiresolution maximization of the posterior ma...
Bayesian Model Selection in Finite Mixtures by Marginal Density Decompositions
 Journal of the American Statistical Association
, 2001
"... ..."
Dynamic trees for unsupervised segmentation and matching of image regions
 IEEE TPAMI
, 2005
"... We present a probabilistic framework—namely, multiscale generative models known as Dynamic Trees (DT)—for unsupervised image segmentation and subsequent matching of segmented regions in a given set of images. Beyond these novel applications of DTs, we propose important additions for this modeling p ..."
Abstract

Cited by 13 (5 self)
 Add to MetaCart
We present a probabilistic framework—namely, multiscale generative models known as Dynamic Trees (DT)—for unsupervised image segmentation and subsequent matching of segmented regions in a given set of images. Beyond these novel applications of DTs, we propose important additions to this modeling paradigm. First, we introduce a novel DT architecture, where multi-layered observable data are incorporated at all scales of the model. Second, we derive a novel probabilistic inference algorithm for DTs—Structured Variational Approximation (SVA)—which explicitly accounts for the statistical dependence of node positions and model structure in the approximate posterior distribution, thereby relaxing poorly justified independence assumptions in previous work. Finally, we propose a similarity measure for matching dynamic-tree models, representing segmented image regions, across images. Our results for several data sets show that DTs are capable of capturing important component-subcomponent relationships among objects and their parts, and that DTs perform well in segmenting images into plausible pixel clusters. We demonstrate the significantly improved properties of the SVA algorithm—both in terms of substantially faster convergence rates and larger approximate posteriors for the inferred models—when compared with competing inference algorithms. Furthermore, results on unsupervised object recognition demonstrate the viability of the proposed similarity measure for matching dynamic-structure statistical models.
Using sample-based representations under communications constraints
 MIT, Laboratory for Information and Decision Systems
, 2004
"... In many applications, particularly powerconstrained sensor networks, it is important to conserve the amount of data exchanged while maximizing the utility of that data for some inference task. Broadly, this tradeoff has two major cost components—the representation’s size (in distributed networks, t ..."
Abstract

Cited by 11 (1 self)
 Add to MetaCart
In many applications, particularly power-constrained sensor networks, it is important to conserve the amount of data exchanged while maximizing the utility of that data for some inference task. Broadly, this tradeoff has two major cost components—the representation’s size (in distributed networks, the communications cost) and the error incurred by its use (the inference cost). We analyze this tradeoff for a particular problem: communicating a particle-based representation (and more generally, a Gaussian mixture or kernel density estimate). We begin by characterizing the exact communication cost of these representations, noting that it is less than might be suggested by traditional communications theory due to the invariance of the representation to reordering. We describe the optimal, lossless encoder when the generating distribution is known, and pose a suboptimal encoder which still benefits from reordering invariance. However, lossless encoding may not be sufficient. We describe one reasonable measure of error for distribution-based messages and its consequences for inference in an acyclic network, and propose a novel density approximation method based on KD-tree multiscale representations which enables the communications cost and a bound on error to be balanced efficiently. We show several empirical examples demonstrating the method’s utility in collaborative, distributed signal processing under bandwidth or power constraints.
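The KD-tree multiscale idea admits a minimal sketch: each tree node caches the weight, mean, and variance of the particles beneath it, so cutting the tree at a coarser depth yields a smaller, cheaper-to-communicate mixture at the price of more approximation error. The data, split rule, and depth choices below are hypothetical illustrations, not the paper's encoder:

```python
import numpy as np

def build_kdtree(pts, depth=0, max_depth=3):
    """Summarize a particle set as a multiscale tree: each node stores the
    (unnormalized) weight, mean, and per-dimension variance of its particles."""
    node = {"w": len(pts), "mean": pts.mean(axis=0), "var": pts.var(axis=0) + 1e-6}
    if depth < max_depth and len(pts) > 1:
        dim = np.argmax(pts.max(axis=0) - pts.min(axis=0))  # split the widest axis
        order = np.argsort(pts[:, dim])
        half = len(pts) // 2
        node["children"] = [build_kdtree(pts[order[:half]], depth + 1, max_depth),
                            build_kdtree(pts[order[half:]], depth + 1, max_depth)]
    return node

def cut(node, depth):
    """Read out the mixture components at a given resolution: a coarser cut
    means fewer components, i.e. a smaller message."""
    if depth == 0 or "children" not in node:
        return [(node["w"], node["mean"], node["var"])]
    return [c for child in node["children"] for c in cut(child, depth - 1)]

rng = np.random.default_rng(1)
particles = rng.normal(size=(256, 2))
tree = build_kdtree(particles)
coarse, fine = cut(tree, 1), cut(tree, 3)  # 2-component vs 8-component summary
```

Both cuts conserve the total particle weight; choosing the cut depth is the size-versus-error dial the abstract describes.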
Multiscale Document Segmentation
 in IS&T 50th Annual Conference
, 1997
"... In this paper, we propose a new approach to document segmentation which exploits both local texture characteristics and image structure to segment scanned documents into regions such as text, background, headings and images. Our method is based on the use of a multiscale Bayesian framework. This fra ..."
Abstract

Cited by 9 (2 self)
 Add to MetaCart
In this paper, we propose a new approach to document segmentation which exploits both local texture characteristics and image structure to segment scanned documents into regions such as text, background, headings and images. Our method is based on the use of a multiscale Bayesian framework. This framework is chosen because it allows accurate modeling of both the image characteristics and contextual structure of each region. The parameters which describe the characteristics of typical images are extracted from a database of training images which are produced by scanning typical documents and hand segmenting them into the desired components. This training procedure is based on the expectation maximization (EM) algorithm and results in approximate maximum likelihood (ML) estimates of the model parameters for region textures and contextual structure at various resolutions. Once the training procedure is performed, scanned documents may be segmented using a fine-to-coarse-to-fine procedure that is computationally efficient.
The EM/MPM Algorithm For Segmentation Of Textured Images: Analysis And Further Experimental Results
"... this paper we present new results relative to the "expectationmaximization/maximization of the posterior marginals" (EM/MPM) algorithm for simultaneous parameter estimation and segmentation of textured images. The EM/MPM algorithm uses a Markov random field model for the pixel class labels and alte ..."
Abstract

Cited by 8 (1 self)
 Add to MetaCart
In this paper we present new results relative to the "expectation-maximization/maximization of the posterior marginals" (EM/MPM) algorithm for simultaneous parameter estimation and segmentation of textured images. The EM/MPM algorithm uses a Markov random field model for the pixel class labels and alternately approximates the MPM estimate of the pixel class labels and estimates parameters of the observed image model. The goal of the EM/MPM algorithm is to minimize the expected value of the number of misclassified pixels. We present new theoretical results in this paper which show that the algorithm can be expected to achieve this goal, to the extent that the EM estimates of the model parameters are close to the true values of the model parameters. We also present new experimental results demonstrating the performance of the EM/MPM algorithm.
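The EM/MPM alternation can be sketched on a toy two-class image: Gibbs sweeps over an MRF label field approximate the posterior marginals (the MPM step, a majority vote over sampled labels), and class means are then re-estimated from the resulting segmentation (the EM step). This is a minimal sketch assuming a unit-variance Gaussian likelihood; the synthetic image, the smoothness weight beta, and the iteration counts are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic 2-class "texture": left half near 0, right half near 2, plus noise
truth = np.zeros((16, 16), dtype=int)
truth[:, 8:] = 1
y = rng.normal(truth * 2.0, 0.8)

beta = 1.0                        # MRF smoothness weight (hypothetical)
mu = np.array([0.5, 1.5])         # initial class means; EM refines these
labels = (y > y.mean()).astype(int)

for em_iter in range(5):
    votes = np.zeros((2,) + y.shape)
    for sweep in range(20):       # Gibbs sweeps approximating the MPM estimate
        for i in range(16):
            for j in range(16):
                # labels of the 4-neighbourhood
                nb = [labels[i - 1, j]] if i > 0 else []
                if i < 15: nb.append(labels[i + 1, j])
                if j > 0: nb.append(labels[i, j - 1])
                if j < 15: nb.append(labels[i, j + 1])
                nb = np.array(nb)
                # unit-variance Gaussian likelihood + Potts neighbour agreement
                logp = np.array([-0.5 * (y[i, j] - mu[k]) ** 2
                                 + beta * np.sum(nb == k) for k in range(2)])
                p = np.exp(logp - logp.max())
                p /= p.sum()
                labels[i, j] = rng.choice(2, p=p)
        if sweep >= 10:           # after burn-in, collect marginal votes
            for k in range(2):
                votes[k] += (labels == k)
    seg = votes.argmax(axis=0)    # MPM estimate: per-pixel majority vote
    # EM step analogue: re-estimate class means from the current segmentation
    for k in range(2):
        if (seg == k).any():
            mu[k] = y[seg == k].mean()

accuracy = (seg == truth).mean()
```

The MRF term pulls isolated noisy pixels toward their neighbours, which is why the segmentation beats per-pixel thresholding on this toy image.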
Comparing Complete and Partial Classification for Identifying Customers At Risk
, 2003
"... This paper evaluates complete versus partial classification for the problem of identifying customers at risk. We define customers at risk as customers reporting overall satisfaction, but these customers also possess characteristics that are strongly associated with dissatisfied customers. This defin ..."
Abstract

Cited by 8 (2 self)
 Add to MetaCart
This paper evaluates complete versus partial classification for the problem of identifying customers at risk. We define customers at risk as customers who report overall satisfaction but also possess characteristics that are strongly associated with dissatisfied customers. This definition enables two viable methodological approaches for identifying such customers, i.e. complete and partial classification. Complete classification entails the induction of a classification model to discriminate between overall dissatisfied and overall satisfied instances, where customers at risk are defined as overall satisfied customers who are classified as overall dissatisfied. Partial classification entails the induction of the most prevalent characteristics of overall dissatisfied customers in order to discover overall satisfied customers who match these characteristics. In our empirical work, we evaluate complete and partial classification techniques and compare their performance on both quantitative and qualitative criteria. The intent of the paper is not to prove the superiority of partial classification, but rather to provide an alternative and valuable approach that offers new and different insights. In fact, taking predictive accuracy as the performance criterion, results for this study show the superiority of the complete classification approach. On the other hand, partial classification offers additional insights that complete classification techniques do not, i.e. it offers a rule-based description of criteria that lead to dissatisfaction for locally dense regions in the multidimensional instance space.
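The partial-classification side of the contrast can be illustrated with a toy sketch: mine attribute sets that are prevalent among dissatisfied customers, then flag satisfied customers who match any of them. The records, support threshold, and pattern-size limit below are hypothetical, not the paper's data or method:

```python
from collections import Counter
from itertools import combinations

# toy customer records: a set of attributes plus an overall-satisfaction label
customers = [
    ({"late_delivery", "billing_issue"}, "dissatisfied"),
    ({"late_delivery", "billing_issue", "rude_support"}, "dissatisfied"),
    ({"late_delivery"}, "dissatisfied"),
    ({"on_time"}, "satisfied"),
    ({"on_time", "billing_issue", "late_delivery"}, "satisfied"),  # at risk
    ({"on_time", "good_support"}, "satisfied"),
]

def prevalent_patterns(records, label, min_support=2/3):
    """Partial classification: mine attribute sets (size 1-2) that are
    frequent within one class only, ignoring the other class entirely."""
    group = [attrs for attrs, lab in records if lab == label]
    counts = Counter()
    for attrs in group:
        for size in (1, 2):
            for combo in combinations(sorted(attrs), size):
                counts[frozenset(combo)] += 1
    return {p for p, c in counts.items() if c / len(group) >= min_support}

patterns = prevalent_patterns(customers, "dissatisfied")
# customers at risk: satisfied records matching a dissatisfaction pattern
at_risk = [attrs for attrs, lab in customers
           if lab == "satisfied" and any(p <= attrs for p in patterns)]
```

Unlike a complete classifier, this yields explicit rule-like patterns (here, late delivery and billing issues) that describe why a satisfied customer is flagged.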