Results 11 – 20 of 22
A Framework for Constructing Probability Distributions on the Space of Image Segmentations
 Computer Vision and Image Understanding
, 1995
"... The goal of traditional probabilistic approaches to image segmentation has been to derive a single, optimal segmentation, given statistical models for the image formation process. In this paper, we describe a new probabilistic approach to segmentation, in which the goal is to derive a set of plau ..."
Abstract

Cited by 6 (3 self)
The goal of traditional probabilistic approaches to image segmentation has been to derive a single, optimal segmentation, given statistical models for the image formation process. In this paper, we describe a new probabilistic approach to segmentation, in which the goal is to derive a set of plausible segmentation hypotheses and their corresponding probabilities. Because the space of possible image segmentations is too large to represent explicitly, we present a representation scheme that allows the implicit representation of large sets of segmentation hypotheses that have low probability. We then derive a probabilistic mechanism for applying Bayesian, model-based evidence to guide the construction of this representation. One key to our approach is a general Bayesian method for determining the posterior probability that the union of regions is homogeneous, given that the individual regions are homogeneous. This method does not rely on estimation, and properly treats the issu...
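The homogeneity test at the heart of this abstract can be illustrated with a small sketch. Assuming, purely for illustration (these modeling choices are not taken from the paper), a known noise variance and a conjugate normal prior on each region's mean intensity, the posterior odds that the union of two regions is homogeneous reduce to a closed-form Bayes factor:

```python
import math

def log_marginal(y, mu0=0.0, sigma2=0.04, tau2=100.0):
    """Log marginal likelihood of intensities y under a single-mean model:
    y_i ~ N(mu, sigma2) with mu ~ N(mu0, tau2); mu is integrated out analytically."""
    n = len(y)
    s = sum(yi - mu0 for yi in y)            # sum of centered values
    ss = sum((yi - mu0) ** 2 for yi in y)    # sum of squared centered values
    log_det = n * math.log(sigma2) + math.log(1.0 + n * tau2 / sigma2)
    quad = (ss - (tau2 / (sigma2 + n * tau2)) * s * s) / sigma2
    return -0.5 * (n * math.log(2.0 * math.pi) + log_det + quad)

def log_bf_merge(region_a, region_b):
    """Log Bayes factor: one shared mean for the union vs. separate means."""
    return log_marginal(region_a + region_b) - (
        log_marginal(region_a) + log_marginal(region_b))

a = [4.9, 5.1, 5.0, 4.8, 5.2]      # region with mean near 5
b = [5.05, 4.95, 5.1, 4.9, 5.0]    # another region with mean near 5
c = [0.1, -0.1, 0.0, 0.2, -0.2]    # region with mean near 0
print(log_bf_merge(a, b) > 0)  # evidence favors merging a and b
print(log_bf_merge(a, c) < 0)  # evidence favors keeping a and c separate
```

With a vague prior (large `tau2`), merging is favored exactly when the two regions' intensities are consistent with a single shared mean, which is the Occam-style behavior the abstract describes.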
Methods for Numerical Integration of High-Dimensional Posterior Densities with Application to Statistical Image Models
 In Proc. of the SPIE Conf. on Stochastic Methods in Signal Processing, Image Processing, and Computer Vision
, 1993
"... Numerical computation with Bayesian posterior densities has recently received much attention both in the applied statistics and image processing communities. This paper surveys previous literature and presents new, efficient methods for computing marginal density values for image models that have be ..."
Abstract

Cited by 4 (3 self)
Numerical computation with Bayesian posterior densities has recently received much attention both in the applied statistics and image processing communities. This paper surveys previous literature and presents new, efficient methods for computing marginal density values for image models that have been widely considered in computer vision and image processing. The particular models chosen are a Markov random field formulation, implicit polynomial surface models, and parametric polynomial surface models. The computations can be used to make a variety of statistically-based decisions, such as assessing region homogeneity for segmentation, or performing model selection. Detailed descriptions of the methods are provided, along with demonstrative experiments on real imagery.
1 Introduction
Bayesian analysis has proven to be a powerful tool in many low-level computer vision and image processing applications; however, in many instances this tool is limited by computational requirements im...
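As a toy version of the marginal-density computations this abstract describes, a one-dimensional marginal m(y) = ∫ p(y|θ) p(θ) dθ can be integrated numerically on a grid and checked against its known closed form. The Gaussian model below is a stand-in chosen for its analytic answer, not one of the paper's image models:

```python
import math

def norm_pdf(x, mean, var):
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2.0 * math.pi * var)

def marginal_density(y, prior_var=1.0, noise_var=1.0, lo=-10.0, hi=10.0, steps=2000):
    """Trapezoid-rule approximation of m(y) = integral of p(y|t) p(t) dt
    for y ~ N(t, noise_var) and t ~ N(0, prior_var)."""
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        t = lo + i * h
        f = norm_pdf(y, t, noise_var) * norm_pdf(t, 0.0, prior_var)
        total += f if 0 < i < steps else 0.5 * f
    return total * h

y = 0.7
approx = marginal_density(y)
exact = norm_pdf(y, 0.0, 2.0)  # analytically, m(y) = N(y | 0, prior_var + noise_var)
print(abs(approx - exact))     # grid and closed form agree closely
```

Real image models have no such closed form, which is exactly why the paper's efficient numerical schemes are needed; this sketch only verifies the integration machinery on a case where the truth is known.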
Streaky Hitting in Baseball
"... The streaky hitting patterns of all regular baseball players during the 2005 season are explored. Patterns of hits/outs, home runs and strikeouts are considered using different measures of streakiness. An adjustment method is proposed that helps in understanding the size of a streakiness measure giv ..."
Abstract

Cited by 2 (1 self)
The streaky hitting patterns of all regular baseball players during the 2005 season are explored. Patterns of hits/outs, home runs and strikeouts are considered using different measures of streakiness. An adjustment method is proposed that helps in understanding the size of a streakiness measure given the player’s ability and number of hitting opportunities. An exchangeable model is used to estimate the hitting abilities of all players, and this model is used to understand the pattern of streakiness of all players in the 2005 season. This exchangeable model, which assumes that all players are consistent with constant probabilities of success, appears to explain much of the observed streaky behavior. But there are some players who appear to exhibit more streakiness than one would predict from the model.
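The adjustment idea, judging a streakiness measure against what a constant-probability hitter would produce, can be sketched by simulation. The batting average, at-bat count, and streak lengths below are invented for illustration and are not the paper's data:

```python
import random

def longest_out_streak(outcomes):
    """Longest run of outs (0s) in a hit/out sequence."""
    best = run = 0
    for x in outcomes:
        run = run + 1 if x == 0 else 0
        best = max(best, run)
    return best

def streak_pvalue(p_hit, n_ab, observed, n_sims=2000, seed=7):
    """Fraction of simulated constant-ability seasons whose longest out
    streak is at least as extreme as the observed one."""
    rng = random.Random(seed)
    extreme = 0
    for _ in range(n_sims):
        season = [1 if rng.random() < p_hit else 0 for _ in range(n_ab)]
        if longest_out_streak(season) >= observed:
            extreme += 1
    return extreme / n_sims

# Hypothetical .300 hitter with 500 at-bats:
print(streak_pvalue(0.30, 500, observed=10))  # ordinary streak: large p-value
print(streak_pvalue(0.30, 500, observed=30))  # extreme streak: small p-value
```

A long hitless streak is only evidence of true streakiness once the reference distribution for a consistent hitter of the same ability and opportunity count has been accounted for, which is the comparison the simulation makes.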
Model Discrimination in Meta-Analysis: A Bayesian Perspective
"... In wanting to summarise evidence from a number of studies a variety of statistical methods have been proposed. Of these the most widely used is the socalled fixed effect model in which the individual studies are estimating a single, but unknown, overall population effect. When there is `considerabl ..."
Abstract
In wanting to summarise evidence from a number of studies, a variety of statistical methods have been proposed. Of these the most widely used is the so-called fixed effect model, in which the individual studies are estimating a single, but unknown, overall population effect. When there is `considerable' heterogeneity, in terms of the effect sizes, between the studies, the use of a random effect model has been advocated, in which each individual study is assumed to be estimating its own, unknown, true effect. Discrimination between fixed and random effect models has been advocated by means of a χ² test for heterogeneity, which is accepted to have low statistical power. Recent interest has been shown in the use of Bayes Factors as an alternative. The use of Bayes Factors is illustrated using a number of previously published meta-analyses in which there are varying degrees of heterogeneity. It is shown how the use of Bayes Factors leads to a more intuitive assessment of the evidence in favo...
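The low-powered χ² heterogeneity test referred to above is Cochran's Q. A minimal stdlib-only version, with Higgins' I² added here as a descriptive companion and with made-up study data, looks like this:

```python
def heterogeneity(effects, variances):
    """Cochran's Q statistic and Higgins' I^2 for a fixed-effect meta-analysis.
    Q is referred to a chi-squared distribution with k-1 degrees of freedom;
    this is the heterogeneity test the abstract notes has low power."""
    w = [1.0 / v for v in variances]                      # inverse-variance weights
    pooled = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, i2

# Hypothetical study results: (effect estimate, within-study variance)
q_hom, i2_hom = heterogeneity([0.50, 0.52, 0.48], [0.01, 0.02, 0.015])
q_het, i2_het = heterogeneity([0.10, 0.90, 0.50], [0.01, 0.01, 0.01])
print(q_hom, i2_hom)  # near-identical effects: Q well below df = 2
print(q_het, i2_het)  # conflicting effects: Q far above the 5% cutoff of 5.99
```

A Bayes factor between the fixed and random effect models, the alternative the abstract advocates, weighs the same discrepancy but yields a direct evidence ratio rather than a low-powered accept/reject decision.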
Generalization of Jeffreys' Divergence-Based Priors for Bayesian Hypothesis Testing
, 2008
"... In this paper we introduce objective proper prior distributions for hypothesis testing and model selection based on measures of divergence between the competing models; we call them divergence based (DB) priors. DB priors have simple forms and desirable properties, like information (finite sample) c ..."
Abstract
In this paper we introduce objective proper prior distributions for hypothesis testing and model selection based on measures of divergence between the competing models; we call them divergence-based (DB) priors. DB priors have simple forms and desirable properties, like information (finite sample) consistency; often, they are similar to other existing proposals like the intrinsic priors; moreover, in normal linear model scenarios, they exactly reproduce Jeffreys-Zellner-Siow priors. Most importantly, in challenging scenarios such as irregular models and mixture models, the DB priors are well defined and very reasonable, while alternative proposals are not. We derive approximations to the DB priors as well as MCMC and asymptotic expressions for the associated Bayes factors.
Approximate Bayesian Inference for Multivariate Stochastic Volatility Models
, 2008
"... In this report we apply Integrated Nested Laplace approximation (INLA) to a series of multivariate stochastic volatility models. These are a useful construct in financial time series analysis and can be formulated as latent Gaussian Markov Random Field (GMRF) models. This popular class of models is ..."
Abstract
In this report we apply Integrated Nested Laplace Approximation (INLA) to a series of multivariate stochastic volatility models. These are a useful construct in financial time series analysis and can be formulated as latent Gaussian Markov Random Field (GMRF) models. This popular class of models is characterised by a GMRF as the second stage of the hierarchical structure and a vector of hyperparameters as the third stage. INLA is a new tool for fast, deterministic inference on latent GMRF models which provides very accurate approximations to the posterior marginals of the model. We compare the performance of INLA with that of some Markov Chain Monte Carlo (MCMC) algorithms run for a long time, showing that the approximations, despite being computed in only a fraction of the time needed for MCMC estimation, are practically exact. The INLA approach uses numerical schemes to integrate out the uncertainty with respect to the hyperparameters. In this report we cope with problems arising from an increasing dimension of the hyperparameter vector. Moreover, we propose different approximations for the posterior marginals of the hyperparameters of the model. We also show how Bayes factors can be efficiently approximated using the INLA tools, thus providing a base for model comparison.
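The INLA machinery itself is not reproduced here, but its central ingredient, a Laplace approximation computed at the mode of a log-concave latent-Gaussian posterior, can be sketched and checked against brute-force integration. The Poisson count model, data, and prior below are invented for illustration:

```python
import math

# Toy latent-Gaussian setting: counts y_i ~ Poisson(exp(theta)), theta ~ N(0, 1).
y = [3, 4, 2, 5, 3]
S, n = sum(y), len(y)

def log_post(theta):
    """Unnormalised log posterior (constant y! terms dropped throughout)."""
    return S * theta - n * math.exp(theta) - 0.5 * theta ** 2

def laplace_log_evidence():
    """Laplace approximation: second-order expansion of log_post at its mode."""
    theta = 0.0
    for _ in range(50):  # Newton iterations; the posterior is log-concave
        g = S - n * math.exp(theta) - theta   # gradient
        h = -n * math.exp(theta) - 1.0        # curvature (negative)
        theta -= g / h
    h = -n * math.exp(theta) - 1.0
    return log_post(theta) + 0.5 * math.log(2.0 * math.pi / -h)

def grid_log_evidence(lo=-6.0, hi=6.0, steps=4000):
    """Brute-force trapezoid integration of exp(log_post), for comparison."""
    step = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        f = math.exp(log_post(lo + i * step))
        total += f if 0 < i < steps else 0.5 * f
    return math.log(total * step)

print(laplace_log_evidence())
print(grid_log_evidence())  # the two agree closely, as INLA's accuracy suggests
```

INLA nests such approximations, exploring the hyperparameter space numerically while approximating each conditional latent-field posterior this way; the report's contribution concerns keeping that exploration tractable as the hyperparameter dimension grows.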
Objective Bayes testing of Poisson versus inflated Poisson models
, 2008
"... Abstract: The Poisson distribution is often used as a standard model for count data. Quite often, however, such data sets are not well fit by a Poisson ..."
Abstract
The Poisson distribution is often used as a standard model for count data. Quite often, however, such data sets are not well fit by a Poisson
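The comparison this abstract sets up, Poisson versus a zero-inflated alternative, can be sketched with a BIC-based contrast. This is a stand-in for the paper's objective Bayes testing, not its method; the data, seed, and grid below are invented:

```python
import math, random

def poisson_sample(lam, rng):
    """Knuth's algorithm for a Poisson draw (stdlib-only)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def poisson_loglik(data, lam):
    return sum(v * math.log(lam) - lam - math.lgamma(v + 1) for v in data)

def zip_loglik(data, pi, lam):
    """Zero-inflated Poisson: structural zero with prob pi, else Poisson(lam)."""
    ll = 0.0
    for v in data:
        if v == 0:
            ll += math.log(pi + (1 - pi) * math.exp(-lam))
        else:
            ll += math.log(1 - pi) + v * math.log(lam) - lam - math.lgamma(v + 1)
    return ll

rng = random.Random(42)
# Hypothetical counts: 30% structural zeros, Poisson(4) otherwise.
data = [0 if rng.random() < 0.3 else poisson_sample(4.0, rng) for _ in range(200)]

n = len(data)
lam_hat = sum(data) / n                              # Poisson MLE
bic_pois = 1 * math.log(n) - 2 * poisson_loglik(data, lam_hat)

# Crude grid-search fit of the ZIP model (a sketch, not an efficient EM fit):
best = max(((zip_loglik(data, pi / 100, lam / 10), pi / 100, lam / 10)
            for pi in range(1, 90, 2) for lam in range(5, 80)),
           key=lambda t: t[0])
bic_zip = 2 * math.log(n) - 2 * best[0]
print(bic_pois, bic_zip)  # the zero-inflated model wins decisively here
```

With genuine excess zeros the plain Poisson badly mispredicts the zero count, so the inflated model's extra parameter easily pays for itself; the paper's contribution is doing this comparison with objective priors rather than a BIC surrogate.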
Multimodel Identification of Group Structure in Network Data
, 2008
"... This article proposes a method of identifying the number of groups implied by the pattern of ties in a network based on BICcat—an extension of the Bayesian Information Criterion (BIC). The proposed extension is based on a set of assumptions that derive from specific characteristics of the statistica ..."
Abstract
This article proposes a method of identifying the number of groups implied by the pattern of ties in a network based on BICcat, an extension of the Bayesian Information Criterion (BIC). The proposed extension is based on a set of assumptions that derive from specific characteristics of the statistical evaluation of group structures in networks, which diverge from the set of assumptions that underpin most large-scale regression-style empirical social science research. I use a simulation of randomly generated networks from the pair-dependent stochastic blockmodel (Anderson et al. 1992) and the p1 stochastic blockmodel distribution (Wang and Wong 1987), along with a multimodel inference technique (Burnham and Anderson 2004), to demonstrate that BICcat produces less biased estimates of the number of groups implied by the pattern of ties in a network than does BIC.