Results 11 - 20 of 36
A Framework for Constructing Probability Distributions on the Space of Image Segmentations
 Computer Vision and Image Understanding
, 1995
"... The goal of traditional probabilistic approaches to image segmentation has been to derive a single, optimal segmentation, given statistical models for the image formation process. In this paper, we describe a new probabilistic approach to segmentation, in which the goal is to derive a set of plau ..."
Abstract

Cited by 6 (3 self)
The goal of traditional probabilistic approaches to image segmentation has been to derive a single, optimal segmentation, given statistical models for the image formation process. In this paper, we describe a new probabilistic approach to segmentation, in which the goal is to derive a set of plausible segmentation hypotheses and their corresponding probabilities. Because the space of possible image segmentations is too large to represent explicitly, we present a representation scheme that allows the implicit representation of large sets of segmentation hypotheses that have low probability. We then derive a probabilistic mechanism for applying Bayesian, model-based evidence to guide the construction of this representation. One key to our approach is a general Bayesian method for determining the posterior probability that the union of regions is homogeneous, given that the individual regions are homogeneous. This method does not rely on estimation, and properly treats the issu...
Methods for Numerical Integration of High-Dimensional Posterior Densities with Application to Statistical Image Models
"... Numerical computation with Bayesian posterior densities has recently received much attention both in the applied statistics and image processing communities. This paper surveys previous literature and presents new, efficient methods for computing marginal density values for image models that have ..."
Abstract

Cited by 5 (4 self)
Numerical computation with Bayesian posterior densities has recently received much attention both in the applied statistics and image processing communities. This paper surveys previous literature and presents new, efficient methods for computing marginal density values for image models that have been widely considered in computer vision and image processing. The particular models chosen are a Markov random field formulation, implicit polynomial surface models, and parametric polynomial surface models. The computations can be used to make a variety of statistically-based decisions, such as assessing region homogeneity for segmentation, or performing model selection. Detailed descriptions of the methods are provided, along with demonstrative experiments on real imagery.
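One standard tool for this kind of marginal-density computation is the Laplace approximation, which replaces the integrand with a Gaussian centered at the posterior mode. The sketch below is an illustration of that general technique, not code from the paper; the toy normal-normal model, data values, and grid width are all assumptions chosen so the answer can be checked by brute-force quadrature:

```python
import math

# Toy conjugate model (assumed for illustration): y_i ~ N(theta, sigma^2),
# prior theta ~ N(mu0, tau0^2). Target: the marginal density m(y).
y = [1.2, 0.8, 1.5, 0.9, 1.1]
sigma, mu0, tau0 = 1.0, 0.0, 2.0
n = len(y)

def log_joint(theta):
    """log prior + log likelihood, with all normalizing constants kept."""
    lp = -0.5 * math.log(2 * math.pi * tau0 ** 2) - (theta - mu0) ** 2 / (2 * tau0 ** 2)
    ll = sum(-0.5 * math.log(2 * math.pi * sigma ** 2) - (yi - theta) ** 2 / (2 * sigma ** 2)
             for yi in y)
    return lp + ll

# Posterior mode and curvature are closed-form in this conjugate toy model.
prec = n / sigma ** 2 + 1 / tau0 ** 2                      # -(d^2/dtheta^2) log_joint
theta_hat = (sum(y) / sigma ** 2 + mu0 / tau0 ** 2) / prec

# Laplace approximation: log m(y) ~ log_joint(mode) + (1/2) log(2*pi) - (1/2) log(prec)
log_m_laplace = log_joint(theta_hat) + 0.5 * math.log(2 * math.pi) - 0.5 * math.log(prec)

# Brute-force trapezoidal quadrature over a wide grid, as an independent check.
h = 0.001
grid = [theta_hat + (i - 5000) * h for i in range(10001)]
vals = [math.exp(log_joint(t)) for t in grid]
log_m_quad = math.log(h * (sum(vals) - 0.5 * (vals[0] + vals[-1])))
```

Because the toy integrand is exactly Gaussian in theta, the two numbers agree up to quadrature error; for the non-Gaussian image models the paper treats, the Laplace value is only an approximation and more careful methods are needed.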
Streaky Hitting in Baseball
"... The streaky hitting patterns of all regular baseball players during the 2005 season are explored. Patterns of hits/outs, home runs and strikeouts are considered using different measures of streakiness. An adjustment method is proposed that helps in understanding the size of a streakiness measure giv ..."
Abstract

Cited by 2 (1 self)
The streaky hitting patterns of all regular baseball players during the 2005 season are explored. Patterns of hits/outs, home runs and strikeouts are considered using different measures of streakiness. An adjustment method is proposed that helps in understanding the size of a streakiness measure given the player’s ability and number of hitting opportunities. An exchangeable model is used to estimate the hitting abilities of all players, and this model is used to understand the pattern of streakiness of all players in the 2005 season. This exchangeable model, which assumes that all players are consistent with constant probabilities of success, appears to explain much of the observed streaky behavior. But there are some players who appear to exhibit more streakiness than one would predict from the model.
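The "consistent hitter" null comparison described above can be mimicked with a small Monte Carlo sketch: simulate seasons under a constant success probability and see how extreme the observed streak statistic is. Everything below (the batting average, at-bat count, choice of streak statistic, and observed value) is a hypothetical stand-in, not the paper's data or its exact adjustment method:

```python
import random

random.seed(0)

def longest_hit_streak(season):
    """Length of the longest run of successes (hits) in a 0/1 sequence."""
    best = cur = 0
    for hit in season:
        cur = cur + 1 if hit else 0
        best = max(best, cur)
    return best

p_hat = 0.300   # hypothetical estimated hitting ability
n_ab = 500      # hypothetical number of at-bats
obs_streak = 9  # hypothetical observed longest hitting streak

# Reference distribution of the streak statistic under the consistent
# (constant-probability) model, from simulated seasons.
sims = [longest_hit_streak([random.random() < p_hat for _ in range(n_ab)])
        for _ in range(2000)]
p_value = sum(s >= obs_streak for s in sims) / len(sims)
```

A small `p_value` flags a player as streakier than the constant-probability model predicts; the paper's adjustment method additionally accounts for differing abilities and opportunity counts across players.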
Modeling Macro Political Dynamics
, 2006
"... Analyzing macropolitical processes is complicated by four interrelated problems: model scale, endogeneity, persistence, and specification uncertainty. These problems are endemic in the study of political economy, public opinion, international relations, and other kinds of macropolitical research. ..."
Abstract

Cited by 2 (2 self)
Analyzing macro-political processes is complicated by four interrelated problems: model scale, endogeneity, persistence, and specification uncertainty. These problems are endemic in the study of political economy, public opinion, international relations, and other kinds of macro-political research. We show how a Bayesian structural time series approach addresses them. Our illustration is a structurally identified, nine-equation model of the U.S. political-economic system. It combines key features of Erikson, MacKuen and Stimson’s model of the American macropolity (2002) with those of a leading macroeconomic model of the U.S. (Sims and Zha 1998; Leeper, Sims, and Zha 1996). This structural model, with a loose informed prior, yields the best performance in terms of a mean squared error loss criterion and new insights into the dynamics of the American political economy. The model 1) captures the conventional wisdom about the countercyclical nature of monetary policy (Williams 1990), 2) reveals informational sources of approval dynamics: innovations in information variables affect consumer sentiment and approval, and the impacts on consumer sentiment feed forward into subsequent approval changes, 3) finds that the real economy does not have any major impacts on key macropolity variables, and 4) concludes that macropartisanship does not depend on the evolution of the real economy in the short or medium term and only very weakly on informational variables in the long term.
A Bayesian Joinpoint Regression model with an unknown number of break points. http://www.uv.es/mamtnez/preprints/joinpoint.pdf
, 2011
"... Abstract: Joinpoint regression is used to determine the number of segments needed to adequately explain the relationship between two variables. This methodology can be widely applied to real problems but we focus on epidemiological data, the main goal being to uncover changes in the mortality time ..."
Abstract

Cited by 2 (0 self)
Joinpoint regression is used to determine the number of segments needed to adequately explain the relationship between two variables. This methodology can be widely applied to real problems, but we focus on epidemiological data, the main goal being to uncover changes in the mortality time trend of a specific disease under study. Traditionally, joinpoint regression problems have paid little or no attention to the quantification of uncertainty in the estimation of the number of changepoints. In this context, we find that the Bayesian methodology offers a satisfactory way to handle the problem. Nevertheless, this novel approach involves significant difficulties (both theoretical and practical), since it implicitly entails a model selection (or testing) problem. In this study, we face these challenges through i) a novel reparameterization of the model; ii) a conscientious definition of the prior distributions used; and iii) an encompassing approach which allows the use of MCMC simulation-based techniques to derive the results. The resulting methodology is flexible enough to make it possible to consider mortality counts (for epidemiological applications) as Poisson variables. The methodology is applied to the study of annual breast cancer mortality during the period 1980-2007 in Castellón, a province in Spain.
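The paper's approach is fully Bayesian, with MCMC over an unknown number of joinpoints. As a much simpler illustration of the underlying changepoint idea, the sketch below locates a single break in a Poisson rate by maximum likelihood over all split points; note it fits piecewise-constant rates rather than the continuous piecewise-linear trends of true joinpoint regression, and the counts are invented:

```python
import math

def pois_loglik(seg):
    """Poisson log-likelihood of a segment at its MLE rate (the segment mean)."""
    lam = sum(seg) / len(seg)
    if lam == 0:
        return 0.0
    return sum(y * math.log(lam) - lam - math.lgamma(y + 1) for y in seg)

counts = [12, 10, 11, 13, 9, 11, 22, 25, 21, 24, 23]  # hypothetical annual death counts

# Profile the log-likelihood over every possible single break point.
best_k, best_ll = None, float("-inf")
for k in range(1, len(counts)):
    ll = pois_loglik(counts[:k]) + pois_loglik(counts[k:])
    if ll > best_ll:
        best_k, best_ll = k, ll
```

The grid search returns a point estimate only; the Bayesian treatment in the paper instead yields a posterior distribution over both the number and the locations of the breaks, which is exactly the uncertainty quantification the abstract emphasizes.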
Objective Priors for Model Selection in One-Way Random Effects Models. Submitted to The Canadian Journal of Statistics
, 2005
"... It is broadly accepted that the Bayes factor is a key tool in model selection. Nevertheless, it is an important, difficult and still open question which priors should be used to develop objective (or default) Bayes factors. We consider this problem in the context of the oneway random effects model. ..."
Abstract

Cited by 2 (1 self)
It is broadly accepted that the Bayes factor is a key tool in model selection. Nevertheless, it is an important, difficult and still open question which priors should be used to develop objective (or default) Bayes factors. We consider this problem in the context of the one-way random effects model. Arguments based on concepts like orthogonality, predictive matching, and invariance are used to justify a specific form of the priors, in which the (proper) prior for the new parameter (using Jeffreys’ terminology) has to be determined. Two different proposals for this proper prior have been derived: the intrinsic priors and the divergence-based priors, a recently proposed methodology. It will be seen that the divergence-based priors produce consistent Bayes factors. The methods are illustrated on examples and compared with other proposals. Finally, the divergence-based priors and the associated Bayes factor are derived for the unbalanced case.
Calibrating Bayes factor under prior predictive distributions
 Statistica Sinica
, 2005
"... Abstract: The Bayes factor is a popular criterion in Bayesian model selection. Due to the lack of symmetry of the prior predictive distribution of Bayes factor across models, the scale of evidence in favor of one model against another constructed based solely on the observed value of the Bayes facto ..."
Abstract

Cited by 1 (0 self)
The Bayes factor is a popular criterion in Bayesian model selection. Due to the lack of symmetry of the prior predictive distribution of the Bayes factor across models, a scale of evidence in favor of one model against another constructed solely from the observed value of the Bayes factor is inappropriate. To overcome this problem, a novel calibrating value of the Bayes factor based on the prior predictive distributions, and a decision rule based on this calibrating value for selecting the model, are proposed. We further show that the proposed decision rule based on the calibration distribution is equivalent to the surprise-based decision; that is, we choose the model for which the observed Bayes factor is less surprising. Moreover, we demonstrate that the decision rule based on the calibrating value is closely related to the classical rejection region for a standard hypothesis testing problem. An efficient Monte Carlo method is proposed for computing the calibrating value. In addition, we carefully examine the robustness of the decision rule based on the calibration distribution to the choice of imprecise priors under both nested and non-nested models. A data set is used to further illustrate the proposed methodology, and several important extensions are also discussed. Key words and phrases: Calibrating value, critical value, hypothesis testing, imprecise prior, L measure, model selection, Monte Carlo, posterior model probability, pseudo-Bayes factor, P-value.
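The calibration idea is concrete enough to sketch. For simple normal models the Bayes factor has a closed form, and a calibrating value can be taken as a quantile of the Bayes factor's prior predictive distribution under the simpler model, estimated by Monte Carlo. The model pair, sample size, and 95% quantile below are illustrative assumptions, not the paper's exact setup:

```python
import math
import random

random.seed(1)
n = 20  # sample size (assumed)

def bf10(y):
    """Closed-form Bayes factor of M1: y_i ~ N(theta, 1), theta ~ N(0, 1),
    against M0: y_i ~ N(0, 1)."""
    s = sum(y)
    m = len(y)
    return math.exp(s * s / (2 * (m + 1))) / math.sqrt(m + 1)

# Prior predictive distribution of BF10 under M0, by simulation.
null_bfs = sorted(bf10([random.gauss(0, 1) for _ in range(n)]) for _ in range(5000))
crit = null_bfs[int(0.95 * len(null_bfs))]  # calibrating value: 95% quantile (assumed)

# Calibrated decision rule: favor M1 only when the observed BF10 exceeds the
# simulated cutoff, rather than comparing the raw BF10 to a fixed universal scale.
y_obs = [random.gauss(0.8, 1) for _ in range(n)]  # data simulated with a shifted mean
favor_m1 = bf10(y_obs) > crit
```

Note that `crit` lands noticeably above 1 here, which is the point of the calibration: an observed Bayes factor slightly above 1 would not be surprising under M0 and so should not, by itself, count as evidence for M1.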
Generalization of Jeffreys’ Divergence-Based Priors for Bayesian Hypothesis Testing
, 2008
"... In this paper we introduce objective proper prior distributions for hypothesis testing and model selection based on measures of divergence between the competing models; we call them divergence based (DB) priors. DB priors have simple forms and desirable properties, like information (finite sample) c ..."
Abstract

Cited by 1 (0 self)
In this paper we introduce objective proper prior distributions for hypothesis testing and model selection based on measures of divergence between the competing models; we call them divergence-based (DB) priors. DB priors have simple forms and desirable properties, like information (finite sample) consistency; often, they are similar to other existing proposals like the intrinsic priors; moreover, in normal linear model scenarios, they exactly reproduce Jeffreys-Zellner-Siow priors. Most importantly, in challenging scenarios such as irregular models and mixture models, the DB priors are well defined and very reasonable, while alternative proposals are not. We derive approximations to the DB priors as well as MCMC and asymptotic expressions for the associated Bayes factors.
Objective Bayes testing of Poisson versus inflated Poisson models
, 2008
"... Abstract: The Poisson distribution is often used as a standard model for count data. Quite often, however, such data sets are not well fit by a Poisson ..."
Abstract
The Poisson distribution is often used as a standard model for count data. Quite often, however, such data sets are not well fit by a Poisson ...
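Although the abstract is truncated here, its motivating symptom (count data with more zeros than a Poisson model predicts) is easy to illustrate. The following moment-based zero-inflation check is a generic diagnostic, not the paper's objective Bayes test, and the counts are invented:

```python
import math

counts = [0, 0, 0, 1, 2, 0, 3, 0, 1, 0, 4, 0, 2, 0, 0]  # hypothetical count data

lam_hat = sum(counts) / len(counts)           # Poisson MLE of the rate
expected_zeros = math.exp(-lam_hat)           # P(Y = 0) under the fitted Poisson
observed_zeros = sum(c == 0 for c in counts) / len(counts)

excess_zeros = observed_zeros - expected_zeros  # > 0 hints at zero inflation
```

A positive excess only hints at inflation; a formal treatment compares the Poisson and inflated Poisson models directly, which is what an objective Bayes testing framework of the kind described above supplies.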