Results 11–16 of 16
A Framework for Constructing Probability Distributions on the Space of Image Segmentations
Computer Vision and Image Understanding, 1995
Abstract

Cited by 6 (3 self)
The goal of traditional probabilistic approaches to image segmentation has been to derive a single, optimal segmentation, given statistical models for the image formation process. In this paper, we describe a new probabilistic approach to segmentation, in which the goal is to derive a set of plausible segmentation hypotheses and their corresponding probabilities. Because the space of possible image segmentations is too large to represent explicitly, we present a representation scheme that allows the implicit representation of large sets of segmentation hypotheses that have low probability. We then derive a probabilistic mechanism for applying Bayesian, model-based evidence to guide the construction of this representation. One key to our approach is a general Bayesian method for determining the posterior probability that the union of regions is homogeneous, given that the individual regions are homogeneous. This method does not rely on estimation, and properly treats the issu...
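The union-homogeneity question in this abstract can be illustrated with a toy Bayes-factor computation for merging two pixel-intensity regions. The conjugate normal model, hyperparameter values, and function names below are illustrative assumptions, not the paper's actual formulation:

```python
import math

def log_marglik(x, sigma=10.0, mu0=128.0, tau=50.0):
    """Log marginal likelihood of intensities x under a homogeneous-region
    model: x_i ~ N(mu, sigma^2) with the mean mu ~ N(mu0, tau^2) integrated
    out. (sigma, mu0, tau are illustrative hyperparameters.)"""
    n = len(x)
    xbar = sum(x) / n
    ss = sum((v - xbar) ** 2 for v in x)
    v = sigma ** 2 / n + tau ** 2  # predictive variance of the sample mean
    return (-0.5 * n * math.log(2 * math.pi * sigma ** 2)
            - ss / (2 * sigma ** 2)
            + 0.5 * math.log(2 * math.pi * sigma ** 2 / n)
            - 0.5 * math.log(2 * math.pi * v)
            - (xbar - mu0) ** 2 / (2 * v))

def merge_probability(r1, r2):
    """Posterior probability (equal prior odds) that r1 U r2 is a single
    homogeneous region rather than two separately homogeneous regions."""
    log_bf = log_marglik(r1 + r2) - (log_marglik(r1) + log_marglik(r2))
    if log_bf < -700:  # avoid overflow in exp when evidence is overwhelming
        return 0.0
    return 1.0 / (1.0 + math.exp(-log_bf))
```

Because the mean is integrated out analytically, the merge probability needs no point estimate of the region parameters, in the spirit of the estimation-free treatment the abstract mentions.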
Methods for Numerical Integration of High-Dimensional Posterior Densities with Application to Statistical Image Models
In Proc. of the SPIE Conf. on Stochastic Methods in Signal Processing, Image Processing, and Computer Vision, 1993
Abstract

Cited by 4 (3 self)
Numerical computation with Bayesian posterior densities has recently received much attention both in the applied statistics and image processing communities. This paper surveys previous literature and presents new, efficient methods for computing marginal density values for image models that have been widely considered in computer vision and image processing. The particular models chosen are a Markov random field formulation, implicit polynomial surface models, and parametric polynomial surface models. The computations can be used to make a variety of statistically-based decisions, such as assessing region homogeneity for segmentation, or performing model selection. Detailed descriptions of the methods are provided, along with demonstrative experiments on real imagery. 1 Introduction Bayesian analysis has proven to be a powerful tool in many low-level computer vision and image processing applications; however, in many instances this tool is limited by computational requirements im...
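Marginal-density computations of the kind this abstract surveys are commonly attacked with approximations such as Laplace's method. A minimal one-dimensional sketch (my own illustration, not the paper's algorithm; `laplace_log_evidence` and its tuning constants are assumptions):

```python
import math

def laplace_log_evidence(log_post, theta0, h=1e-4, step=0.1, iters=500):
    """One-dimensional Laplace approximation to log of the integral of
    exp(log_post(theta)): climb to the mode by gradient ascent, then use
    the local curvature. Assumes log_post is smooth and unimodal."""
    theta = theta0
    for _ in range(iters):
        grad = (log_post(theta + h) - log_post(theta - h)) / (2 * h)
        theta += step * grad
    # second derivative at the mode; negative at a proper maximum
    curv = (log_post(theta + h) - 2 * log_post(theta)
            + log_post(theta - h)) / h ** 2
    return log_post(theta) + 0.5 * math.log(2 * math.pi / -curv)
```

For a Gaussian log-posterior the approximation is exact; the image models in the paper require the multivariate analogue, with the Hessian determinant replacing the scalar curvature.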
Testing Symmetry in Nonparametric Regression Models
Abstract

Cited by 2 (0 self)
In a recent paper Ahmad and Li (1996) proposed a new test for symmetry of the error distribution in linear regression models and proved asymptotic normality for the distribution of the corresponding test statistic under the null hypothesis and consistency under fixed alternatives. The present paper has three purposes. On the one hand we derive the asymptotic distribution of the statistic considered by Ahmad and Li (1996) under fixed alternatives and demonstrate that asymptotic normality is still valid but with a different rate of convergence. On the other hand we generalize Ahmad and Li's (1996) test of a symmetric error distribution to general nonparametric regression models. Moreover, it is also demonstrated that a bootstrap version of the new test for symmetry has good finite sample properties. AMS Classification: Primary 62G05. Keywords and Phrases: nonparametric regression, goodness-of-fit, test of symmetry, wild bootstrap
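The bootstrap idea can be sketched with a simple sign-flip resampling test on regression residuals. The statistic |mean(e^3)| below is a placeholder asymmetry measure, not the actual Ahmad-Li statistic, and the function name and defaults are assumptions:

```python
import random

def symmetry_pvalue(residuals, n_boot=2000, seed=0):
    """Sign-flip bootstrap test of a symmetric error distribution. Under
    the null, e and -e are equal in distribution, so randomly flipping
    residual signs reproduces the null law of the asymmetry statistic
    |mean(e^3)|."""
    rng = random.Random(seed)
    n = len(residuals)
    stat = abs(sum(e ** 3 for e in residuals) / n)
    exceed = 0
    for _ in range(n_boot):
        flipped = [e if rng.random() < 0.5 else -e for e in residuals]
        if abs(sum(e ** 3 for e in flipped) / n) >= stat:
            exceed += 1
    return (exceed + 1) / (n_boot + 1)
```

A perfectly symmetric residual sample yields a p-value near 1, while a strongly skewed one is rejected; the wild bootstrap in the paper perturbs residuals by random multipliers rather than bare sign flips.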
Nr. 18/2009: Nonparametric analysis of covariance using quantile curves
2009
Departamento de Estatística
Abstract
C. Pereira and J. Stern have recently introduced a measure of evidence of a precise hypothesis consisting of the posterior probability of the set of points having smaller density than the supremum over the hypothesis. The related procedure is seen to be a Bayes test for specific loss functions. The nature of such loss functions and their relation to stylised inference problems are investigated. The dependence of the loss function on the sample is also discussed as well as the consequence of the introduction of Jeffreys’s prior mass for the precise hypothesis on the separability of probability and utility.
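The Pereira-Stern evidence measure described here can be estimated by Monte Carlo over posterior samples. The function name and the toy normal posterior below are illustrative assumptions:

```python
import random

def pereira_stern_ev(samples, log_density, sup_log_density_h0):
    """Monte Carlo estimate of the Pereira-Stern evidence in favor of a
    precise hypothesis: the posterior probability of the set of points
    whose posterior density is smaller than the supremum of the density
    over the hypothesis."""
    below = sum(1 for th in samples if log_density(th) < sup_log_density_h0)
    return below / len(samples)

# Toy posterior: theta ~ N(0, 1); log density up to an additive constant.
rng = random.Random(0)
samples = [rng.gauss(0.0, 1.0) for _ in range(20000)]
log_density = lambda th: -0.5 * th * th

ev_center = pereira_stern_ev(samples, log_density, log_density(0.0))  # H0: theta = 0
ev_tail = pereira_stern_ev(samples, log_density, log_density(3.0))    # H0: theta = 3
```

A hypothesis at the posterior mode (theta = 0) receives evidence near 1, while a far-tail hypothesis (theta = 3) receives evidence near 0; the comparison of log densities is invariant to the omitted normalizing constant.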
The Subjective Aspect of Probability
Abstract
Subjectivity is an integral aspect of all applications of probability. This chapter demonstrates this by showing how the unified informal story of probability, in which a spectator's beliefs in certain events match their frequencies, can be used to understand elementary examples of statistical testing. Is subjective probability a kind of probability, corresponding to a particular interpretation of the mathematical calculus of probability? Or is subjectivity always an integral aspect of probability, even in applications such as statistical testing, where the objective aspects of probability are usually emphasized? In this chapter, I argue that subjectivity is an aspect of all applications of probability. When we enunciate clearly the subjective aspects of supposedly objectivistic applications, the subjectivist critique of these applications loses its force. It is not necessary that these applications be rejected or be replaced with more complicated Bayesian procedures. It is only necessary that they be properly understood. When we learn the mathematics of probability, we learn an informal story in which belief and frequency are unified. This story has many variations, but it usually involves a sequence of experiments in which known odds simultaneously