Results 1–10 of 93
Assessment and Propagation of Model Uncertainty
, 1995
Abstract

Cited by 221 (0 self)
In this paper I discuss a Bayesian approach to solving this problem that has long been available in principle but is only now becoming routinely feasible, by virtue of recent computational advances, and examine its implementation in examples that involve forecasting the price of oil and estimating the chance of catastrophic failure of the U.S. Space Shuttle.
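The model-averaging idea behind propagating model uncertainty can be sketched numerically. The forecasts and log marginal likelihoods below are hypothetical placeholders (the paper's actual examples are not reproduced), and equal prior model probabilities are assumed.

```python
import numpy as np

# Hypothetical setup: three candidate forecasting models, each with a
# point forecast and a made-up log marginal likelihood p(y | M_k).
forecasts = np.array([18.0, 22.0, 25.0])
log_ml = np.array([-10.2, -9.5, -11.0])

# Posterior model probabilities under equal prior model probabilities:
# p(M_k | y) is proportional to p(y | M_k).  Subtract the max for
# numerical stability before exponentiating.
weights = np.exp(log_ml - log_ml.max())
weights /= weights.sum()

# The Bayesian model average propagates model uncertainty into the
# forecast rather than conditioning on a single selected model.
bma_forecast = float(weights @ forecasts)
```

The averaged forecast always lies inside the range of the individual model forecasts, weighted toward the models the data support.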
The Consistency of Posterior Distributions in Nonparametric Problems
 Ann. Statist
, 1996
Abstract

Cited by 132 (4 self)
We give conditions that guarantee that the posterior probability of every Hellinger...
Multiscale Modeling and Estimation of Poisson Processes with Application to Photon-Limited Imaging
 IEEE TRANS. ON INFO. THEORY
, 1999
Abstract

Cited by 72 (10 self)
Many important problems in engineering and science are well-modeled by Poisson processes. In many applications it is of great interest to accurately estimate the intensities underlying observed Poisson data. In particular, this work is motivated by photon-limited imaging problems. This paper studies a new Bayesian approach to Poisson intensity estimation based on the Haar wavelet transform. It is shown that the Haar transform provides a very natural and powerful framework for this problem. Using this framework, a novel multiscale Bayesian prior to model intensity functions is devised. The new prior leads to a simple Bayesian intensity estimation procedure. Furthermore, we characterize the correlation behavior of the new prior and show that it has 1/f spectral characteristics. The new framework is applied to photon-limited image estimation, and its potential to improve nuclear medicine imaging is examined.
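A rough sketch of the multiscale idea (not the paper's exact prior): for Poisson counts on a dyadic grid, the left-child total given a parent total is binomial in the split proportion, so a conjugate Beta prior at each split yields a simple recursive estimator. The function name and the Beta(a, a) choice below are illustrative.

```python
import numpy as np

def multiscale_intensity(counts, total, a=1.0):
    """Recursively split an estimated total intensity down a binary
    partition.  At each node the left-child count given the parent count
    is Binomial(n, rho); with a Beta(a, a) prior on rho we plug in the
    posterior mean (n_left + a) / (n + 2a).  Illustrative sketch only."""
    counts = np.asarray(counts, dtype=float)
    if counts.size == 1:
        return np.array([total])
    half = counts.size // 2
    left, right = counts[:half], counts[half:]
    rho = (left.sum() + a) / (counts.sum() + 2 * a)
    return np.concatenate([
        multiscale_intensity(left, total * rho, a),
        multiscale_intensity(right, total * (1 - rho), a),
    ])

# Observed photon counts on a dyadic grid; the overall intensity is
# estimated by the total count and then split recursively.
y = np.array([0, 2, 4, 10])
lam_hat = multiscale_intensity(y, y.sum())
```

As a → 0 the estimate reduces to the raw counts (the maximum-likelihood solution); larger a shrinks neighbouring bins toward each other, smoothing the intensity.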
A method for combining inference across related nonparametric Bayesian models
, 2004
Abstract

Cited by 57 (2 self)
We consider the problem of combining inference in related nonparametric Bayes models. Analogous to parametric hierarchical models, the hierarchical extension formalizes borrowing strength across the related submodels. In the nonparametric context, modelling is complicated by the fact that the random quantities over which we define the hierarchy are infinite dimensional. We discuss a formal definition of such a hierarchical model. The approach includes a regression at the level of the nonparametric model. For the special case of Dirichlet process mixtures, we develop a Markov chain Monte Carlo scheme to allow efficient implementation of full posterior inference in the given model.
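The infinite-dimensional random quantities mentioned here are draws of random probability measures. For the Dirichlet process special case, one realization can be sketched by truncated stick-breaking; the truncation level K and the standard-normal base measure below are illustrative choices, not part of the paper's hierarchical construction.

```python
import numpy as np

def dp_stick_breaking(alpha, base_draw, K=100, seed=0):
    """Approximate a draw G ~ DP(alpha, G0) by truncating the
    stick-breaking construction at K atoms: v_k ~ Beta(1, alpha),
    w_k = v_k * prod_{j<k} (1 - v_j), and atoms theta_k ~ G0."""
    rng = np.random.default_rng(seed)
    v = rng.beta(1.0, alpha, size=K)
    w = v * np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
    atoms = base_draw(rng, K)
    return w, atoms

# Illustrative base measure G0 = N(0, 1).
weights, atoms = dp_stick_breaking(
    alpha=2.0, base_draw=lambda rng, k: rng.normal(size=k))
```

Smaller alpha concentrates the weights on a few atoms; the truncated weights sum to just under one, with the leftover stick mass vanishing geometrically in K.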
A Statistical Multiscale Framework for Poisson Inverse Problems
, 2000
Abstract

Cited by 53 (4 self)
This paper describes a statistical modeling and analysis method for linear inverse problems involving Poisson data based on a novel multiscale framework. The framework itself is founded upon a multiscale analysis associated with recursive partitioning of the underlying intensity, a corresponding multiscale factorization of the likelihood (induced by this analysis), and a choice of prior probability distribution made to match this factorization by modeling the "splits" in the underlying partition. The class of priors used here has the interesting feature that the "noninformative" member yields the traditional maximum likelihood solution; other choices are made to reflect prior belief as to the smoothness of the unknown intensity. Adopting the expectation-maximization (EM) algorithm for use in computing the MAP estimate corresponding to our model, we find that our model permits remarkably simple, closed-form expressions for the EM update equations. The behavior of our EM algorithm ...
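The abstract notes that the noninformative member of the prior class recovers the traditional maximum-likelihood solution. For a Poisson linear inverse problem that ML solution is the classical EM (Richardson-Lucy) iteration, sketched below as a baseline with a made-up blurring matrix; this is not the paper's MAP estimator.

```python
import numpy as np

def poisson_ml_em(y, A, n_iter=200):
    """EM (Richardson-Lucy) updates for y ~ Poisson(A x), x >= 0:
    x <- x * (A^T (y / Ax)) / (A^T 1).  This is the ML baseline that
    multiscale MAP estimators generalize with smoothness priors."""
    x = np.full(A.shape[1], y.sum() / max(A.sum(), 1e-12))
    col_sums = A.sum(axis=0)
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)
        x = x * (A.T @ ratio) / np.maximum(col_sums, 1e-12)
    return x

# Toy deblurring: a small smoothing matrix and a known intensity;
# the observations are taken noiseless for illustration.
A = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
x_true = np.array([5.0, 1.0, 3.0])
y = A @ x_true
x_hat = poisson_ml_em(y, A)
```

Each update multiplies the current estimate by a data-fit ratio, so nonnegativity is preserved automatically, which is one reason EM is attractive for intensity estimation.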
Modeling Regression Error with a Mixture of Polya Trees
 Journal of the American Statistical Association
, 2001
Abstract

Cited by 33 (5 self)
We model the error distribution in the standard linear model as a mixture of absolutely continuous Polya trees constrained to have median zero. By considering a mixture, we smooth out the partitioning effects of a simple Polya tree, and the predictive error density has a derivative everywhere except zero. The error distribution is centered around a standard parametric family of distributions and may therefore be viewed as a generalization of standard models in which important, data-driven features, such as skewness and multimodality, are allowed. By marginalizing the Polya tree, exact inference is possible up to MCMC error.
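A single (unmixed, unconstrained) Polya tree draw on [0, 1) can be sketched as follows, using the canonical Beta(c m^2, c m^2) splitting variables; the paper's median-zero constraint and mixing over trees are omitted for brevity, so this is only the building block.

```python
import numpy as np

def polya_tree_draw(depth=8, c=1.0, seed=0):
    """Draw a random piecewise-constant density on [0, 1) from a
    finite Polya tree: at level m each set's mass is split between its
    two children by an independent Beta(c * m**2, c * m**2) variable.
    Larger c concentrates the draw near the uniform density."""
    rng = np.random.default_rng(seed)
    mass = np.array([1.0])
    for m in range(1, depth + 1):
        theta = rng.beta(c * m * m, c * m * m, size=mass.size)
        # Interleave child masses so bins stay in left-to-right order.
        mass = np.column_stack([mass * theta, mass * (1 - theta)]).ravel()
    return mass * 2 ** depth   # density values on 2**depth equal bins

density = polya_tree_draw()
```

The Beta parameters growing like m^2 are what make the realizations absolutely continuous, which the abstract relies on.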
Consistency issues in Bayesian Nonparametrics
 IN ASYMPTOTICS, NONPARAMETRICS AND TIME SERIES: A TRIBUTE
, 1998
Kullback-Leibler property of kernel mixture priors in Bayesian density estimation
 Electronic J. Statist
, 2008
Abstract

Cited by 18 (4 self)
Positivity of the prior probability of a Kullback-Leibler neighborhood around the true density, commonly known as the Kullback-Leibler property, plays a fundamental role in posterior consistency. A popular prior for Bayesian estimation is given by a Dirichlet mixture, where the kernels are chosen depending on the sample space and the class of densities to be estimated. The Kullback-Leibler property of the Dirichlet mixture prior has been shown for some special kernels, like the normal density or Bernstein polynomials, under appropriate conditions. In this paper, we obtain easily verifiable sufficient conditions under which a prior obtained by mixing a general kernel possesses the Kullback-Leibler property. We study a wide variety of kernels used in practice, including the normal, t, histogram, and Weibull densities, and show that the Kullback-Leibler property holds if some easily verifiable conditions are satisfied at the true density. This gives a catalog of conditions required for the Kullback-Leibler property, which can be readily used in applications. AMS (2000) subject classification. Primary 62G07, 62G20.
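The Kullback-Leibler property requires that kernel mixtures come arbitrarily close to the true density in KL divergence. The numerical sketch below checks the plausibility of this for a normal-kernel location mixture against a standard normal truth; the bandwidth, sample size, and grid are illustrative choices, and this is an empirical check, not the paper's proof.

```python
import numpy as np

# Grid and true density f0 = N(0, 1).
grid = np.linspace(-8.0, 8.0, 2001)
dx = grid[1] - grid[0]
f0 = np.exp(-grid ** 2 / 2) / np.sqrt(2 * np.pi)

# Equal-weight normal-kernel mixture with bandwidth h, centred at
# points drawn from the truth (a crude stand-in for a Dirichlet
# mixture realization near f0).
rng = np.random.default_rng(0)
centers = rng.normal(size=1000)
h = 0.3
z = (grid[:, None] - centers[None, :]) / h
g = np.exp(-z ** 2 / 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

# KL(f0 || g) approximated by a Riemann sum on the uniform grid;
# a small value means the mixture lies in a KL neighborhood of f0.
kl = float(np.sum(f0 * np.log(f0 / g)) * dx)
```

Shrinking the bandwidth and growing the number of kernels drives the divergence toward zero, which is the behavior the Kullback-Leibler property formalizes.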
Statistical notions of data disclosure avoidance and their relationship to traditional statistical methodology: Data swapping and log-linear models
 Proc. Bureau of the Census
, 1996
Abstract

Cited by 17 (4 self)
For most data releases, especially those from censuses, the U.S. Bureau of the Census has either released data at high levels of aggregation or applied a data disclosure avoidance procedure, such as data swapping or cell suppression, before preparing microdata or tables for release. In this paper, we present a general statistical characterization of the goal of a statistical agency in releasing confidential data subject to the application of disclosure avoidance procedures. We use this characterization to provide a framework for the study of data disclosure avoidance procedures for categorical variables. Consider a sample of n observations on p variables, which may be discrete or continuous. Our general characterization is in terms of the smoothing of a multidimensional empirical distribution function (an ordered version of the data) and sampling from it using bootstrap-like selection. Both the smoothing and the sampling introduce alterations to the data, and thus a bootstrap sample will not necessarily be the same as the original sample; this works to preserve the confidentiality of individuals providing the original data. Two obvious questions are: How well is confidentiality preserved by such a process? Have the smoothing and sampling disguised fundamental relationships among the p variables of interest to others who will work only with the altered data? Rubin (1993) has provided a closely related characterization and approach based on multiple imputation. We explain some of these ideas in greater detail in the context of categorical random variables and compare them to methods in current use for data disclosure avoidance, such as data swapping and cell suppression. We also relate this approach to data disclosure avoidance to the statistical analysis associated with the use of log-linear models for cross-classified categorical data.
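The smoothing-plus-resampling characterization can be sketched for a single continuous variable as a smoothed bootstrap: resample records with replacement (bootstrap-like selection), then perturb each draw to smooth the empirical distribution. The Gaussian kernel, bandwidth, and toy data below are illustrative choices, not the Bureau's actual procedure.

```python
import numpy as np

def smoothed_bootstrap_release(data, h=0.5, seed=0):
    """Produce a synthetic release from one continuous variable:
    bootstrap-like selection from the empirical distribution, then
    Gaussian perturbation of scale h to smooth it.  Released records
    need not match any original record, which is what helps preserve
    respondent confidentiality."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    n = data.shape[0]
    idx = rng.integers(0, n, size=n)          # bootstrap-like selection
    return data[idx] + rng.normal(scale=h, size=n)   # smoothing

# Hypothetical confidential values (e.g., ages of respondents).
original = np.array([31.0, 45.0, 27.0, 52.0, 38.0, 41.0])
released = smoothed_bootstrap_release(original)
```

The bandwidth h trades off disclosure risk against the distortion of relationships in the released data, which is exactly the tension the two questions in the abstract raise.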