Results 1–10 of 16
Bayesian Interpolation
Neural Computation, 1991
Abstract

Cited by 721 (17 self)
Although Bayesian analysis has been in use since Laplace, the Bayesian method of model comparison has only recently been developed in depth. In this paper, the Bayesian approach to regularisation and model comparison is demonstrated by studying the inference problem of interpolating noisy data. The concepts and methods described are quite general and can be applied to many other problems. Regularising constants are set by examining their posterior probability distribution. Alternative regularisers (priors) and alternative basis sets are objectively compared by evaluating the evidence for them. `Occam's razor' is automatically embodied by this framework. The way in which Bayes infers the values of regularising constants and noise levels has an elegant interpretation in terms of the effective number of parameters determined by the data set. This framework is due to Gull and Skilling. 1 Data modelling and Occam's razor: In science, a central task is to develop and compare models to a...
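The evidence-based model comparison this abstract describes can be sketched in a minimal linear-Gaussian setting, where the marginal likelihood (evidence) is available in closed form. This is an illustrative sketch, not the paper's own code; the model, prior scale `alpha`, and noise level `sigma` are assumptions chosen for the example.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Noisy data from a quadratic: the "true" model has polynomial degree 2.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 30)
y = 1.0 - 2.0 * x + 1.5 * x**2 + rng.normal(0.0, 0.1, x.size)

def log_evidence(degree, alpha=1.0, sigma=0.1):
    """Log marginal likelihood of a polynomial model with a Gaussian
    weight prior N(0, alpha^{-1} I) and known noise level sigma.
    For this linear-Gaussian model, integrating the weights out gives
    y ~ N(0, sigma^2 I + alpha^{-1} Phi Phi^T) in closed form."""
    Phi = np.vander(x, degree + 1, increasing=True)   # basis functions
    cov = sigma**2 * np.eye(x.size) + (1.0 / alpha) * Phi @ Phi.T
    return multivariate_normal(mean=np.zeros(x.size), cov=cov).logpdf(y)

# Comparing degrees: the evidence automatically penalises complexity
# (Occam's razor), so degree 2 typically attains the highest value here.
for d in range(1, 6):
    print(d, round(log_evidence(d), 2))
```

Note that no complexity penalty is added by hand; it emerges from integrating over the prior, which is the point the abstract makes.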
Information-theoretic asymptotics of Bayes methods
IEEE TRANSACTIONS ON INFORMATION THEORY, 1990
Abstract

Cited by 142 (12 self)
In the absence of knowledge of the true density function, Bayesian models take the joint density function for a sequence of n random variables to be an average of densities with respect to a prior. We examine the relative entropy distance D_n between the true density and the Bayesian density and show that the asymptotic distance is (d/2) log n + c, where d is the dimension of the parameter vector. Therefore, the relative entropy rate D_n/n converges to zero at rate (log n)/n. The constant c, which we explicitly identify, depends only on the prior density function and the Fisher information matrix evaluated at the true parameter value. Consequences are given for density estimation, universal data compression, composite hypothesis testing, and stock-market portfolio selection.
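The expansion the abstract describes can be written out explicitly. The form of the constant below follows the well-known Clarke–Barron result and is reproduced here as a guide; it should be checked against the paper itself:

```latex
D_n \;=\; D\!\left(p^{(n)}_{\theta}\,\middle\|\,m^{(n)}\right)
    \;=\; \frac{d}{2}\log\frac{n}{2\pi e}
    \;+\; \frac{1}{2}\log\det I(\theta)
    \;-\; \log w(\theta) \;+\; o(1),
```

where \(w\) is the prior density and \(I(\theta)\) the Fisher information matrix, both evaluated at the true parameter; it follows immediately that \(D_n/n = O((\log n)/n) \to 0\).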
Dutch book in simple multivariate normal prediction: Another look
Abstract

Cited by 1 (0 self)
Abstract: In this expository paper we describe a relatively elementary method of establishing the existence of a Dutch book in a simple multivariate normal prediction setting. The method involves deriving a nonstandard predictive distribution that is motivated by invariance. This predictive distribution satisfies an interesting identity which in turn yields an elementary demonstration of the existence of a Dutch book for a variety of possible predictive distributions.
UNCERTAINTY QUANTIFICATION AND INTEGRATION IN ENGINEERING SYSTEMS
2012
Abstract

Cited by 1 (0 self)
First and foremost, I express my heartfelt gratitude to my advisor Prof. Sankaran Mahadevan, for his expert professional advice and guidance throughout my graduate studies at Vanderbilt University. His constant encouragement has been a great source of motivation not only in my academic career but also in my personal life. I am extremely indebted to him, for all that I have learnt during my graduate education, and look forward to continuing this relationship for the rest of my professional career. I would also like to acknowledge the help and support I received from my committee members Prof. Prodyot K. Basu, Prof. Gautam Biswas, Prof. Bruce Cooil, and Prof. Mark P. McDonald, who provided useful comments and suggestions during this research. I express my sincere thanks to the various sponsor agencies that funded
Poincaré’s Odds
2013
Abstract
Abstract. This paper is devoted to Poincaré's work in probability. Though the subject does not represent a large part of the mathematician's achievements, it provides significant insight into the evolution of Poincaré's thought on several important matters, such as the changes in physics implied by statistical mechanics and molecular theories. After sketching the general historical context of this evolution, I focus on several important steps in Poincaré's texts dealing with probability theory, and finally consider how his legacy was developed by the next generation.
Fast Estimation of Expected Information Gains for Bayesian Experimental Designs Based on Laplace Approximations
Abstract
Shannon-type expected information gain can be used to evaluate the relevance of a proposed experiment subject to uncertainty. The estimation of such gain, however, relies on a double-loop integration, and its numerical evaluation in multidimensional cases, e.g. by Monte Carlo sampling, is computationally too expensive for realistic physical models, especially those involving the solution of partial differential equations. In this work, we present a new methodology, based on the Laplace approximation for the integration of the posterior probability density function (pdf), to accelerate the estimation of the expected information gains in the model parameters and predictive quantities of interest. We obtain a closed-form approximation of the inner integral and the corresponding dominant error term in the cases where the parameters are determined by the experiment, such that only a single-loop integration is needed to estimate the expected information gain. To deal with the issue of dimensionality in a complex problem, we use a sparse quadrature for the integration over the prior pdf. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear numerical examples, including the design of the scalar parameter in a one-dimensional cubic polynomial function, the design of the same scalar in a modified function with two indistinguishable parameters, the resolution width and measurement time for a blurred single-peak spectrum, and the boundary source locations for impedance tomography in a square domain.
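The double-loop estimator that this abstract describes as too expensive can be sketched for a toy scalar design problem. This is an illustrative nested Monte Carlo estimator, not the paper's own code; the linear model, prior, and noise level `sigma` are assumptions chosen so the answer can be checked against the known closed form.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_lik(y, theta, x, sigma=0.1):
    """Gaussian log-likelihood for the toy model y = theta * x + noise."""
    return -0.5 * ((y - theta * x) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def nested_mc_eig(x, N=2000, M=2000, sigma=0.1):
    """Double-loop Monte Carlo estimate of the expected information gain
    for a design x, with a standard normal prior on theta."""
    # Outer loop: sample theta from the prior and simulate data.
    theta = rng.standard_normal(N)
    y = theta * x + sigma * rng.standard_normal(N)
    # Inner loop: fresh prior samples to estimate the evidence p(y_i).
    theta_in = rng.standard_normal(M)
    # log p(y_i | theta_j) for all (i, j) pairs; this O(N*M) cost is
    # exactly what a Laplace approximation of the inner integral avoids.
    ll = log_lik(y[:, None], theta_in[None, :], x, sigma)
    log_evidence = np.logaddexp.reduce(ll, axis=1) - np.log(M)
    return np.mean(log_lik(y, theta, x, sigma) - log_evidence)

print(round(nested_mc_eig(1.0), 2))
```

For this linear-Gaussian toy the gain is the mutual information 0.5 * log(1 + x^2 / sigma^2), so the nested estimate at x = 1, sigma = 0.1 should land near 0.5 * log(101) ≈ 2.31.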
A Laplace Method for Under-Determined Bayesian Optimal Experimental Designs
Abstract
In [1], a new method based on the Laplace approximation was developed to accelerate the estimation of the post-experimental expected information gains (Kullback-Leibler divergence) in model parameters and predictive quantities of interest in the Bayesian framework. A closed-form asymptotic approximation of the inner integral and the order of the corresponding dominant error term were obtained in the cases where the parameters are determined by the experiment. In this work, we extend that method to the general case where the model parameters cannot be determined completely by the data from the proposed experiments. We carry out the Laplace approximations in the directions orthogonal to the null space of the Jacobian matrix of the data model with respect to the parameters, so that the information gain can be reduced to an integration against the marginal density of the transformed parameters that are not determined by the experiments. Furthermore, the expected information gain can be approximated by an integration over the prior, where the integrand is a function of the posterior covariance matrix projected over the aforementioned orthogonal directions. To deal with the issue of dimensionality in a complex problem, we use either
Expectations and Variances of Nonpositive Functions
Thomas Bayes (1701–1761), shown in
Abstract
the upper left corner of Figure 1, first discovered Bayes' theorem in a paper that was published in 1764, three years after his death, as the name suggests. However, Bayes, in his theorem, used uniform priors [1]. Pierre-Simon Laplace (1749–1827), shown in the lower right corner of Figure 1, apparently unaware of Bayes' work, discovered the same theorem in more general form in a memoir he wrote at the age of 25 and showed its wide applicability [2]. Regarding these issues S. M. Stigler writes: The influence of this memoir was immense. It was from here that "Bayesian" ideas first spread through the mathematical world, as Bayes's own article was ignored until 1780 and played no important role in scientific debate until the 20th century. It was also this article of Laplace's that introduced the mathematical techniques for the asymptotic analysis of posterior distributions that are still employed today. And it was here that the earliest example of optimum estimation can be found, the derivation and characterization of an estimator that minimized a particular measure of posterior expected loss. After more than two centuries, we mathematicians and statisticians can not only recognize our roots in this masterpiece of our science, we can still learn from it. [3]