Results 1 - 5 of 5
Approximate Bayesian computation: A nonparametric perspective
Journal of the American Statistical Association, 2010
Abstract

Cited by 42 (3 self)
Approximate Bayesian Computation is a family of likelihood-free inference techniques that are well-suited to models defined in terms of a stochastic generating mechanism. In a nutshell, Approximate Bayesian Computation proceeds by computing summary statistics s_obs from the data and simulating synthetic summary statistics for different values of the parameter Θ. The posterior distribution is then approximated by an estimator of the conditional density g(Θ | s_obs). In this paper, we derive the asymptotic bias and variance of the standard estimators of the posterior distribution, which are based on rejection sampling and linear adjustment. Additionally, we introduce an original estimator of the posterior distribution based on quadratic adjustment and show that its bias contains a smaller number of terms than that of the estimator with linear adjustment. Although we find that the estimators with adjustment are not universally superior to the estimator based on rejection sampling, they can achieve better performance when there is a nearly homoscedastic relationship between the summary statistics and the parameter of interest. Last, we present model selection in Approximate Bayesian Computation and provide asymptotic properties of two estimators of the model probabilities. As for parameter estimation, the asymptotic results underscore the importance of the curse of dimensionality in Approximate Bayesian Computation. Numerical simulations in a simple normal model confirm that the estimators may become less efficient as the number of summary statistics increases. Supplemental materials containing the details of the proofs are available online.
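The rejection-plus-linear-adjustment scheme the abstract describes can be sketched in a few lines. This is an illustrative toy example only: the normal model, the flat prior on (-5, 5), the tolerance eps, and the sample size are all assumptions for the sketch, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def rejection_abc(s_obs, n_sims=100_000, eps=0.05, n=50):
    """Rejection ABC: keep prior draws whose simulated summary lies near s_obs."""
    thetas = rng.uniform(-5.0, 5.0, size=n_sims)          # draws from a flat prior (assumed)
    # Toy model: data are N(theta, 1); the summary statistic is the sample mean.
    sims = rng.normal(thetas, 1.0, size=(n, n_sims)).mean(axis=0)
    keep = np.abs(sims - s_obs) < eps
    return thetas[keep], sims[keep]

# "Observed" data: 50 draws from N(1, 1), summarised by their mean.
s_obs = rng.normal(1.0, 1.0, size=50).mean()
theta_acc, s_acc = rejection_abc(s_obs)

# Linear adjustment: regress accepted thetas on (s - s_obs) and subtract the fit,
# moving each draw to where it would sit if its summary equalled s_obs.
beta = np.polyfit(s_acc - s_obs, theta_acc, 1)[0]
theta_adj = theta_acc - beta * (s_acc - s_obs)
```

The accepted draws approximate the posterior g(Θ | s_obs); the adjusted draws typically concentrate more tightly when the θ-versus-summary relationship is close to linear and homoscedastic, which is exactly the regime the abstract singles out.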
Bayesian Model Robustness via Disparities
Abstract

Cited by 2 (0 self)
All computations were performed in R. This paper develops a methodology for robust Bayesian inference through the use of disparities. Metrics such as Hellinger distance and negative exponential disparity have a long history in robust estimation in frequentist inference. We demonstrate that an equivalent robustification may be made in Bayesian inference by substituting an appropriately scaled disparity for the log likelihood, to which standard Markov chain Monte Carlo methods may then be applied. A particularly appealing property of minimum-disparity methods is that, while they yield robustness, the resulting parameter estimates are also efficient when the posited probabilistic model is correct. We demonstrate that a similar property holds for disparity-based Bayesian inference. We further show that in the Bayesian setting it is also possible to extend these methods to robustify regression models, random-effects distributions and other hierarchical models. The methods are demonstrated on real-world data.
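The substitution the abstract describes — a scaled disparity standing in for the log likelihood inside an otherwise standard Metropolis sampler — can be sketched as follows. The normal location model, the hand-rolled Gaussian kernel density estimate, the evaluation grid, and the -2n scaling of the squared Hellinger distance are all illustrative assumptions for this sketch, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=200)   # toy "observed" sample; true mean 2.0
n = len(data)

grid = np.linspace(-4.0, 8.0, 400)
dx = grid[1] - grid[0]

# Nonparametric density estimate of the data (Gaussian KDE, Silverman bandwidth).
bw = 1.06 * data.std() * n ** (-1 / 5)
g_vals = np.exp(-0.5 * ((grid[:, None] - data[None, :]) / bw) ** 2).sum(axis=1)
g_vals /= n * bw * np.sqrt(2 * np.pi)

def disparity_loglik(mu):
    """-2n times the squared Hellinger distance between N(mu, 1) and the KDE.

    This scaled disparity plays the role of the log likelihood (scaling assumed).
    """
    f_vals = np.exp(-0.5 * (grid - mu) ** 2) / np.sqrt(2 * np.pi)
    hd2 = np.sum((np.sqrt(f_vals) - np.sqrt(g_vals)) ** 2) * dx
    return -2.0 * n * hd2

def metropolis(n_iter=5000, step=0.2):
    """Random-walk Metropolis targeting exp(disparity_loglik) under a flat prior."""
    mu, cur = 0.0, disparity_loglik(0.0)
    chain = []
    for _ in range(n_iter):
        prop = mu + step * rng.normal()
        new = disparity_loglik(prop)
        if np.log(rng.uniform()) < new - cur:   # standard MH accept/reject
            mu, cur = prop, new
        chain.append(mu)
    return np.array(chain)

chain = metropolis()
```

Because only the (log-)likelihood evaluation changes, any off-the-shelf MCMC machinery can be reused; outliers in the data perturb the KDE only locally, which is where the robustness comes from.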
Consistency, Efficiency and Robustness of Conditional Disparity Methods, 2011
Abstract
This report demonstrates the consistency and asymptotic normality of minimum-disparity estimators based on conditional density estimates. In particular, it provides L1 consistency results for conditional density estimators, both unrestricted and under homoscedastic model restrictions. It defines a generic formulation of disparity estimates in conditionally-specified models and demonstrates their asymptotic consistency and normality. In particular, for regression models with more than one continuous response or covariate, we demonstrate that disparity estimators based on unrestricted conditional density estimates have a bias that is larger than order n^(-1/2); for univariate homoscedastic models conditioned on a univariate continuous covariate, however, this bias can be removed.

1 Framework and Assumptions

Throughout the following, we assume that we observe {Xn1(ω), Xn2(ω), Yn1(ω), Yn2(ω), n ≥ 1}, i.i.d. random variables with Xn1(ω) ∈ R^dx, Xn2(ω) ∈ Sx, Yn1(ω) ∈ R^dy, Yn2(ω) ∈ Sy for countable sets Sx and Sy, with joint distribution

g(x1, x2, y1, y2) = P(X2 = x2, Y2 = y2) P(X1 ∈ dx1, Y1 ∈ dy1 | x2, y2),

and define the marginal and conditional densities

h(x1, x2) = ∑_{y2 ∈ Sy} ∫ g(x1, x2, y1, y2) dy1,
f(y1, y2 | x1, x2) = g(x1, x2, y1, y2) / h(x1, x2)

on the support of (x1, x2). Further, for a scalar Yn1 ∈ R, removing Yn2 (or setting it identically to 0), we define the conditional expectation

m(x1, x2) = ∫ y1 f(y1 | x1, x2) dy1

and note that in homoscedastic regression models we write, for some density f*(e),

f(y1 | x1, x2) = f*(y1 − m(x1, x2)).   (1.1)

Within this context, the use of disparity methods to estimate parameters in linear regression was treated in Pak and Basu (1998) from the point of view of placing a disparity on the score equations. We take a considerably more direct approach here. The following regularity structures may be assumed in the theorems below:

(D1) g is bounded and continuous in x1 and y1.
(D2) All third derivatives of g with respect to x1 and y1 exist and are continuous and bounded.
(D3) The support of h, X ⊂ R^dx ⊗ Sx, is compact and h_- = inf_{(x1, x2) ∈ X} h(x1, x2) > 0.

We note that under these conditions, continuity of h and f in x1 and y1 is inherited from g. In the case of homoscedastic models we also assume
Kinematically Optimised Predictions of Object Motion
Abstract
Abstract — Predicting the motions of rigid objects under contacts is a necessary precursor to planning robot manipulation of objects. On the one hand, physics-based rigid-body simulations are used; on the other, learning approaches are being developed. The advantage of physics simulations is that, because they explicitly perform collision checking, they respect kinematic constraints and so produce physically plausible predictions. The advantage of learning approaches is that they can capture the effects on motion of unobservable parameters, such as mass distribution and frictional coefficients, thus producing more accurate predicted trajectories. This paper shows how to bring together the advantages of both approaches to achieve learned simulators of specific objects that outperform previous learning approaches. Our approach employs a fast simplified collision checker and a learning method. The learner predicts trajectories for the object; these are then optimised post-prediction to minimise interpenetrations according to the collision checker. In addition, we show that cleaning the training data prior to learning can also improve performance. Combining both approaches results in consistently strong prediction performance. The new simulator outperforms previous learning-based approaches on a single-contact push manipulation prediction task. We also present results showing that the method works for multi-contact manipulation, for which rigid-body simulators are notoriously unstable.