Results 11–20 of 266
Efficient Simulation from the Multivariate Normal and Student-t Distributions Subject to Linear Constraints and the Evaluation of Constraint Probabilities
, 1991
Abstract

Cited by 132 (8 self)
The construction and implementation of a Gibbs sampler for efficient simulation from the truncated multivariate normal and Student-t distributions is described. It is shown how the accuracy and convergence of integrals based on the Gibbs sample may be assessed, and how an estimate of the probability of the constraint set under the unrestricted distribution may be produced. Keywords: Bayesian inference; Gibbs sampler; Monte Carlo; multiple integration; truncated normal. This paper was prepared for presentation at the meeting Computing Science and Statistics: the Twenty-Third Symposium on the Interface, Seattle, April 22-24, 1991. Research assistance from Zhenyu Wang and financial support from National Science Foundation Grant SES-8908365 are gratefully acknowledged. The software for the examples may be requested by electronic mail, and will be returned by that medium. 1. Introduction The generation of random samples from a truncated multivariate normal distribution, that is, a ...
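The full-conditional structure the abstract describes can be sketched for a toy case: a bivariate standard normal with correlation rho, truncated to the positive orthant. The rejection step below is a naive stand-in for the inverse-CDF draws from the truncated univariate conditionals that an efficient implementation would use; the correlation value, bounds, and starting point are illustrative.

```python
import math
import random

def sample_trunc_normal(mu, sigma, lo, hi, rng):
    # Naive rejection from the untruncated normal; adequate when the
    # truncation region carries non-negligible mass (an efficient
    # sampler would instead invert the truncated CDF for extreme tails).
    while True:
        x = rng.gauss(mu, sigma)
        if lo <= x <= hi:
            return x

def gibbs_truncated_bvn(rho, n_draws, seed=0):
    # Gibbs sampler for a bivariate standard normal with correlation
    # rho, truncated to the positive orthant x1, x2 >= 0. Each full
    # conditional is a univariate truncated normal.
    rng = random.Random(seed)
    x1, x2 = 1.0, 1.0
    cond_sd = math.sqrt(1.0 - rho * rho)
    draws = []
    for _ in range(n_draws):
        x1 = sample_trunc_normal(rho * x2, cond_sd, 0.0, math.inf, rng)
        x2 = sample_trunc_normal(rho * x1, cond_sd, 0.0, math.inf, rng)
        draws.append((x1, x2))
    return draws
```

Averaging functions of these draws then approximates integrals over the constrained distribution, as in the paper.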
Data fusion for visual tracking with particles
 Proc. IEEE
, 2004
Abstract

Cited by 128 (2 self)
Abstract—The effectiveness of probabilistic tracking of objects in image sequences has been revolutionized by the development of particle filtering. Whereas Kalman filters are restricted to Gaussian distributions, particle filters can propagate more general distributions, albeit only approximately. This is of particular benefit in visual tracking because of the inherent ambiguity of the visual world that stems from its richness and complexity. One important advantage of the particle filtering framework is that it allows the information from different measurement sources to be fused in a principled manner. Although this fact has been acknowledged before, it has not been fully exploited within a visual tracking context. Here we introduce generic importance sampling mechanisms for data fusion and discuss them for fusing color with either stereo sound, for teleconferencing, or with motion, for surveillance with a still camera. We show how each of the three cues can be modeled by an appropriate data likelihood function, and how the intermittent cues (sound or motion) are best handled by generating proposal distributions from their likelihood functions. Finally, the effective fusion of the cues by particle filtering is demonstrated on real teleconference and surveillance data. Index Terms — Visual tracking, data fusion, particle filters, sound, color, motion
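The fusion recipe the abstract outlines, proposing from the intermittent cue's likelihood and weighting by the full set of cue likelihoods, can be sketched in one dimension. All cue models here (Gaussian likelihoods, the noise scales, the random-walk dynamics) are hypothetical stand-ins for the paper's actual color, sound, and motion models.

```python
import math
import random

def gauss_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def _pick(weights, rng):
    # Draw one index from normalized weights (multinomial resampling).
    u, c = rng.random(), 0.0
    for i, w in enumerate(weights):
        c += w
        if u <= c:
            return i
    return len(weights) - 1

def fuse_step(particles, color_obs, sound_obs, rng,
              color_sd=0.2, sound_sd=0.5):
    # One importance-sampling fusion step in 1-D: propose new particle
    # positions from the intermittent cue's likelihood (here "sound"),
    # then weight by the cue likelihoods times a dynamics prior,
    # divided by the proposal density.
    new, weights = [], []
    for x_prev in particles:
        x = rng.gauss(sound_obs, sound_sd)   # proposal = sound likelihood
        prior = gauss_pdf(x, x_prev, 0.3)    # random-walk dynamics
        lik = gauss_pdf(color_obs, x, color_sd) * gauss_pdf(sound_obs, x, sound_sd)
        w = lik * prior / gauss_pdf(x, sound_obs, sound_sd)
        new.append(x)
        weights.append(w)
    total = sum(weights)
    weights = [w / total for w in weights]
    return [new[_pick(weights, rng)] for _ in particles]
```

Repeating the step with both cues pointing at the same location pulls the particle cloud toward it, which is the fusion behavior the paper demonstrates on real data.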
Policy Recognition in the Abstract Hidden Markov Model
 Journal of Artificial Intelligence Research
, 2002
Abstract

Cited by 121 (16 self)
In this paper, we present a method for recognising an agent's behaviour in dynamic, noisy, uncertain domains, and across multiple levels of abstraction. We term this problem online plan recognition under uncertainty and view it generally as probabilistic inference on the stochastic process representing the execution of the agent's plan. Our contributions in this paper are twofold. In terms of probabilistic inference, we introduce the Abstract Hidden Markov Model (AHMM), a novel type of stochastic process, provide its dynamic Bayesian network (DBN) structure and analyse the properties of this network. We then describe an application of the Rao-Blackwellised Particle Filter to the AHMM which allows us to construct an efficient, hybrid inference method for this model. In terms of plan recognition, we propose a novel plan recognition framework based on the AHMM as the plan execution model. The Rao-Blackwellised hybrid inference for AHMM can take advantage of the independence properties inherent in a model of plan execution, leading to an algorithm for online probabilistic plan recognition that scales well with the number of levels in the plan hierarchy. This illustrates that while stochastic models for plan execution can be complex, they exhibit special structures which, if exploited, can lead to efficient plan recognition algorithms. We demonstrate the usefulness of the AHMM framework via a behaviour recognition system in a complex spatial environment using distributed video surveillance data.
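The core idea behind Rao-Blackwellisation, sample only the part of the model with no closed form and integrate the rest analytically, can be shown on a toy problem. Here f(x, y) = x + y with y | x ~ N(x, 1), so E[f | x] = 2x exactly; this is a deliberately simple stand-in for the paper's hybrid inference, where the conditional computation runs over the plan hierarchy rather than a Gaussian.

```python
import random

def rb_estimate(n, seed=0):
    # Compare a crude Monte Carlo estimator of E[x + y] against its
    # Rao-Blackwellised version, which replaces the sampled y with the
    # closed-form conditional expectation E[x + y | x] = 2x.
    rng = random.Random(seed)
    plain, rb = [], []
    for _ in range(n):
        x = rng.gauss(0, 1)
        y = rng.gauss(x, 1)
        plain.append(x + y)   # crude estimator: samples both x and y
        rb.append(2 * x)      # Rao-Blackwellised: y integrated out

    def var(v):
        m = sum(v) / len(v)
        return sum((t - m) ** 2 for t in v) / len(v)

    return var(plain), var(rb)
```

The Rao-Blackwellised estimator has strictly smaller variance (here 4 vs. 5 in theory), which is exactly the payoff the paper exploits to scale inference with the number of hierarchy levels.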
Assessment and Propagation of Model Uncertainty
, 1995
Abstract

Cited by 113 (0 self)
In this paper I discuss a Bayesian approach to solving this problem that has long been available in principle but is only now becoming routinely feasible, by virtue of recent computational advances, and examine its implementation in examples that involve forecasting the price of oil and estimating the chance of catastrophic failure of the U.S. Space Shuttle.
Markov Chain Monte Carlo Simulation Methods in Econometrics
, 1993
Abstract

Cited by 91 (5 self)
We present several Markov chain Monte Carlo simulation methods that have been widely used in recent years in econometrics and statistics. Among these is the Gibbs sampler, which has been of particular interest to econometricians. Although the paper summarizes some of the relevant theoretical literature, its emphasis is on the presentation and explanation of applications to important models that are studied in econometrics. We include a discussion of some implementation issues, the use of the methods in connection with the EM algorithm, and how the methods can be helpful in model specification questions. Many of the applications of these methods are of particular interest to Bayesians, but we also point out ways in which frequentist statisticians may find the techniques useful.
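Alongside the Gibbs sampler, the survey's family of MCMC methods includes Metropolis-type algorithms. A generic random-walk Metropolis sketch, targeting an arbitrary log-density and not tied to any particular econometric model in the paper, looks like this:

```python
import math
import random

def rw_metropolis(log_target, x0, n, step, seed=0):
    # Random-walk Metropolis: propose x' = x + N(0, step^2) and accept
    # with probability min(1, target(x') / target(x)), computed on the
    # log scale for numerical stability.
    rng = random.Random(seed)
    x, draws = x0, []
    for _ in range(n):
        prop = x + rng.gauss(0, step)
        if math.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop
        draws.append(x)   # on rejection, the current point is repeated
    return draws
```

With a standard-normal log-target, the draws' mean and variance converge to 0 and 1, the kind of sanity check that precedes applying such samplers to posterior distributions of model parameters.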
Numerical Techniques for Maximum Likelihood Estimation of Continuous-Time Diffusion Processes
 JOURNAL OF BUSINESS AND ECONOMIC STATISTICS
, 2001
Abstract

Cited by 87 (0 self)
Stochastic differential equations often provide a convenient way to describe the dynamics of economic and financial data, and a great deal of effort has been expended searching for efficient ways to estimate models based on them. Maximum likelihood is typically the estimator of choice; however, since the transition density is generally unknown, one is forced to approximate it. The simulation-based approach suggested by Pedersen (1995) has great theoretical appeal, but previously available implementations have been computationally costly. We examine a variety of numerical techniques designed to improve the performance of this approach. Synthetic data generated by a CIR model with parameters calibrated to match monthly observations of the U.S. short-term interest rate are used as a test case. Since the likelihood function of this process is known, the quality of the approximations can be easily evaluated. On data sets with 1000 observations, we are able to approximate the maximum likelihood estimator with negligible error in well under one minute. This represents something on the order of a 10,000-fold reduction in computational effort as compared to implementations without these enhancements. With other parameter settings designed to stress the methodology, performance remains strong. These ideas are easily generalized to multivariate settings and (with some additional work) to latent variable models. To illustrate, we estimate a simple stochastic volatility model of the U.S. short-term interest rate.
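The Pedersen-style building block referenced here is a simulated transition density: subdivide the observation interval, simulate Euler sub-steps up to the last one, and average the closed-form Gaussian density of the final Euler step at the target point. The sketch below does this for a CIR process; the parameter values and the crude positivity clamp are illustrative, and none of the paper's variance-reduction enhancements are included.

```python
import math
import random

def cir_transition_mc(x0, x1, dt, kappa, theta, sigma,
                      n_sub=8, n_sim=500, seed=0):
    # Monte Carlo estimate of the transition density p(x1 | x0, dt) for
    # the CIR diffusion dX = kappa (theta - X) dt + sigma sqrt(X) dW.
    rng = random.Random(seed)
    h = dt / n_sub
    acc = 0.0
    for _ in range(n_sim):
        x = x0
        for _ in range(n_sub - 1):   # Euler sub-steps to time dt - h
            x += (kappa * (theta - x) * h
                  + sigma * math.sqrt(max(x, 0.0)) * rng.gauss(0, math.sqrt(h)))
            x = max(x, 1e-12)        # crude positivity fix for the sketch
        # Gaussian (Euler) density of the final step, evaluated at x1.
        mu = x + kappa * (theta - x) * h
        var = sigma * sigma * x * h
        acc += math.exp(-0.5 * (x1 - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)
    return acc / n_sim
```

Summing the log of this quantity over consecutive observation pairs gives the simulated log-likelihood that the paper's numerical techniques accelerate.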
Regeneration in Markov Chain Samplers
, 1994
Abstract

Cited by 84 (5 self)
Markov chain sampling has received considerable attention in the recent literature, in particular in the context of Bayesian computation and maximum likelihood estimation. This paper discusses the use of Markov chain splitting, originally developed as a tool for the theoretical analysis of general state space Markov chains, to introduce regeneration times into Markov chain samplers. This allows the use of regenerative methods for analyzing the output of these samplers, and can also provide a useful diagnostic of the performance of the samplers. The general approach is applied to several different samplers and is illustrated in a number of examples.

1 Introduction

In Markov chain Monte Carlo, a distribution π is examined by obtaining sample paths from a Markov chain constructed to have equilibrium distribution π. This approach was introduced by Metropolis et al. (1953) and has recently received considerable attention as a method for examining posterior distributions in Bayesian infer...
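The splitting construction can be sketched on a finite-state chain whose kernel satisfies the minorization P(x, ·) >= eps · nu(·) for all x: at each step, flip a coin with probability eps and, on success, draw the next state from nu, which is a regeneration; otherwise draw from the residual kernel. The transition matrix, minorization measure, and eps below are toy inputs chosen to satisfy the minorization, not examples from the paper.

```python
import random

def split_chain(P, nu, eps, n, seed=0):
    # Nummelin-style splitting: regeneration times cut the sample path
    # into i.i.d. tours, enabling regenerative output analysis.
    # Requires P[x][j] >= eps * nu[j] for every x, j.
    rng = random.Random(seed)

    def draw(probs):
        u, c = rng.random(), 0.0
        for i, p in enumerate(probs):
            c += p
            if u <= c:
                return i
        return len(probs) - 1

    x, path, regen = draw(nu), [], []
    for t in range(n):
        path.append(x)
        if rng.random() < eps:
            regen.append(t + 1)          # regeneration: restart from nu
            x = draw(nu)
        else:                            # residual kernel step
            resid = [(P[x][j] - eps * nu[j]) / (1 - eps)
                     for j in range(len(nu))]
            x = draw(resid)
    return path, regen
```

The marginal law of the path is unchanged by the split, but the recorded regeneration times make tour-based variance estimates and diagnostics available, as described in the paper.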
Bayesian Treatment of the Independent Student-t Linear Model
 JOURNAL OF APPLIED ECONOMETRICS
, 1993
Abstract

Cited by 75 (2 self)
This article takes up methods for Bayesian inference in a linear model in which the disturbances are independent and have identical Student-t distributions. It exploits the equivalence of the Student-t distribution and an appropriate scale mixture of normals, and uses a Gibbs sampler to perform the computations. The new method is applied to some well-known macroeconomic time series. It is found that posterior odds ratios favor the independent Student-t linear model over the normal linear model, and that the posterior odds ratio in favor of difference stationarity over trend stationarity is often substantially less in the favored Student-t models.
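The scale-mixture trick makes the Gibbs sampler's full conditionals standard: conditional on per-observation mixing weights, the model is a weighted normal model, and conditional on the parameters, each weight is Gamma. The sketch below strips the article's regression setting down to a location model y_i = mu + e_i with e_i ~ Student-t(nu), a flat prior on mu, and unit scale; these simplifications are mine, not the article's.

```python
import random

def gibbs_student_t_location(y, nu=5.0, n_iter=2000, seed=0):
    # Gibbs sampler using the scale-mixture form
    # e_i | lam_i ~ N(0, 1/lam_i), lam_i ~ Gamma(nu/2, rate nu/2).
    rng = random.Random(seed)
    mu, draws = 0.0, []
    for _ in range(n_iter):
        # lam_i | mu ~ Gamma((nu + 1)/2, rate (nu + (y_i - mu)^2)/2);
        # gammavariate takes (shape, scale), so scale = 2 / rate.
        lam = [rng.gammavariate((nu + 1) / 2, 2.0 / (nu + (yi - mu) ** 2))
               for yi in y]
        # mu | lam ~ N(precision-weighted mean, 1 / sum(lam)).
        s = sum(lam)
        m = sum(l * yi for l, yi in zip(lam, y)) / s
        mu = rng.gauss(m, (1.0 / s) ** 0.5)
        draws.append(mu)
    return draws
```

Outliers receive small mixing weights lam_i and are automatically downweighted, which is the robustness property underlying the article's posterior-odds results.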
A Sequential Particle Filter Method for Static Models
, 2000
Abstract

Cited by 61 (2 self)
Particle filter methods are complex inference procedures, which combine importance sampling and Monte Carlo schemes, in order to consistently explore a sequence of multiple distributions of interest. The purpose of this article is to show that such methods can also offer an efficient estimation tool in "static" setups; in this case, π(θ|y_1, ..., y_N) is the only posterior distribution of interest, but the preliminary exploration of partial posteriors π(θ|y_1, ..., y_n) (n < N) makes computing time savings possible. A complete "black-box" algorithm is proposed for independent or Markov models. Our method is shown to challenge other common estimation procedures in terms of robustness and execution time, especially when the sample size is large. Two classes of examples are discussed and illustrated by numerical results: mixture models and discrete generalized linear models.
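The sequence of partial posteriors can be traversed by a particle scheme: θ-particles from the prior are reweighted by each new observation's likelihood and resampled when the effective sample size drops. The sketch below does this for a normal-mean model (prior N(0, 10), y_t | θ ~ N(θ, 1), both chosen for illustration) and omits the Markov "move" step that a full black-box version of such an algorithm uses to rejuvenate the particle set.

```python
import math
import random

def sequential_normal_mean(data, n_particles=500, seed=0):
    # Static-model sequential particle scheme: reweight theta-particles
    # observation by observation, resampling when ESS < N/2.
    rng = random.Random(seed)
    theta = [rng.gauss(0, math.sqrt(10)) for _ in range(n_particles)]
    w = [1.0 / n_particles] * n_particles
    for y in data:
        # Incremental weight: likelihood of the new observation.
        w = [wi * math.exp(-0.5 * (y - th) ** 2) for wi, th in zip(w, theta)]
        s = sum(w)
        w = [wi / s for wi in w]
        ess = 1.0 / sum(wi * wi for wi in w)
        if ess < n_particles / 2:
            # Multinomial resampling; weights reset to uniform.
            new = []
            for _ in range(n_particles):
                u, c, i = rng.random(), 0.0, 0
                for j, wj in enumerate(w):
                    c += wj
                    if u <= c:
                        i = j
                        break
                new.append(theta[i])
            theta, w = new, [1.0 / n_particles] * n_particles
    return sum(wi * th for wi, th in zip(w, theta))
```

The weighted particle mean after the last observation approximates the posterior mean under π(θ|y_1, ..., y_N), with each partial posterior visited along the way essentially for free.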
Estimating macroeconomic models: a likelihood approach
, 2006
Abstract

Cited by 59 (21 self)
This paper shows how particle filtering facilitates likelihood-based inference in dynamic macroeconomic models. The economies can be nonlinear and/or non-normal. We describe how to use the output from the particle filter to estimate the structural parameters of the model, those characterizing preferences and technology, and to compare different economies. Both tasks can be implemented from either a classical or a Bayesian perspective. We illustrate the technique by estimating a business cycle model with investment-specific technological change, preference shocks, and stochastic volatility.
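The particle-filter output used for estimation is the likelihood itself: at each date, the average unnormalized particle weight estimates the one-step predictive density, and the log of these averages sums to the log-likelihood. The bootstrap-filter sketch below uses a linear-Gaussian state-space model as a deliberately simple stand-in for a macroeconomic model; the model form and parameter names are illustrative.

```python
import math
import random

def pf_loglik(y, a, sigma_x, sigma_y, n_particles=500, seed=0):
    # Bootstrap particle filter log-likelihood for
    # x_t = a x_{t-1} + N(0, sigma_x^2),  y_t = x_t + N(0, sigma_y^2).
    rng = random.Random(seed)
    # Initialize from the stationary distribution of the state.
    sd0 = sigma_x / math.sqrt(1 - a * a)
    x = [rng.gauss(0, sd0) for _ in range(n_particles)]
    loglik = 0.0
    for yt in y:
        # Propagate through the transition (the bootstrap proposal).
        x = [a * xi + rng.gauss(0, sigma_x) for xi in x]
        # Weight by the measurement density.
        w = [math.exp(-0.5 * ((yt - xi) / sigma_y) ** 2)
             / (sigma_y * math.sqrt(2 * math.pi)) for xi in x]
        loglik += math.log(sum(w) / n_particles)
        # Multinomial resampling.
        s = sum(w)
        cdf, c = [], 0.0
        for wi in w:
            c += wi / s
            cdf.append(c)
        new = []
        for _ in range(n_particles):
            u, i = rng.random(), 0
            while i < n_particles - 1 and cdf[i] < u:
                i += 1
            new.append(x[i])
        x = new
    return loglik
```

Maximizing this estimated log-likelihood over the structural parameters (or embedding it in an MCMC scheme) gives, respectively, the classical and Bayesian routes the paper describes.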