Results 1 - 10 of 10
Fast MCMC sampling for Markov jump processes and continuous time Bayesian networks
Abstract

Cited by 5 (4 self)
Markov jump processes and continuous time Bayesian networks are important classes of continuous time dynamical systems. In this paper, we tackle the problem of inferring unobserved paths in these models by introducing a fast auxiliary variable Gibbs sampler. Our approach is based on the idea of uniformization, and sets up a Markov chain over paths by sampling a finite set of virtual jump times and then running a standard hidden Markov model forward filtering-backward sampling algorithm over states at the set of extant and virtual jump times. We demonstrate significant computational benefits over a state-of-the-art Gibbs sampler on a number of continuous time Bayesian networks.
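As an illustrative aside, the uniformization idea this abstract refers to can be sketched in a few lines of NumPy: a dominating rate lam >= max_i |Q_ii| turns the jump process into a Poisson process of "virtual" jump times plus a discrete transition matrix B = I + Q/lam that may self-transition. The rate matrix Q below and all variable names are hypothetical, chosen for the sketch; this is not the paper's full Gibbs sampler, only the forward-sampling building block.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state rate matrix Q (rows sum to zero); illustrative only.
Q = np.array([[-2.0, 1.0, 1.0],
              [0.5, -1.0, 0.5],
              [1.0, 1.0, -2.0]])

def sample_mjp_uniformization(Q, s0, T, rng):
    """Sample a Markov jump process path on [0, T] via uniformization.

    Virtual jump times form a Poisson(lam) process with
    lam >= max_i |Q_ii|; at each one the state moves according to
    B = I + Q/lam, where a self-transition is a 'virtual' jump."""
    lam = 1.1 * np.max(-np.diag(Q))          # dominating rate
    B = np.eye(Q.shape[0]) + Q / lam         # uniformized transition matrix
    t, s = 0.0, s0
    times, states = [0.0], [s0]
    while True:
        t += rng.exponential(1.0 / lam)      # next virtual jump time
        if t > T:
            break
        s = rng.choice(Q.shape[0], p=B[s])   # may stay put (virtual jump)
        times.append(t)
        states.append(s)
    return np.array(times), np.array(states)

times, states = sample_mjp_uniformization(Q, s0=0, T=5.0, rng=rng)
```

In the paper's sampler, the states at these virtual and extant jump times would then be resampled jointly by a discrete-time forward filtering-backward sampling pass conditioned on the observations.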
Continuous-Time Belief Propagation
, 2010
Abstract

Cited by 4 (2 self)
Many temporal processes can be naturally modeled as a stochastic system that evolves continuously over time. The representation language of continuous-time Bayesian networks allows one to succinctly describe multi-component continuous-time stochastic processes. A crucial element in applications of such models is inference. Here we introduce a variational approximation scheme, which is a natural extension of Belief Propagation for continuous-time processes. In this scheme, we view messages as inhomogeneous Markov processes over individual components. This leads to a relatively simple procedure that allows us to easily incorporate adaptive ordinary differential equation (ODE) solvers to perform individual steps. We provide the theoretical foundations for the approximation, and show how it performs on a range of networks. Our results demonstrate that our method is quite accurate on singly connected networks, and provides close approximations in more complex ones.
Approximate parameter inference in a stochastic reaction-diffusion model
Abstract

Cited by 1 (0 self)
We present an approximate inference approach to parameter estimation in a spatiotemporal stochastic process of the reaction-diffusion type. The continuous space limit of an inference method for Markov jump processes leads to an approximation which is related to a spatial Gaussian process. An efficient solution in feature space using a Fourier basis is applied to inference on simulated data.
Inference in continuous-time changepoint models
Abstract

Cited by 1 (1 self)
We consider the problem of Bayesian inference for continuous-time multi-stable stochastic systems which can change both their diffusion and drift parameters at discrete times. We propose exact inference and sampling methodologies for two specific cases where the discontinuous dynamics are given by a Poisson process and a two-state Markovian switch. We test the methodology on simulated data, and apply it to two real data sets in finance and systems biology. Our experimental results show that the approach leads to valid inferences and non-trivial insights.
Continuous Time Bayesian Network Approximate Inference and Social Network Applications
, 2009
Abstract
There are many people to whom I owe many thanks for helping me through this long process of completing a Ph.D. First and foremost, I would like to express my gratitude to my advisor, Dr. Christian R. Shelton, for his unending support, extremely constructive feedback, excellent supervision, and all the encouragement over the last five years. Without his mentorship, this dissertation would not have been possible. The experience of studying under him has been of inestimable value to me. I would also like to thank my current and past committee members, Drs. Gianfranco Ciardo, Eamonn Keogh, and Neal Young, for their support, guidance, and helpful suggestions. My deepest thanks also go to all the current and former members of the Riverside Lab for Artificial Intelligence Research. Many thanks to Jing Xu for helping implement the Gibbs sampling algorithm for CTBNs in Chapter 4. Special thanks also go to former members Dr. ...
Factored Filtering of Continuous-Time Systems
Abstract
We consider filtering for a continuous-time, or asynchronous, stochastic system where the full distribution over states is too large to be stored or calculated. We assume that the rate matrix of the system can be compactly represented and that the belief distribution is to be approximated as a product of marginals. The essential computation is the matrix exponential. We look at two different methods for its computation: ODE integration and uniformization of the Taylor expansion. For both we consider approximations in which only a factored belief state is maintained. For factored uniformization ...
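The two routes to the matrix exponential named in this abstract can be illustrated side by side. The sketch below (with a hypothetical 3x3 rate matrix Q; nothing here is taken from the paper, which concerns the factored case) computes P(t) = exp(Qt) once by Euler integration of dP/dt = PQ and once by uniformization of the Taylor expansion, exp(Qt) = sum_k e^{-lam t} (lam t)^k / k! B^k with B = I + Q/lam.

```python
import numpy as np

# Hypothetical 3-state rate matrix (rows sum to zero); illustrative only.
Q = np.array([[-2.0, 1.0, 1.0],
              [0.5, -1.0, 0.5],
              [1.0, 1.0, -2.0]])
t = 0.5

# Method 1: ODE integration of dP/dt = P Q with small explicit Euler steps.
n_steps = 20000
h = t / n_steps
P_ode = np.eye(3)
for _ in range(n_steps):
    P_ode = P_ode + h * (P_ode @ Q)

# Method 2: uniformization of the Taylor expansion.
lam = np.max(-np.diag(Q))            # dominating rate, lam >= max_i |Q_ii|
B = np.eye(3) + Q / lam              # substochastic-free transition matrix
P_uni = np.zeros_like(Q)
term = np.eye(3)                     # running power B^k
weight = np.exp(-lam * t)            # Poisson(k; lam*t) weight, starting at k=0
for k in range(50):
    P_uni += weight * term
    term = term @ B
    weight *= lam * t / (k + 1)

max_err = np.abs(P_ode - P_uni).max()
```

A practical appeal of the uniformization route, and one reason it is attractive for factored representations, is that every term in the sum is a product of the same nonnegative matrix B, so the truncated sum is itself a proper stochastic matrix up to the Poisson tail mass.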
Approximate inference in continuous time Gaussian-Jump processes
Abstract
We present a novel approach to inference in conditionally Gaussian continuous time stochastic processes, where the latent process is a Markovian jump process. We first consider the case of jump-diffusion processes, where the drift of a linear stochastic differential equation can jump at arbitrary time points. We derive partial differential equations for exact inference and present a very efficient mean field approximation. By introducing a novel lower bound on the free energy, we then generalise our approach to Gaussian processes with arbitrary covariance, such as the non-Markovian RBF covariance. We present results on both simulated and real data, showing that the approach is very accurate in capturing latent dynamics and can be useful in a number of real data modelling tasks.
Chapter 1 Approximate inference for continuous-time Markov processes
Abstract
Markov processes are probabilistic models for describing data with a sequential structure. Probably the most common example is a dynamical system, whose state evolves over time. For modelling purposes it is often convenient to assume that the system states are not directly observed: each observation is a possibly ...
Submitted to the Senate of the Hebrew University
Abstract
... approximate algorithms that are complementary to each other. These algorithms adopt insights from existing state-of-the-art methods for inference in finite-dimensional domains while exploiting the continuous-time representation to obtain efficient and relatively simple computations that naturally adapt to the dynamics of the process. Our first inference algorithm is based on a Gibbs sampling strategy. This algorithm samples trajectories from the posterior distribution given the evidence and uses these samples to answer queries. We show how to perform this sampling step in an efficient manner with a complexity that naturally adapts to the rate of the posterior process. While it is hard to bound the required runtime in advance, tune the stopping criteria, or estimate the error of the approximation, this algorithm is the first to provide asymptotically unbiased samples for CTBNs. A modern approach for developing state-of-the-art inference algorithms for complex finite-dimensional models that are faster than sampling is to use variational principles, where the posterior is approximated by a simpler and easier-to-manipulate distribution. To adopt this approach we show that candidate distributions can be parameterized ...
Multiplicative Forests for Continuous-Time Processes
Abstract
Learning temporal dependencies between variables over continuous time is an important and challenging task. Continuous-time Bayesian networks effectively model such processes but are limited by the number of conditional intensity matrices, which grows exponentially in the number of parents per variable. We develop a partition-based representation using regression trees and forests whose parameter spaces grow linearly in the number of node splits. Using a multiplicative assumption we show how to update the forest likelihood in closed form, producing efficient model updates. Our results show multiplicative forests can be learned from few temporal trajectories with large gains in performance and scalability.