Results

### Content-based Modeling of Reciprocal Relationships using Hawkes and Gaussian Processes

There has been growing interest in inferring implicit social structures from interaction data. This approach is motivated by the fact that entities organize themselves into groups that interact frequently with one another. Unlike previous approaches that focused on subjectively declared relationships, the idea is to exploit the actual evidence at hand to reach conclusions about group formation, resulting in more objective, data-driven inferences. To this end,

### Functional identification of biological neural networks

Action Editor: Rob Kass
### Reasoning about structured stochastic . . .

2011

...different approximate algorithms that are complementary to each other. These algorithms adopt insights from existing state-of-the-art methods for inference in finite-dimensional domains while exploiting the continuous-time representation to obtain efficient and relatively simple computations that naturally adapt to the dynamics of the process.

Our first inference algorithm is based on a Gibbs sampling strategy. It samples trajectories from the posterior distribution given the evidence and uses these samples to answer queries. We show how to perform this sampling step efficiently, with a complexity that naturally adapts to the rate of the posterior process. Although it is hard to bound the required run-time in advance, to tune the stopping criteria, or to estimate the error of the approximation, this algorithm is the first to provide asymptotically unbiased samples for CTBNs.

A modern approach to developing state-of-the-art inference algorithms for complex finite-dimensional models that are faster than sampling is to use variational principles, in which the posterior is approximated by a simpler, easier-to-manipulate distribution. To adopt this approach we show that candidate distributions can be parameterized
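The trajectory sampling that the Gibbs strategy above builds on can be illustrated with the basic forward (Gillespie-style) sampler for a continuous-time Markov chain. This is a minimal sketch under stated assumptions, not code from the paper; the function name `sample_trajectory` and the two-state rate matrix are illustrative only.

```python
import random

def sample_trajectory(Q, x0, t_end, rng=None):
    """Forward-sample one trajectory of a continuous-time Markov chain.

    Q is a rate matrix: Q[i][j] (i != j) is the jump rate i -> j, and the
    diagonal Q[i][i] is minus the row sum (no absorbing states assumed).
    """
    rng = rng or random.Random(0)  # seeded for reproducibility of the sketch
    t, x = 0.0, x0
    traj = [(t, x)]                # list of (jump time, state entered)
    while True:
        exit_rate = -Q[x][x]               # total rate of leaving state x
        t += rng.expovariate(exit_rate)    # exponential holding time
        if t >= t_end:
            return traj
        # choose the next state with probability proportional to its rate
        r = rng.random() * exit_rate
        for j in range(len(Q)):
            if j == x:
                continue
            r -= Q[x][j]
            if r <= 0:
                x = j
                break
        traj.append((t, x))

# Two-state chain: jump 0 -> 1 at rate 2.0, jump 1 -> 0 at rate 1.0.
Q = [[-2.0, 2.0], [1.0, -1.0]]
traj = sample_trajectory(Q, x0=0, t_end=10.0)
```

Note how the cost of one sample scales with the number of jumps the process actually makes, which is the sense in which such samplers adapt to the rate of the process.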

### Tutorial on Structured Continuous-Time Markov Processes

A continuous-time Markov process (CTMP) is a collection of variables indexed by a continuous quantity, time. It obeys the Markov property that the distribution over a future variable is independent of past variables given the state at the present time. We introduce continuous-time Markov process representations and algorithms for filtering, smoothing, expected sufficient statistics calculations, and model estimation, assuming no prior knowledge of continuous-time processes but some basic knowledge of probability and statistics. We begin by describing “flat” or unstructured Markov processes and then move to structured Markov processes (those arising from state spaces consisting of assignments to variables), including Kronecker, decision-diagram, and continuous-time Bayesian network representations. We provide the first connection between decision diagrams and continuous-time Bayesian networks.

1. Tutorial Goals

This tutorial is intended for readers interested in learning about continuous-time Markov processes, and in particular compact or structured representations of them. It is assumed