## Variational learning for switching state-space models (1998)

Venue: Neural Computation

Citations: 141 (6 self)

### BibTeX

```bibtex
@ARTICLE{Ghahramani98variationallearning,
  author  = {Zoubin Ghahramani and Geoffrey E. Hinton},
  title   = {Variational learning for switching state-space models},
  journal = {Neural Computation},
  year    = {1998},
  volume  = {12},
  pages   = {963--996}
}
```

### Abstract

We introduce a new statistical model for time series that iteratively segments data into regimes with approximately linear dynamics and learns the parameters of each of these linear regimes. This model combines and generalizes two of the most widely used stochastic time series models -- hidden Markov models and linear dynamical systems -- and is closely related to models that are widely used in the control and econometrics literatures. It can also be derived by extending the mixture of experts neural network (Jacobs et al., 1991) to its fully dynamical version, in which both the expert and gating networks are recurrent. Inferring the posterior probabilities of the hidden states of this model is computationally intractable, and therefore the exact Expectation Maximization (EM) algorithm cannot be applied. However, we present a variational approximation that maximizes a lower bound on the log likelihood and makes use of both the forward-backward recursions for hidden Markov models and the Kalman filter recursions for linear dynamical systems. We tested the algorithm both on artificial data sets and on a natural data set of respiration force from a patient with sleep apnea. The results suggest that variational approximations are a viable method for inference and learning in switching state-space models.
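As a rough illustration of the generative structure the abstract describes, the sketch below simulates from a switching state-space model in which M linear-Gaussian state-space models evolve in parallel and a discrete Markov switch variable selects which regime produces the observation at each time step. All parameter values, dimensions, and variable names here are illustrative assumptions for the sketch, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

M, T, d_x, d_y = 2, 200, 2, 1   # regimes, series length, state dim, obs dim

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Regime-specific dynamics (a slow and a fast oscillation) and observation
# maps; these particular values are made up for the sketch.
A = np.stack([0.99 * rotation(0.05), 0.99 * rotation(0.5)])   # (M, d_x, d_x)
C = np.tile(np.array([[1.0, 0.0]]), (M, 1, 1))                # (M, d_y, d_x)
Q = 0.01 * np.eye(d_x)   # state noise covariance (shared across regimes here)
R = 0.10 * np.eye(d_y)   # observation noise covariance

# "Sticky" Markov transition matrix for the discrete switch variable.
Pi = np.array([[0.98, 0.02],
               [0.02, 0.98]])

x = np.zeros((M, d_x))   # one hidden state vector per linear regime
s = 0                    # current regime
switches, ys = [], []

for t in range(T):
    # All M linear dynamical systems evolve in parallel ...
    for m in range(M):
        x[m] = A[m] @ x[m] + rng.multivariate_normal(np.zeros(d_x), Q)
    # ... while the Markov switch picks which regime generates the observation.
    s = rng.choice(M, p=Pi[s])
    ys.append(C[s] @ x[s] + rng.multivariate_normal(np.zeros(d_y), R))
    switches.append(s)

y = np.asarray(ys)       # (T, d_y) observed time series
print(y.shape, np.bincount(switches))
```

Exact posterior inference over the switch and continuous states in such a model is intractable; as the abstract notes, the paper's variational approach instead maximizes a lower bound on the log likelihood, alternating HMM forward-backward recursions for the switch variable with Kalman-style recursions for each linear regime.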