Results 1–8 of 8
Causal inference using the algorithmic Markov condition, 2008
Abstract

Cited by 22 (18 self)
Inferring the causal structure that links n observables is usually based upon detecting statistical dependences and choosing simple graphs that make the joint measure Markovian. Here we argue why causal inference is also possible when only single observations are present. We develop a theory of how to generate causal graphs explaining similarities between single objects. To this end, we replace the notion of conditional stochastic independence in the causal Markov condition with the vanishing of conditional algorithmic mutual information and describe the corresponding causal inference rules. We explain why a consistent reformulation of causal inference in terms of algorithmic complexity implies a new inference principle that also takes into account the complexity of conditional probability densities, making it possible to select among Markov-equivalent causal graphs. This insight provides a theoretical foundation for a heuristic principle proposed in earlier work. We also discuss how to replace Kolmogorov complexity with decidable complexity criteria. This can be seen as an algorithmic analog of replacing the empirically undecidable question of statistical independence with practical independence tests that are based on implicit or explicit assumptions on the underlying distribution.
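The decidable stand-ins for Kolmogorov complexity that the abstract alludes to are often realized in practice with off-the-shelf compressors. The following sketch is my own illustration, not the paper's construction: it approximates algorithmic mutual information as I(x : y) ≈ C(x) + C(y) − C(xy), taking C to be zlib's compressed length.

```python
import random
import zlib

def clen(data: bytes) -> int:
    """Compressed length: a crude, decidable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def compression_mutual_info(x: bytes, y: bytes) -> int:
    """Approximate algorithmic mutual information I(x : y) ~ C(x) + C(y) - C(xy)."""
    return clen(x) + clen(y) - clen(x + y)

a = b"the quick brown fox jumps over the lazy dog. " * 40
b_similar = b"the quick brown fox jumps over the lazy dog. " * 40
rng = random.Random(0)
c_random = bytes(rng.randrange(256) for _ in range(len(a)))  # incompressible filler

# objects that share structure get a much higher estimated mutual information
print(compression_mutual_info(a, b_similar), compression_mutual_info(a, c_random))
```

Compressor-based estimates of this kind underlie the normalized compression distance; they inherit the compressor's blind spots, so they are only a practical proxy for the algorithmic quantities discussed in the paper.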
Detecting the Direction of Causal Time Series
Abstract

Cited by 8 (2 self)
We propose a method that detects the true direction of time series by fitting an autoregressive moving average model to the data. Whenever the noise is independent of the previous samples for one ordering of the observations, but dependent for the opposite ordering, we infer the former direction to be the true one. We prove that our method works in the population case as long as the noise of the process is not normally distributed (in the Gaussian case, the direction is not identifiable). A new and important implication of our result is that it confirms, for the case of time series, a fundamental conjecture in causal reasoning: if after regression the noise is independent of the signal for one direction and dependent for the other, then the former represents the true causal direction. We test our approach on two types of data: simulated data sets conforming to our modeling assumptions, and real-world EEG time series. Our method makes a decision for a significant fraction of both data sets, and these decisions are mostly correct. For real-world data, our approach outperforms alternative solutions to the problem of time-direction recovery.
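The decision rule can be mimicked in a toy setting. The sketch below is not the authors' implementation: it uses a plain AR(1) least-squares fit rather than a full ARMA model, and the dependence score is a simple nonlinear moment I chose for illustration. It fits the model in both time directions on data with skewed, non-Gaussian noise and prefers the direction whose residuals look independent of the past.

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) with skewed, non-Gaussian innovations: x[t] = a*x[t-1] + n[t],
# where n[t] ~ Exp(1) - 1 (zero mean, positively skewed)
a, T = 0.5, 20000
noise = rng.exponential(1.0, T) - 1.0
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + noise[t]

def dependence_score(series):
    """Fit AR(1) by least squares, then score residual-regressor dependence
    with the nonlinear moment |E[resid^2 * prev]| (near 0 in the true direction)."""
    prev, curr = series[:-1], series[1:]
    coef = prev @ curr / (prev @ prev)
    resid = curr - coef * prev
    return abs(np.mean(resid ** 2 * prev))

forward = dependence_score(x)
backward = dependence_score(x[::-1])
print("forward:", forward, "backward:", backward)
print("inferred true direction:", "forward" if forward < backward else "backward")
```

A plain correlation cannot distinguish the two directions, because least-squares residuals are uncorrelated with the regressor by construction; the squared-residual moment picks up the higher-order dependence that appears only in the time-reversed fit.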
On the entropy production of time series with unidirectional linearity
 Journ. Stat. Phys
Comparative Analysis of Viterbi Training and Maximum Likelihood Estimation for HMMs
Abstract

Cited by 1 (0 self)
We present an asymptotic analysis of Viterbi Training (VT) and contrast it with the more conventional Maximum Likelihood (ML) approach to parameter estimation in Hidden Markov Models. While the ML estimator works by (locally) maximizing the likelihood of the observed data, VT seeks to maximize the probability of the most likely hidden state sequence. We develop an analytical framework based on a generating-function formalism and illustrate it on an exactly solvable model of an HMM with one unambiguous symbol. For this particular model the ML objective function is continuously degenerate; the VT objective, in contrast, is shown to have only finite degeneracy. Furthermore, VT converges faster and results in sparser (simpler) models, thus realizing an automatic Occam's razor for HMM learning. In more general scenarios VT can perform worse than ML, but it is still capable of correctly recovering most of the parameters.
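Viterbi Training is hard EM: decode the single most likely state path, then re-estimate parameters by counting along that path, in contrast to ML training (Baum-Welch), which averages over all paths. Below is a minimal numpy sketch of the loop, a toy illustration unrelated to the paper's generating-function analysis; the 1e-3 pseudocount is an arbitrary smoothing choice to keep probabilities nonzero.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely state path (log-space Viterbi decoding)."""
    T, K = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)   # scores[i, j]: reach state j from i
        back[t] = np.argmax(scores, axis=0)
        logd = scores[back[t], np.arange(K)] + np.log(B[:, obs[t]])
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(logd))
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path

def viterbi_training(obs, K, M, iters=20, seed=0):
    """Hard EM: Viterbi-decode, then re-estimate parameters by counting."""
    rng = np.random.default_rng(seed)
    pi = np.full(K, 1.0 / K)
    A = rng.dirichlet(np.ones(K), size=K)    # K x K transition matrix
    B = rng.dirichlet(np.ones(M), size=K)    # K x M emission matrix
    for _ in range(iters):
        path = viterbi(obs, pi, A, B)
        A = np.full((K, K), 1e-3)            # small pseudocounts avoid log(0)
        B = np.full((K, M), 1e-3)
        for t in range(len(obs) - 1):
            A[path[t], path[t + 1]] += 1
        for t in range(len(obs)):
            B[path[t], obs[t]] += 1
        A /= A.sum(1, keepdims=True)
        B /= B.sum(1, keepdims=True)
        pi = np.full(K, 1e-3); pi[path[0]] += 1; pi /= pi.sum()
    return pi, A, B

# sticky two-state data: long runs of symbol 0 alternating with runs of symbol 1
obs = np.array([0] * 50 + [1] * 50 + [0] * 50 + [1] * 50)
pi, A, B = viterbi_training(obs, K=2, M=2)
```

Because each observation contributes to exactly one state's counts, the M-step is simple counting, which is why VT tends to produce the sparser models the abstract describes; the price is that hard assignments can get stuck in degenerate labelings that soft Baum-Welch updates would escape.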
Contents, 2011
Abstract
2.1 Causal Discovery with Hidden Variables . . . 5
2.2 Software . . . 6
Statistical Tests for the Detection of the Arrow of Time in Vector Autoregressive Models
 Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence
Abstract
The problem of detecting the direction of time in vector autoregressive (VAR) processes using statistical techniques is considered. By analogy with causal AR(1) processes with non-Gaussian noise, we conjecture that the distribution of the time-reversed residuals of a linear VAR model is closer to Gaussian than the distribution of the actual residuals in the forward direction. Experiments with simulated data illustrate the validity of the conjecture. Based on these results, we design a decision rule for detecting the direction of VAR processes: the correct direction in time (forward) is the one in which the residuals of the time series are less Gaussian. A series of experiments illustrates the superior results of the proposed rule when compared with other methods based on independence tests.
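The conjecture can be checked numerically in a toy setting. The following sketch is my own construction, not the paper's statistical tests: it simulates a stable 2-D VAR(1) with uniform (non-Gaussian) innovations, fits VAR(1) by least squares in both time directions, and compares how non-Gaussian the residuals are via excess kurtosis.

```python
import numpy as np

rng = np.random.default_rng(1)

# stable 2-D VAR(1): x[t] = A x[t-1] + n[t], n[t] uniform on [-1, 1]^2
A = np.array([[0.6, 0.2], [0.1, 0.5]])
T = 20000
noise = rng.uniform(-1.0, 1.0, size=(T, 2))
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = A @ x[t - 1] + noise[t]

def residual_nongaussianity(series):
    """Fit VAR(1) by least squares; return the mean absolute excess kurtosis
    of the residual components (0 for a Gaussian)."""
    X, Y = series[:-1], series[1:]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ B
    z = (resid - resid.mean(0)) / resid.std(0)
    return np.mean(np.abs(np.mean(z ** 4, axis=0) - 3.0))

forward = residual_nongaussianity(x)
backward = residual_nongaussianity(x[::-1])
# forward residuals recover the uniform noise (excess kurtosis near -1.2),
# while time-reversed residuals mix many terms and drift toward Gaussian
print("forward:", forward, "backward:", backward)
print("inferred direction:", "forward" if forward > backward else "backward")
```

The decision rule is then exactly the one stated in the abstract: pick the direction whose residuals are less Gaussian, here the one with the larger absolute excess kurtosis.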
Contents, 2009
Abstract
2.1 Causal Discovery with Known Variables . . . 4
2.2 Causal Discovery with Hidden Variables . . . 7