Results 1–6 of 6
Estimation of a Structural Vector Autoregression Model Using Non-Gaussianity
Abstract
Cited by 6 (4 self)
Analysis of causal effects between continuous-valued variables typically uses either autoregressive models or structural equation models with instantaneous effects. Estimation of Gaussian, linear structural equation models poses serious identifiability problems, which is why it was recently proposed to use non-Gaussian models. Here, we show how to combine the non-Gaussian instantaneous model with autoregressive models. This is effectively what is called a structural vector autoregression (SVAR) model, and thus our work contributes to the long-standing problem of how to estimate SVARs. We show that such a non-Gaussian model is identifiable without prior knowledge of network structure. We propose computationally efficient methods for estimating the model, as well as methods to assess the significance of the causal influences. The model is successfully applied to financial and brain imaging data.
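The identifiability argument in the abstract above can be illustrated with a small sketch: mix two non-Gaussian sources and recover them with ICA. This is a demonstration of the general principle only, not the authors' estimator; the uniform sources and the mixing matrix are assumptions invented for the demo.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n = 5000

# Two independent NON-Gaussian (uniform) sources; with Gaussian sources
# the mixing below would only be identifiable up to rotation.
S = rng.uniform(-1, 1, size=(n, 2))
A = np.array([[1.0, 0.5],
              [0.2, 1.0]])          # instantaneous mixing (assumed for the demo)
X = S @ A.T

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)

# Each true source should correlate almost perfectly with exactly one
# recovered source (identification up to permutation and scale).
C = np.abs(np.corrcoef(S.T, S_hat.T)[:2, 2:])
```

Running the same experiment with Gaussian sources would leave the rows of `C` without a clear one-to-one match, which is the identifiability problem the abstract refers to.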
Causality discovery with additive disturbances: An information-theoretical perspective
In Machine Learning and Knowledge Discovery in Databases, 2009
Abstract
Cited by 3 (1 self)
We consider causally sufficient acyclic causal models in which the relationship among the variables is nonlinear while disturbances have linear effects, and show that three principles, namely, the causal Markov condition (together with the independence between each disturbance and the corresponding parents), minimum disturbance entropy, and mutual independence of the disturbances, are equivalent. This motivates new and more efficient methods for some causal discovery problems. In particular, we propose to use multichannel blind deconvolution, an extension of independent component analysis, to do Granger causality analysis with instantaneous effects. This approach gives more accurate estimates of the parameters and can easily incorporate sparsity constraints. For additive-disturbance-based nonlinear causal discovery, we first make use of the conditional independence relationships to obtain the equivalence class; undetermined causal directions are then found by nonlinear regression and pairwise independence tests. This avoids the brute-force search and greatly reduces the computational load.
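The last step described above, resolving undetermined directions by nonlinear regression plus pairwise independence tests, can be sketched roughly as follows. The data-generating model is invented for the demo, and the Spearman correlation is only a weak stand-in for a proper independence test such as HSIC, which a real application would use instead.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
n = 1000
x = rng.uniform(-2, 2, n)
y = x ** 3 + rng.uniform(-1, 1, n)   # additive-disturbance model: x -> y (assumed)

def nonlinear_residuals(cause, effect):
    """Nonlinearly regress effect on cause; return the estimated disturbance."""
    fit = KernelRidge(kernel='rbf', alpha=0.1).fit(cause[:, None], effect)
    return effect - fit.predict(cause[:, None])

def dependence(a, b):
    """Crude dependence proxy (a real test would use HSIC or similar)."""
    return abs(spearmanr(a, b)[0])

# In the correct causal direction the estimated disturbance should be
# (nearly) independent of the cause; in the reverse direction it is
# typically not, which is what the pairwise test exploits.
dep_xy = dependence(nonlinear_residuals(x, y), x)
dep_yx = dependence(nonlinear_residuals(y, x), y)
```

The direction with the smaller dependence score is then taken as causal, but with this crude proxy the decision should not be trusted the way an HSIC-based test could be.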
Convolutive Blind Source Separation by Efficient Blind Deconvolution and Minimal Filter Distortion
Abstract
Cited by 1 (0 self)
Convolutive blind source separation (BSS) usually encounters two difficulties: the filter indeterminacy in the recovered sources and the relatively high computational load. In this paper we propose an efficient method for convolutive BSS that addresses these two issues. It consists of two stages, namely, multichannel blind deconvolution (MBD) and learning the post-filters with the minimum filter distortion (MFD) principle. We present a computationally efficient approach to MBD in the first stage: a vector autoregression (VAR) model is first fitted to the data, admitting a closed-form solution and giving temporally independent errors; traditional independent component analysis (ICA) is then applied to these errors to produce the MBD results. In the second stage, the least linear reconstruction error (LLRE) constraint of the separation system, which was previously used to regularize the solutions to nonlinear ICA, enforces an MFD ...
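The first stage described in this abstract (a closed-form VAR fit followed by ICA on the temporally independent errors) can be sketched as below. The VAR coefficients and innovations are invented for the demo, and the second stage (learning post-filters under the MFD principle) is omitted.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
T, k = 3000, 2

# Simulate a VAR(1) with non-Gaussian innovations (coefficients assumed).
B = np.array([[0.5, 0.2],
              [0.0, 0.4]])
X = np.zeros((T, k))
for t in range(1, T):
    X[t] = B @ X[t - 1] + rng.uniform(-1, 1, size=k)

# Stage 1: closed-form least-squares VAR fit; its residuals are
# temporally independent but still instantaneously mixed.
Y, Z = X[1:], X[:-1]
B_hat = np.linalg.lstsq(Z, Y, rcond=None)[0].T
resid = Y - Z @ B_hat.T

# Stage 2: ordinary (instantaneous) ICA on the residuals gives the MBD result.
sources = FastICA(n_components=k, random_state=0).fit_transform(resid)
```

The appeal of this decomposition is that the expensive convolutive problem reduces to one least-squares solve plus one standard ICA run, rather than a joint optimization over filters.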
On Causal Discovery from Time Series Data using FCI
Abstract
Cited by 1 (0 self)
We adapt the Fast Causal Inference (FCI) algorithm of Spirtes et al. (2000) to the problem of inferring causal relationships from time series data and evaluate our adaptation and the original FCI algorithm, comparing them to other methods including Granger causality. One advantage of FCI-based approaches is the possibility of taking latent confounding variables into account, as opposed to methods based on Granger causality. From simulations we see, however, that while the FCI-based approaches are in principle quite powerful for finding causal relationships in time series data, such methods are not very reliable for most practical sample sizes. We further apply the framework to microeconomic data on the dynamics of firm growth. By releasing the full computer code for the method we hope to facilitate the application of the procedure to other domains.
Multi-Dimensional Causal Discovery
In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence
Abstract
We propose a method for learning causal relations within high-dimensional tensor data as they are typically recorded in non-experimental databases. The method allows the simultaneous inclusion of numerous dimensions within the data analysis, such as samples, time and domain variables construed as tensors. In such tensor data we exploit and integrate non-Gaussian models and tensor analytic algorithms in a novel way. We prove that we can determine simple causal relations independently of how complex the dimensionality of the data is. We rely on a statistical decomposition that flattens higher-dimensional data tensors into matrices. This decomposition preserves the causal information and is therefore suitable for structure learning of causal graphical models, where a causal relation can be generalised beyond dimension, for example, over all time points. Related methods either focus on a set of samples for instantaneous effects or look at one sample for effects at certain time points. We evaluate the resulting algorithm and discuss its performance both with synthetic and real-world data.
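The flattening step described above corresponds, in its simplest form, to unfolding a data tensor into a matrix. The sketch below shows a generic mode-n unfolding with NumPy; it is not the paper's specific statistical decomposition, and the samples x time x variables layout is assumed for the demo.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: rows index the chosen dimension, columns the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# Toy tensor: 2 samples x 3 time points x 4 variables.
X = np.arange(2 * 3 * 4).reshape(2, 3, 4)

X_samples = unfold(X, 0)   # shape (2, 12): one row per sample
X_time = unfold(X, 1)      # shape (3, 8): one row per time point
```

Once unfolded, a matrix-based structure-learning method can be applied directly, which is what makes flattening attractive when the causal information survives the reshaping.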