Talk

Justifying a new type of causal inference methods by the algorithmic Markov condition

  • Dominik Janzing (MPI für biologische Kybernetik, Tübingen)
A3 02 (Seminar room)

Abstract

Inferring the causal structure that links n observables is usually based on detecting statistical dependences and choosing simple graphs that make the joint distribution Markovian. Here we argue that causal inference is also possible when only single observations are present.
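
For orientation, the standard *statistical* causal Markov condition referred to here can be stated as the factorization of the joint distribution according to the causal DAG G,

  P(x_1, \dots, x_n) = \prod_{j=1}^{n} P(x_j \mid pa_j),

where pa_j denotes the values of the parents of X_j in G. The talk transfers this condition from distributions to single objects.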

We state that similarities between two objects x and y indicate a causal link whenever their algorithmic mutual information is sufficiently high. This quantity is defined as the number of bits that can be saved when optimally compressing the pair (x,y) jointly, compared to compressing x and y independently. To infer causal graphs among n objects, we replace the notion of conditional *stochastic* independence in the causal Markov condition with the vanishing of conditional *algorithmic* mutual information and describe the corresponding causal inference rules.
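
Algorithmic mutual information is defined via Kolmogorov complexity and is therefore not computable; a common practical surrogate replaces K with the output length of a real compressor. The following small Python sketch (our illustration, not part of the talk; zlib is an arbitrary choice of compressor) estimates the number of bits saved by joint compression, i.e. I(x:y) ≈ K(x) + K(y) - K(x,y):

    import os
    import zlib

    def K(data: bytes) -> int:
        # Compressed length in bits: a computable stand-in for the
        # (uncomputable) Kolmogorov complexity K(data).
        return 8 * len(zlib.compress(data, 9))

    def algorithmic_mutual_information(x: bytes, y: bytes) -> int:
        # Bits saved when compressing the pair jointly (here: concatenated)
        # compared to compressing x and y independently.
        return K(x) + K(y) - K(x + y)

    # Two similar objects share compressible structure; unrelated ones do not.
    a = b"the quick brown fox jumps over the lazy dog " * 50
    b = b"the quick brown fox jumps over the lazy cat " * 50
    c = os.urandom(len(a))
    print(algorithmic_mutual_information(a, b))   # large: strong similarity
    print(algorithmic_mutual_information(a, c))   # close to zero

Note that real compressors only provide upper bounds on Kolmogorov complexity, so such estimates can at best indicate, not certify, a causal link.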

In contrast to causal inference methods that rely on statistical dependences alone, our theory implies rules for distinguishing between the causal hypotheses X --> Y and Y --> X for two random variables X, Y. This is because the causal graphs relating the individual observations of the statistical ensemble induce different sets of algorithmic independences in the two cases. Therefore, our theory provides a foundation for a new type of method for inferring causal structure from statistical data.
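
To sketch the mechanism behind this asymmetry (our paraphrase of the preprint, not part of the abstract): for the hypothesis X --> Y, the preprint postulates that the input distribution and the mechanism are algorithmically independent,

  I( P(X) : P(Y|X) ) ≈ 0,

whereas the factors P(Y) and P(X|Y) of the reverse decomposition will then generically share algorithmic information. Hence the two candidate graphs entail different algorithmic independences and can, in principle, be told apart.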

For details, see our recent preprint: aps.arxiv.org/abs/0804.3678

Katharina Matschke

MPI for Mathematics in the Sciences (contact via e-mail)