Deriving pairwise transfer entropy from network structure and motifs

Leonardo Novelli, Fatihcan M. Atay, Jürgen Jost, Joseph T. Lizier

Research output: Contribution to journal › Article › Research › peer-review

15 Citations (Scopus)

Abstract

Transfer entropy (TE) is an established method for quantifying directed statistical dependencies in neuroimaging and complex systems datasets. The pairwise (or bivariate) TE from a source to a target node in a network does not depend solely on the local source-target link weight, but on the wider network structure in which the link is embedded. This relationship is studied using a discrete-time linearly coupled Gaussian model, which allows us to derive the TE for each link from the network topology. It is shown analytically that the dependence on the directed link weight is only a first approximation, valid for weak coupling. More generally, the TE increases with the in-degree of the source and decreases with the in-degree of the target, indicating an asymmetry of information transfer between hubs and low-degree nodes. In addition, the TE is directly proportional to weighted motif counts involving common parents or multiple walks from the source to the target, which are more abundant in networks with a high clustering coefficient than in random networks. Our findings also apply to Granger causality, which is equivalent to TE for Gaussian variables. Moreover, similar empirical results on random Boolean networks suggest that the dependence of the TE on the in-degree extends to nonlinear dynamics.
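As a rough illustration of the Gaussian setting described in the abstract, the sketch below computes pairwise TE on a toy network driven by a discrete-time linearly coupled Gaussian (VAR(1)) process, using the standard closed-form Gaussian conditional-variance expression (which, for Gaussian variables, coincides with Granger causality up to a factor of two). This is not the authors' code: the 5-node network, coupling and noise parameters, the history length of one step, and the helper name pairwise_te are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): pairwise transfer entropy for a
# discrete-time linearly coupled Gaussian (VAR(1)) process, with history
# length 1. Network, weights and noise level are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(0)

# Assumed toy network: A[j, i] is the weight of the directed link i -> j,
# so the update rule is x(t+1) = A @ x(t) + Gaussian noise.
n = 5
A = 0.15 * (rng.random((n, n)) < 0.4)   # sparse random coupling weights
np.fill_diagonal(A, 0.3)                # self-coupling (memory) on each node
Q = np.eye(n)                           # unit-variance innovations

# Stationary covariance Sigma0 solves Sigma0 = A Sigma0 A^T + Q.
Sigma0 = solve_discrete_lyapunov(A, Q)
Sigma1 = A @ Sigma0                     # lagged covariance Cov(x(t+1), x(t))

def pairwise_te(src, tgt):
    """Gaussian TE from node src to node tgt with one step of history:
    0.5 * log[ Var(y_{t+1} | y_t) / Var(y_{t+1} | y_t, x_t) ]."""
    var_yp = Sigma0[tgt, tgt]
    # Covariance of the conditioning variables (y_t, x_t)
    S = np.array([[Sigma0[tgt, tgt], Sigma0[tgt, src]],
                  [Sigma0[src, tgt], Sigma0[src, src]]])
    # Cross-covariance of y_{t+1} with (y_t, x_t)
    c = np.array([Sigma1[tgt, tgt], Sigma1[tgt, src]])
    cond_on_y  = var_yp - Sigma1[tgt, tgt] ** 2 / Sigma0[tgt, tgt]
    cond_on_yx = var_yp - c @ np.linalg.solve(S, c)
    return 0.5 * np.log(cond_on_y / cond_on_yx)

# TE for every directed pair; for Gaussian variables this is equivalent
# to the Granger causality from src to tgt.
for src in range(n):
    for tgt in range(n):
        if src != tgt:
            print(f"TE {src} -> {tgt}: {pairwise_te(src, tgt):.4f}")
```

Under these assumptions, comparing the printed values against each node's in-degree gives a quick empirical feel for the asymmetry reported in the abstract: links into low-in-degree targets from high-in-degree sources tend to carry larger pairwise TE.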

Original language: English
Article number: 20190779
Journal: Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences
Volume: 476
Issue number: 2236
DOIs
Publication status: Published - 29 Apr 2020
Externally published: Yes

Keywords

  • Connectome
  • Information theory
  • Motifs
  • Network inference
  • Transfer entropy