
Transfer entropy in continuous time, with applications to jump and neural spiking processes

Dr. Joseph T. Lizier, ARC DECRA Fellow / Senior Lecturer

Centre for Complex Systems, The University of Sydney

December 2016

Manuscripts

- Richard E. Spinney, Mikhail Prokopenko and Joseph T. Lizier, "Transfer entropy in continuous time, with applications to jump and neural spiking processes", in review, 2016. arXiv:1610.08192
- Terry Bossomaier, Lionel Barnett, Michael Harré and Joseph T. Lizier, "An Introduction to Transfer Entropy: Information Flow in Complex Systems", Springer, 2016.
- Joseph T. Lizier, Richard E. Spinney, Mikail Rubinov, Michael Wibral and Viola Priesemann, "A nearest-neighbours based estimator for transfer entropy between spike trains", in preparation, 2016.


Motivation

Measuring directed information transmission in neural recordings allows us to:
- characterise neural computation in terms of information storage, transfer and modification (Wibral et al., 2015);
- detect information flows in space and time (Wibral et al., 2014a);
- investigate why critical dynamics are computationally important (Barnett et al., 2013; Boedecker et al., 2012; Priesemann et al., 2015);
- infer effective information networks (Lizier et al., 2011; Vicente et al., 2011; Wibral et al., 2011) – see e.g. TRENTOOL (Lindner et al., 2011).


Motivation

- Transfer entropy is the tool of choice for measuring directed information transmission in neural recordings.
- It has been used across all modalities, and for multiple purposes.

- Yet the precise method of application to spike trains remains unclear:
  - With time-binning, how do we set bin sizes and history lengths appropriately, capturing all relationships while avoiding undersampling?
  - Can we remain in continuous time, and is this more accurate?


Outline

1 Transfer entropy

2 Continuous-time TE

3 Transfer entropy in spike trains

4 TE examples for spike trains

5 TE estimator for spike trains

6 Summary


Information dynamics

Key question: how is the next state of a variable in a complex system computed?

Q: Where does the information in $x_{n+1}$ come from, and how can we measure it?

Q: How much was stored, how much was transferred, can we partition them or do they overlap?

Complex system as a multivariate time-series of states


Information dynamics

Studies the computation of the next state of a target variable in terms of information storage, transfer and modification (Lizier et al., 2008, 2010, 2012).

The measures examine:

- State updates of a target variable;
- Dynamics of the measures in space and time.


Active information storage (Lizier et al., 2012)

How much information about the next observation $X_{n+1}$ of process $X$ can be found in its past state $X_n^{(k)} = \{X_{n-k+1}, \ldots, X_{n-1}, X_n\}$?

Active information storage:

$$A_X = I(X_{n+1}; X_n^{(k)}) = \left\langle \log_2 \frac{p(x_{n+1} \mid x_n^{(k)})}{p(x_{n+1})} \right\rangle$$

Average information from the past state that is in use in predicting the next value.

http://jlizier.github.io/jidt
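To make the definition concrete, here is a minimal plug-in (histogram) sketch of $A_X$ for a discrete-valued series. The function name and test sequence are illustrative only; this is not the JIDT implementation.

```python
import numpy as np
from collections import Counter

def active_info_storage(x, k=1):
    """Plug-in estimate of A_X = I(X_{n+1}; X_n^(k)) in bits, from counts."""
    pasts = [tuple(x[n - k:n]) for n in range(k, len(x))]
    nexts = [x[n] for n in range(k, len(x))]
    n_samp = len(pasts)
    c_joint = Counter(zip(pasts, nexts))
    c_past = Counter(pasts)
    c_next = Counter(nexts)
    ais = 0.0
    for (past, nxt), c in c_joint.items():
        # p(past, next) * log2[ p(next | past) / p(next) ]
        ais += (c / n_samp) * np.log2(c * n_samp / (c_past[past] * c_next[nxt]))
    return ais

# A deterministic alternating sequence stores (almost exactly) 1 bit in its past
print(active_info_storage([0, 1] * 50, k=1))  # ≈ 1.0
```

For serious use a bias correction (or JIDT's estimators) would be preferred over this raw plug-in form.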



Information transfer

How much information about the state transition $X_n^{(k)} \to X_{n+1}$ of $X$ can be found in the past state $Y_n^{(l)}$ of a source process $Y$?

Transfer entropy (Schreiber, 2000):

$$T_{Y \to X}^{(k,l)} = I(Y_n^{(l)}; X_{n+1} \mid X_n^{(k)}) = \left\langle \log_2 \frac{p(x_{n+1} \mid x_n^{(k)}, y_n^{(l)})}{p(x_{n+1} \mid x_n^{(k)})} \right\rangle$$

Average information from the source that helps predict the next value in the context of the past.

Local transfer entropy (Lizier et al., 2008):

$$t_{Y \to X}^{(k,l)} = i(y_n^{(l)}; x_{n+1} \mid x_n^{(k)}) = \log_2 \frac{p(x_{n+1} \mid x_n^{(k)}, y_n^{(l)})}{p(x_{n+1} \mid x_n^{(k)})}$$

Storage and transfer are complementary:

$$H_X = A_X + T_{Y \to X} + \text{higher-order terms}$$

http://jlizier.github.io/jidt
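The same plug-in counting approach extends directly to the transfer entropy. The sketch below is a hypothetical minimal estimator for discrete-valued series (not JIDT's implementation); with a one-step delayed copy it recovers the expected 1 bit of transfer.

```python
import numpy as np
from collections import Counter

def transfer_entropy(y, x, k=1, l=1):
    """Plug-in estimate of T_{Y->X}^{(k,l)} = I(Y_n^(l); X_{n+1} | X_n^(k)) in bits."""
    start = max(k, l)
    samples = [(x[n], tuple(x[n - k:n]), tuple(y[n - l:n]))
               for n in range(start, len(x))]
    n_samp = len(samples)
    c_xyz = Counter(samples)                               # (next, x-past, y-past)
    c_yz = Counter((xp, yp) for _, xp, yp in samples)      # (x-past, y-past)
    c_xz = Counter((xn, xp) for xn, xp, _ in samples)      # (next, x-past)
    c_z = Counter(xp for _, xp, _ in samples)              # (x-past)
    te = 0.0
    for (xn, xp, yp), c in c_xyz.items():
        # p * log2[ p(next | x-past, y-past) / p(next | x-past) ]
        te += (c / n_samp) * np.log2(c * c_z[xp] / (c_yz[(xp, yp)] * c_xz[(xn, xp)]))
    return te

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 10_000)
x = np.roll(y, 1)                  # x copies y with a one-step delay
print(transfer_entropy(y, x))      # ≈ 1 bit
```

In the reverse direction the estimate is close to zero, up to the small positive bias of the plug-in estimator.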


Information dynamics in CAs

[Figure: (a) raw CA; (b) local active information storage; (c) local TE rightwards; (d) local TE leftwards]

Domains and blinkers are the dominant information storage entities.

Gliders are the dominant information transfer entities (Lizier et al., 2014).


TE in computational neuroscience

Measuring directed information transmission using transfer entropy in neural recordings allows us to (Wibral et al., 2014a):
- characterise neural computation in terms of information storage, transfer and modification (Wibral et al., 2015);
- detect information flows in space and time (Wibral et al., 2014a);
- investigate why critical dynamics are computationally important (Barnett et al., 2013; Boedecker et al., 2012; Priesemann et al., 2015);
- infer effective information networks (Lizier et al., 2011; Vicente et al., 2011; Wibral et al., 2011).



TE applied to spike trains

TE has been applied to spike trains, by time-binning to create a binary time series.

For example by:
- Ito et al. (2011), over multiple delays for effective network inference. See also Timme et al. (2014, 2016).
- Priesemann et al. (2015), to investigate the relationship to criticality.



TE applied to spike trains

But: how do we choose bin size and history length, to capture the full subtleties of the relationship but avoid undersampling?

1. Bin for maximum entropy – but this puts multiple spikes in one bin, and misses timing subtleties.
2. Aim for one spike per bin – Ito et al. (2011) found performance increased with small bin sizes. But the scale of the bins then becomes much smaller than the relevant history period, so the required history length $k$ grows. You either sample well but miss a lot of relevant history, OR catch much of the history and undersample. (Think: criticality!)
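The trade-off above can be seen directly when discretising. The helper below is a hypothetical illustration of time-binning a spike train into the binary series used by discrete-time TE: at a coarse bin width two closely spaced spikes merge into a single '1', while a fine bin width preserves them at the cost of a much longer series (and hence a much larger history length $k$ for the same history period).

```python
import numpy as np

def bin_spikes(spike_times, dt, t_end):
    """Discretise a spike train into a binary series: 1 iff >= 1 spike in the bin."""
    edges = np.arange(0.0, t_end + dt, dt)
    counts, _ = np.histogram(spike_times, bins=edges)
    return (counts > 0).astype(int)

spikes = np.array([0.11, 0.13, 0.72, 1.55])   # two spikes only 20 ms apart
coarse = bin_spikes(spikes, dt=0.5, t_end=2.0)
fine = bin_spikes(spikes, dt=0.01, t_end=2.0)
print(coarse, int(fine.sum()))  # [1 1 0 1] 4 -- the close pair merges at dt=0.5
```

Note the coarse series has only 4 symbols covering 2 s of history, while the fine series needs 200 symbols for the same span.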


TE applied to spike trains

Conjecture: if we can stay in continuous time, we will more compactly represent spike-time histories:
- Should be a more data-efficient approach;
- Promises greater accuracy than time-binning ...


Step 1: Rethink TE definition (discrete-time)

As a Radon–Nikodym derivative between two measures on the (target) random variable $x_n$:

$$T_{y \to x}^{(k,l)}\big|_{n-1}^{n} = E_P\!\left[\ln \frac{dP_n(x_n \mid x_{n-k}^{n-1},\, y_{n-l}^{n-1})}{dP_n(x_n \mid x_{n-k}^{n-1})}\right] = \int_\Omega \ln \frac{dP_n(x_n \mid x_{n-k}^{n-1},\, y_{n-l}^{n-1})}{dP_n(x_n \mid x_{n-k}^{n-1})}(\omega)\, dP(\omega).$$


Step 1: Rethink TE definition (discrete-time)

Average TE: $\quad T_{y \to x}^{(k,l)}\big|_{n-1}^{n} = E_P\!\left[\ln \frac{dP_n(x_n \mid x_{n-k}^{n-1},\, y_{n-l}^{n-1})}{dP_n(x_n \mid x_{n-k}^{n-1})}\right]$ (1)

Local TE: $\quad \mathcal{T}_{y \to x}^{(k,l)}(x_{n-k}^{n}, y_{n-l}^{n-1}) = \ln \frac{dP_n(x_n \mid x_{n-k}^{n-1},\, y_{n-l}^{n-1})}{dP_n(x_n \mid x_{n-k}^{n-1})}$ (2)

(Local) integrated TE: $\quad \mathcal{T}_{y \to x}^{(k,l)}(x_{n-k}^{n+m}, y_{n-l}^{n+m-1}) = \sum_{i=0}^{m} \mathcal{T}_{y \to x}^{(k,l)}(x_{n-k+i}^{n+i}, y_{n-l+i}^{n+i-1})$ (3)

1. $dP$ becomes probabilities and probability densities for discrete and continuous variables respectively.
2. The local TE is associated with one realisation; it can be generalised to longer intervals.
3. For stationary processes, empirically we measure the rate $T_{y \to x}^{(k,l)}\big|_{n-1}^{n} = \big\langle \mathcal{T}_{y \to x}^{(k,l)}(x_{n-k}^{n+m}, y_{n-l}^{n+m-1}) \big\rangle / (m+1)$.

Step 2: Continuous-time TE – TE rate

TE naturally extends to consider rates and paths in continuous-time: � paths:x t = x(t ,ω):t t

J.T. Lizier TE in spike trains December 2016 Outline TE Cont’-time Spike trains Examples Estimator Close 17

Step 2: Continuous-time TE – TE rate

TE naturally extends to consider rates and paths in continuous-time: � paths:x t = x(t ,ω):t t

J.T. Lizier TE in spike trains December 2016 Outline TE Cont’-time Spike trains Examples Estimator Close 17

Step 2: Continuous-time TE – TE rate

TE naturally extends to consider rates and paths in continuous time:

- paths: $x_{t-s}^{t} = \{x(t', \omega) : t - s \le t' < t\}$;
- we have conditional measures on previous path functions;
- $(s, r) \in \mathbb{R}$, $s \ge 0$, $r \ge 0$, play the role of $k$ and $l$;
- a $dt$ and a limit are introduced:

a. Transfer entropy rate:

$$\dot{T}_{y \to x}^{(s,r)}(t) = \lim_{dt \to 0} \frac{1}{dt}\, E_P\!\left[\ln \frac{dP_{t+dt}[x_{t+dt} \mid x_{t-s}^{t}, y_{t-r}^{t}]}{dP_{t+dt}[x_{t+dt} \mid x_{t-s}^{t}]}\right]$$



Step 2: Continuous-time TE – pathwise TE

b. Pathwise transfer entropy: we characterise the information transfer over a finite time as the integrated or pathwise quantity:

$$\mathcal{T}_{y \to x}^{(s,r)}[x_{t_0-s}^{t}, y_{t_0-r}^{t}] = \ln \frac{dP_{X|\{Y\}}^{(s,r)}[x_{t_0}^{t} \mid x_{t_0-s}^{t_0}, \{y_{t_0-r}^{t}\}]}{dP_{X}^{(s)}[x_{t_0}^{t} \mid x_{t_0-s}^{t_0}]}.$$

- $dP_{X|\{Y\}}^{(s,r)}[x_{t_0}^{t} \mid x_{t_0-s}^{t_0}, \{y_{t_0-r}^{t}\}]$ is not (always) equivalent to a conditional probability; it is a product of conditionals on $y_{t'-r}^{t'}$ at each $t'$.
- TE as a log-likelihood for discrete time (Barnett and Bossomaier, 2012) is a special case of this formalism.
- This is local for one configuration / path realisation $[x_{t_0}^{t} \mid x_{t_0-s}^{t_0}, \{y_{t_0-r}^{t}\}]$.



Continuous-time TE – putting it together

The measures are defined to satisfy:

$$T_{y \to x}^{(s,r)}\big|_{t_0}^{t} = \int_{t_0}^{t} \dot{T}_{y \to x}^{(s,r)}(t')\, dt' = E_P\!\left[\mathcal{T}_{y \to x}^{(s,r)}[x_{t_0-s}^{t}, y_{t_0-r}^{t}]\right].$$

i.e. the total TE $T_{y \to x}^{(s,r)}\big|_{t_0}^{t}$ accumulated on the interval $[t_0, t)$ is:

- the ensemble average (over all paths) of the pathwise TE
$$\mathcal{T}_{y \to x}^{(s,r)}[x_{t_0-s}^{t}, y_{t_0-r}^{t}] = \ln \frac{dP_{X|\{Y\}}^{(s,r)}[x_{t_0}^{t} \mid x_{t_0-s}^{t_0}, \{y_{t_0-r}^{t}\}]}{dP_{X}^{(s)}[x_{t_0}^{t} \mid x_{t_0-s}^{t_0}]},$$
- or the transfer entropy rate integrated over the path length:
$$\dot{T}_{y \to x}^{(s,r)}(t) = \lim_{dt \to 0} \frac{1}{dt}\, E_P\!\left[\ln \frac{dP_{t+dt}[x_{t+dt} \mid x_{t-s}^{t}, y_{t-r}^{t}]}{dP_{t+dt}[x_{t+dt} \mid x_{t-s}^{t}]}\right].$$


Continuous-time TE – putting it together

The measures are defined to satisfy:

$$T_{y \to x}^{(s,r)}\big|_{t_0}^{t} = \int_{t_0}^{t} \dot{T}_{y \to x}^{(s,r)}(t')\, dt' = E_P\!\left[\mathcal{T}_{y \to x}^{(s,r)}[x_{t_0-s}^{t}, y_{t_0-r}^{t}]\right].$$

→ Dual definition of the transfer entropy rate, valid for stationary processes:

$$\dot{T}_{y \to x}^{(s,r)} = \frac{1}{(t - t_0)}\, E_P\!\left[\ln \frac{dP_{X|\{Y\}}^{(s,r)}[x_{t_0}^{t} \mid x_{t_0-s}^{t_0}, \{y_{t_0-r}^{t}\}]}{dP_{X}^{(s)}[x_{t_0}^{t} \mid x_{t_0-s}^{t_0}]}\right].$$



Continuous-time TE – local rate?

Does a local transfer entropy rate exist?

$$\dot{\mathcal{T}}_{y \to x}^{(s,r)}(t) = \lim_{dt \to 0} \frac{1}{dt} \ln \frac{dP_{t+dt}[x_{t+dt} \mid x_{t-s}^{t}, y_{t-r}^{t}]}{dP_{t+dt}[x_{t+dt} \mid x_{t-s}^{t}]}$$

No, this cannot be guaranteed (see the later examples).



Continuous-time TE – alignment with time-discretisation

Where such limits exist:

$$T_{y \to x}^{(s,r)}\big|_{t_0}^{t} = \lim_{\Delta t \to 0} E_P\!\left[\sum_{i=t_0/\Delta t + 1}^{t/\Delta t} \ln \frac{dP_{i\Delta t}(x_{i\Delta t} \mid x_{(i-k)\Delta t}^{(i-1)\Delta t},\, y_{(i-l)\Delta t}^{(i-1)\Delta t})}{dP_{i\Delta t}(x_{i\Delta t} \mid x_{(i-k)\Delta t}^{(i-1)\Delta t})}\right] = \lim_{\Delta t \to 0} \sum_{i=t_0/\Delta t + 1}^{t/\Delta t} T_{y \to x}^{(k,l)}\big|_{(i-1)\Delta t}^{i\Delta t}, \qquad k = \frac{s}{\Delta t} + 1,\; l = \frac{r}{\Delta t} + 1,$$

$$\dot{T}_{y \to x}^{(s,r)}(t) = \lim_{\Delta t \to 0} \frac{1}{\Delta t}\, T_{y \to x}^{(k,l)}\big|_{t-\Delta t}^{t}.$$

And for stationary and ergodic processes:

$$\dot{T}_{y \to x}^{(s,r)} = \lim_{\substack{\Delta t \to 0 \\ (t - t_0) \to \infty}} \frac{1}{(t - t_0)}\, \mathcal{T}_{y \to x}^{(k,l)}(x_{t_0/\Delta t - k}^{t/\Delta t},\, y_{t_0/\Delta t - l}^{t/\Delta t - 1}).$$

The leading $1/\Delta t$ term has been overlooked in previous work.


Continuous-time TE – alignment with time-discretisation

The leading $1/\Delta t$ term has been overlooked in previous work:

$$\dot{T}_{y \to x}^{(s,r)}(t) = \lim_{\Delta t \to 0} \frac{1}{\Delta t}\, T_{y \to x}^{(k,l)}\big|_{t-\Delta t}^{t}.$$

- Where a limiting rate exists, appropriateness of $\Delta t$ requires $T_{y \to x}^{(k,l)}\big|_{t-\Delta t}^{t}$ to scale with $\Delta t$.
- We know a limiting rate exists for coupled Wiener processes, where Granger causality scales with $\Delta t$ as $\Delta t \to 0$ (Barnett and Seth, 2016; Zhou et al., 2014).

Empirically, the number of bins for a given path history length diverges as $\Delta t \to 0$ – are there continuous-time processes which can be addressed?



TE in spike trains

Construct path probability densities of an observed spiking process (path) $x_{t_0}^{t}$, with $N_x$ spikes at times $t_i$, $i = 1 \ldots N_x$, in terms of spike rates $\lambda_x[x_{t_b}^{t_a}]$ (history-dependent on $x_{t_b}^{t_a}$):

$$p[x_{t_0}^{t} \mid x_{t_0-s}^{t_0}] = \lim_{dt \to 0} \prod_{i=1}^{N_x} \left\{ \underbrace{\lambda_x[x_{t_i-s}^{t_i}]\, dt}_{\text{spike } i}\; \underbrace{\prod_{j=0}^{(t_i - t_{i-1})/dt} \left(1 - \lambda_x[x_{t_{i-1}+j\,dt-s}^{t_{i-1}+j\,dt}]\, dt\right)}_{\text{no spikes between } i-1 \text{ and } i} \right\} \times \underbrace{\prod_{j=0}^{(t - t_{N_x})/dt} \left(1 - \lambda_x[x_{t_{N_x}+j\,dt-s}^{t_{N_x}+j\,dt}]\, dt\right)}_{\text{no spikes after } t_{N_x}}$$

Using the Taylor expansion $e^{-\lambda_x dt} = 1 - \lambda_x\, dt + O(dt^2)$ to first order in $dt$:

$$p[x_{t_0}^{t} \mid x_{t_0-s}^{t_0}] = \prod_{i=1}^{N_x} \lambda_x[x_{t_i-s}^{t_i}]\, dt\; \exp\!\left(-\int_{t_0}^{t} \lambda_x[x_{t'-s}^{t'}]\, dt'\right) + O(dt^2).$$
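The first-order form above is conveniently evaluated in log space. The sketch below assumes a hypothetical rate function of time (as for an inhomogeneous Poisson process; in the history-dependent case the same form holds with $\lambda_x$ evaluated on the observed path), and drops the $dt^{N_x}$ factor, which cancels in the TE ratio:

```python
import numpy as np

def log_path_density(spike_times, rate_fn, t0, t_end, n_grid=100_000):
    """ln p[x_{t0}^t] up to the dt^{N_x} factor:
    sum_i ln(lambda(t_i)) - integral of lambda(t') dt' over [t0, t_end)."""
    dt = (t_end - t0) / n_grid
    mids = t0 + (np.arange(n_grid) + 0.5) * dt      # midpoint rule for the integral
    integral = float(np.sum(rate_fn(mids)) * dt)
    spikes = np.asarray(spike_times, dtype=float)
    return float(np.sum(np.log(rate_fn(spikes)))) - integral

# Constant-rate sanity check: lambda = 2 on [0, 1) with one spike gives ln 2 - 2
print(log_path_density([0.4], lambda t: 2.0 * np.ones_like(t), 0.0, 1.0))  # ≈ -1.307
```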



TE in spike trains

Construct $p[x_{t_0}^{t} \mid x_{t_0-s}^{t_0}, \{y_{t_0-r}^{t_0}\}]$ similarly, and then get the pathwise TE via $\ln \frac{p[x_{t_0}^{t} \mid x_{t_0-s}^{t_0}, \{y_{t_0-r}^{t_0}\}]}{p[x_{t_0}^{t} \mid x_{t_0-s}^{t_0}]}$:

$$\mathcal{T}_{y \to x}^{(s,r)}(t_0, t) = \underbrace{\sum_{i=1}^{N_x} \ln \frac{\lambda_{x|y}[x_{t_i-s}^{t_i}, y_{t_i-r}^{t_i}]}{\lambda_x[x_{t_i-s}^{t_i}]}}_{\text{1. spike component (at target spikes)}} + \underbrace{\int_{t_0}^{t} \left(\lambda_x[x_{t'-s}^{t'}] - \lambda_{x|y}[x_{t'-s}^{t'}, y_{t'-r}^{t'}]\right) dt'}_{\text{2. non-spike component}}$$

The pathwise TE is composed of:
1. A discontinuous component at target spikes;
2. A continuously-varying component at non-spikes in the target.
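This decomposition can be sketched numerically, under the assumption that both intensity functions are known in closed form (a hypothetical helper for illustration, not the proposed estimator):

```python
import numpy as np

def pathwise_te(target_spikes, lam_cond, lam_marg, t0, t_end, n_grid=100_000):
    """Pathwise TE for a spike train: a sum of log-intensity ratios at target
    spikes, plus the integral of (lambda_x - lambda_{x|y}) over the interval."""
    jumps = sum(float(np.log(lam_cond(t) / lam_marg(t))) for t in target_spikes)
    dt = (t_end - t0) / n_grid
    mids = t0 + (np.arange(n_grid) + 0.5) * dt      # midpoint rule
    integral = float(np.sum(lam_marg(mids) - lam_cond(mids)) * dt)
    return jumps + integral

# Constant-intensity check: lambda_{x|y} = 2, lambda_x = 1 on [0, 1) with one
# target spike gives ln 2 + (1 - 2) = ln 2 - 1
two = lambda t: 2.0 * np.ones_like(t, dtype=float)
one = lambda t: 1.0 * np.ones_like(t, dtype=float)
print(pathwise_te([0.5], two, one, 0.0, 1.0))  # ≈ -0.307
```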



Local TE in spike trains

$$\mathcal{T}_{y \to x}^{(s,r)}(t_0, t) = \sum_{i=1}^{N_x} \ln \frac{\lambda_{x|y}[x_{t_i-s}^{t_i}, y_{t_i-r}^{t_i}]}{\lambda_x[x_{t_i-s}^{t_i}]} + \int_{t_0}^{t} \left(\lambda_x[x_{t'-s}^{t'}] - \lambda_{x|y}[x_{t'-s}^{t'}, y_{t'-r}^{t'}]\right) dt'$$

Consider analogues to local TE as contributions to the pathwise TE:

1. Associated with a spike: $\Delta\mathcal{T}_t^{(s,r)}(t_i) = \ln \frac{\lambda_{x|y}[x_{t_i-s}^{t_i}, y_{t_i-r}^{t_i}]}{\lambda_x[x_{t_i-s}^{t_i}]}$;
2. A local rate associated with non-spikes: $\dot{\mathcal{T}}_{nt}^{(s,r)}(t) = \lambda_x - \lambda_{x|y}$;

such that:

$$\mathcal{T}_{y \to x}^{(s,r)}(t_0, t) = \sum_{i=1}^{N_x} \Delta\mathcal{T}_t^{(s,r)}(t_i) + \int_{t_0}^{t} \dot{\mathcal{T}}_{nt}^{(s,r)}(t')\, dt'.$$


TE rate for spike trains

$$\mathcal{T}_{y \to x}^{(s,r)}(t_0, t) = \sum_{i=1}^{N_x} \ln \frac{\lambda_{x|y}[x_{t_i-s}^{t_i}, y_{t_i-r}^{t_i}]}{\lambda_x[x_{t_i-s}^{t_i}]} + \int_{t_0}^{t} \left(\lambda_x[x_{t'-s}^{t'}] - \lambda_{x|y}[x_{t'-s}^{t'}, y_{t'-r}^{t'}]\right) dt'$$

Consider the average TE rate:

- Since $\big\langle \lambda_x[x_{t-s}^{t}] \big\rangle = \big\langle \lambda_{x|y}[x_{t-s}^{t}, y_{t-r}^{t}] \big\rangle$, then $\big\langle \dot{\mathcal{T}}_{nt}^{(s,r)}(t) \big\rangle = 0$.
- ⇒ So the expected average contribution to the TE from non-spikes (in the target) disappears!

Implied empirical formalism (remove the $\langle\cdot\rangle$ under stationary self-averaging):

$$\dot{T}_{y \to x}^{(s,r)} = \lim_{(t - t_0) \to \infty} \frac{1}{(t - t_0)} \left\langle \sum_{i=1}^{N_x} \ln \frac{\lambda_{x|y}[x_{t_i-s}^{t_i}, y_{t_i-r}^{t_i}]}{\lambda_x[x_{t_i-s}^{t_i}]} \right\rangle.$$



Implications

- The average rate will be finite provided $\lambda_{x|y}[x_{t_i-s}^{t_i}, y_{t_i-r}^{t_i}]$ does not diverge. Suggestion: if a discrete-time binned TE on spike trains is not converging to zero as your $\Delta t$ gets smaller, then $\Delta t$ is not yet small enough to capture all features:

$$\dot{T}_{y \to x}^{(s,r)}(t) = \lim_{\Delta t \to 0} \frac{1}{\Delta t}\, T_{y \to x}^{(k,l)}\big|_{t-\Delta t}^{t}.$$

- Earlier suggestions to focus on transfer at spikes only were actually correct!
- Estimation only has to concentrate on the spiking component ...
- An estimator should detect both rate- and timing-based codings ...



Example 1 – Poisson copying processes

- Source $y$ has independent rate $\lambda_y$ and refractory period $\tau_r$;
- Target $x$ copies a $y$ spike with probability $a$ during an excited period of $\tau$ after $y$ spikes, then has refractory period $\tau_r$;
- $\tau \ge \tau_r$.

At spikes (to first order in $\lambda_y$):
- $\lambda_{x|y} = -\frac{1}{\tau}\ln(1 - a)$ (to give probability $a$ that a spike occurs by $\tau$);
- $\lambda_x = a\lambda_y$ (average spike rate).

⇒ Analytic solution (first order in $\lambda_y$; $s, r \to \infty$):

$$\dot{T}_{y \to x} = a\lambda_y \ln\!\left[\frac{-\ln(1 - a)}{a\lambda_y \tau}\right]$$
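The analytic solution can be evaluated directly; a small sketch (function name illustrative), showing that the TE per target spike, $\dot{T}/(a\lambda_y)$, grows as the source rate shrinks:

```python
import numpy as np

def te_rate_copy_process(a, lam_y, tau):
    """First-order analytic TE rate for the Poisson copying example:
    Tdot_{y->x} = a * lam_y * ln( -ln(1 - a) / (a * lam_y * tau) )."""
    return a * lam_y * np.log(-np.log(1.0 - a) / (a * lam_y * tau))

# TE per target spike, Tdot / (a * lam_y), rises as lam_y decreases
for lam_y in (1.0, 0.1, 0.01):
    print(lam_y, te_rate_copy_process(0.5, lam_y, tau=1.0) / (0.5 * lam_y))
```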


Example 1 – Poisson copying processes

Analytic solution (first order in $\lambda_y$; $s, r \to \infty$):

$$\dot{T}_{y \to x} = a\lambda_y \ln\!\left[\frac{-\ln(1 - a)}{a\lambda_y \tau}\right]$$

[Figure: $\lambda_y^{-1}(T_{y \to x} + O(\lambda_y^2))$ versus $a$, for $\lambda_y = 1, 0.1, 0.01, 0.001, 0.0001$ and $\tau = 1$]

- TE rate rises with $a$, because a resulting spike is more likely.
- TE rate rises with $\lambda_y$, because of more spiking events.
- TE per target spike, $\dot{T}/(a\lambda_y)$ (see plot), rises as $\lambda_y$ decreases.

Example 2 – Elevated spike rates

The source rate is independent of the target, $\lambda_{y|x} = \lambda_y$, while the target rate depends only on the time $t_1^y$ since the last source spike:

$$\lambda_{x|y}[x_{t-s}^{t}, y_{t-r}^{t}] = \lambda_{x|y}[t_1^y] = \begin{cases} 0.5 + 5\exp\!\left[-50\,(t_1^y - 0.5)^2\right], & 0 < t_1^y \le 1, \\ 0.5, & t_1^y > 1. \end{cases}$$




Example 2 – Elevated spike rates – Results

[Figure: three stacked panels versus t ∈ [0, 10]: target rates λ_x and λ_{x|y}; pathwise TE T_{y→x}(0, t); and TE rates (spiking ΔT_t/Δt and non-spiking Ṫ^ns_t), with events A–E marked.]

Pathwise TE changes discontinuously at target spikes:
A. Jumps up – λ_{x|y} > λ_x
B. Drops – λ_{x|y} < λ_x
C. Jumps reduce – multiple target spikes raise λ_x

Pathwise TE changes continuously over non-spiking periods:
D. Decreases – λ_{x|y} > λ_x
E. Increases – λ_{x|y} < λ_x
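Behaviours A–E follow directly from the continuous-time pathwise TE: the accumulated value jumps by ln(λ_{x|y}/λ_x) at each target spike and drifts continuously at rate −(λ_{x|y} − λ_x) between spikes. A hypothetical numerical sketch of this accumulation (function names are mine):

```python
import math

def pathwise_te(spike_times, lam_xy, lam_x, T, dt=1e-3):
    """Accumulate pathwise TE along one target realisation on [0, T]:
    jumps of ln(lam_xy/lam_x) at target spikes, plus a continuous
    drift of -(lam_xy - lam_x)*dt over non-spiking intervals.
    lam_xy and lam_x are callables giving the rates at time t."""
    te, t, i = 0.0, 0.0, 0
    spikes = sorted(spike_times)
    while t < T:
        # Discontinuous contribution at a target spike in this step.
        if i < len(spikes) and spikes[i] <= t + dt:
            te += math.log(lam_xy(spikes[i]) / lam_x(spikes[i]))
            i += 1
        # Continuous contribution over the non-spiking part of the step.
        te -= (lam_xy(t) - lam_x(t)) * dt
        t += dt
    return te
```

With λ_{x|y} = λ_x the jumps and drift cancel identically, so the pathwise TE stays at zero, as expected when the source is uninformative.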

Conclusion

• New formulation of TE developed for spike trains.
• New estimator for TE for spike trains proposed:
  • handles continuous time, and appears to represent spike trains efficiently for TE;
  • preliminary testing looks promising;
  • will be included in JIDT/IDTxl soon.
• Future work:
  • further testing on synthetic examples;
  • compare properties to other estimators, e.g. time-binned;
  • apply to neural spike recordings.


Acknowledgements

Thanks to:
• Collaborators on projects contributing to this talk: R. E. Spinney, M. Prokopenko, L. Barnett, M. Harré, T. Bossomaier, M. Rubinov, M. Wibral and V. Priesemann
• The Australian Research Council, via DECRA fellowship DE160100630 “Relating function of complex networks to structure using information theory” (2016-19)
• The UA-DAAD collaborative grant “Measuring neural information synthesis and its impairment” (2016-17)
• The USyd Faculty of Engineering & IT ECR grant “Measuring information flow in event-driven complex systems” (2016)




Advertisements!

• Java Information Dynamics Toolkit (JIDT) – http://jlizier.github.io/jidt/
• PhD scholarships available in my ARC DECRA project (complex networks + info theory)
• “Directed Information Measures in Neuroscience”, edited by M. Wibral, R. Vicente and J. T. Lizier, Springer, Berlin, 2014.
• “An Introduction to Transfer Entropy: Information Flow in Complex Systems”, Terry Bossomaier, Lionel Barnett, Michael Harré and Joseph T. Lizier, Springer, 2016.


References I

L. Barnett and T. Bossomaier. Transfer entropy as a log-likelihood ratio. Physical Review Letters, 109:138105, Sept. 2012. doi: 10.1103/physrevlett.109.138105.

L. Barnett and A. K. Seth. Detectability of Granger causality for subsampled continuous-time neurophysiological processes. Journal of Neuroscience Methods, in press, 2016. URL http://arxiv.org/abs/1606.08644.

L. Barnett, J. T. Lizier, M. Harré, A. K. Seth, and T. Bossomaier. Information flow in a kinetic Ising model peaks in the disordered phase. Physical Review Letters, 111(17), Oct. 2013. doi: 10.1103/physrevlett.111.177203.

J. Boedecker, O. Obst, J. T. Lizier, N. M. Mayer, and M. Asada. Information processing in echo state networks at the edge of chaos. Theory in Biosciences, 131(3):205–213, Sept. 2012. doi: 10.1007/s12064-011-0146-8.

S. Gao, G. Ver Steeg, and A. Galstyan. Efficient estimation of mutual information for strongly dependent variables, Mar. 2015. URL http://arxiv.org/abs/1411.2003.


References II

G. Gómez-Herrero, W. Wu, K. Rutanen, M. Soriano, G. Pipa, and R. Vicente. Assessing coupling dynamics from an ensemble of time series. Entropy, 17(4):1958–1970, Apr. 2015. doi: 10.3390/e17041958.

S. Ito, M. E. Hansen, R. Heiland, A. Lumsdaine, A. M. Litke, and J. M. Beggs. Extending transfer entropy improves identification of effective connectivity in a spiking cortical network model. PLoS ONE, 6(11):e27431, Nov. 2011. doi: 10.1371/journal.pone.0027431.

A. Kraskov, H. Stögbauer, and P. Grassberger. Estimating mutual information. Physical Review E, 69(6):066138, June 2004. doi: 10.1103/physreve.69.066138.

M. Lindner, R. Vicente, V. Priesemann, and M. Wibral. TRENTOOL: A Matlab open source toolbox to analyse information flow in time series data with transfer entropy. BMC Neuroscience, 12(1):119, Nov. 2011. doi: 10.1186/1471-2202-12-119.


References III

J. Lizier, M. Prokopenko, and A. Zomaya. A framework for the local information dynamics of distributed computation in complex systems. In M. Prokopenko, editor, Guided Self-Organization: Inception, volume 9 of Emergence, Complexity and Computation, pages 115–158. Springer, Berlin/Heidelberg, 2014. doi: 10.1007/978-3-642-53734-9_5.

J. T. Lizier, M. Prokopenko, and A. Y. Zomaya. Local information transfer as a spatiotemporal filter for complex systems. Physical Review E, 77(2):026110, Feb. 2008. doi: 10.1103/physreve.77.026110.

J. T. Lizier, M. Prokopenko, and A. Y. Zomaya. Information modification and particle collisions in distributed computation. Chaos, 20(3):037109, 2010. doi: 10.1063/1.3486801.

J. T. Lizier, J. Heinzle, A. Horstmann, J.-D. Haynes, and M. Prokopenko. Multivariate information-theoretic measures reveal directed information structure and task relevant changes in fMRI connectivity. Journal of Computational Neuroscience, 30(1):85–107, 2011. doi: 10.1007/s10827-010-0271-2.


References IV

J. T. Lizier, M. Prokopenko, and A. Y. Zomaya. Local measures of information storage in complex distributed computation. Information Sciences, 208:39–54, Nov. 2012. doi: 10.1016/j.ins.2012.04.016.

V. Priesemann, J. Lizier, M. Wibral, E. T. Bullmore, O. Paulsen, P. Charlesworth, and M. S. Schröter. Self-organization of information processing in developing neuronal networks. BMC Neuroscience, 16(Suppl 1):P221, 2015. doi: 10.1186/1471-2202-16-s1-p221.

T. Schreiber. Measuring information transfer. Physical Review Letters, 85(2):461–464, July 2000. doi: 10.1103/physrevlett.85.461.

N. Timme, S. Ito, M. Myroshnychenko, F.-C. Yeh, E. Hiolski, P. Hottowy, and J. M. Beggs. Multiplex networks of cortical and hippocampal neurons revealed at different timescales. PLoS ONE, 9(12):e115764, Dec. 2014. doi: 10.1371/journal.pone.0115764.


References V

N. M. Timme, S. Ito, M. Myroshnychenko, S. Nigam, M. Shimono, F.-C. Yeh, P. Hottowy, A. M. Litke, and J. M. Beggs. High-degree neurons feed cortical computations. PLoS Computational Biology, 12(5):e1004858, May 2016. doi: 10.1371/journal.pcbi.1004858.

R. Vicente, M. Wibral, M. Lindner, and G. Pipa. Transfer entropy – a model-free measure of effective connectivity for the neurosciences. Journal of Computational Neuroscience, 30(1):45–67, Feb. 2011. doi: 10.1007/s10827-010-0262-3.

M. Wibral, B. Rahm, M. Rieder, M. Lindner, R. Vicente, and J. Kaiser. Transfer entropy in magnetoencephalographic data: quantifying information flow in cortical and cerebellar networks. Progress in Biophysics and Molecular Biology, 105(1-2):80–97, Mar. 2011. doi: 10.1016/j.pbiomolbio.2010.11.006.

M. Wibral, J. T. Lizier, S. Vögler, V. Priesemann, and R. Galuske. Local active information storage as a tool to understand distributed neural information processing. Frontiers in Neuroinformatics, 8:1, 2014a. doi: 10.3389/fninf.2014.00001.


References VI

M. Wibral, R. Vicente, and M. Lindner. Transfer entropy in neuroscience. In M. Wibral, R. Vicente, and J. T. Lizier, editors, Directed Information Measures in Neuroscience, Understanding Complex Systems, pages 3–36. Springer, Berlin/Heidelberg, 2014b. doi: 10.1007/978-3-642-54474-3_1.

M. Wibral, J. T. Lizier, and V. Priesemann. Bits from brains for biologically inspired computing. Frontiers in Robotics and AI, 2:5, 2015.

D. Zhou, Y. Zhang, Y. Xiao, and D. Cai. Analysis of sampling artifacts on the Granger causality analysis for topology extraction of neuronal dynamics. Frontiers in Computational Neuroscience, 8:75, 2014. doi: 10.3389/fncom.2014.00075.
