Universal Estimation of Directed Information


Jiantao Jiao, Student Member, IEEE, Haim H. Permuter, Member, IEEE, Lei Zhao, Young-Han Kim, Senior Member, IEEE, and Tsachy Weissman, Fellow, IEEE

Abstract: Four estimators of the directed information rate between a pair of jointly stationary ergodic finite-alphabet processes are proposed, based on universal probability assignments. The first one is a Shannon–McMillan–Breiman type estimator, similar to those used by Verdú (2005) and Cai, Kulkarni, and Verdú (2006) for estimation of other information measures. We show the almost sure and L1 convergence properties of the estimator for any underlying universal probability assignment. The other three estimators map universal probability assignments to different functionals, each exhibiting relative merits such as smoothness, nonnegativity, and boundedness. We establish the consistency of these estimators in almost sure and L1 senses, and derive near-optimal rates of convergence in the minimax sense under mild conditions. These estimators carry over directly to estimating other information measures of stationary ergodic finite-alphabet processes, such as entropy rate and mutual information rate, with near-optimal performance, and provide alternatives to classical approaches in the existing literature. Guided by these theoretical results, the proposed estimators are implemented using the context-tree weighting algorithm as the universal probability assignment. Experiments on synthetic and real data are presented, demonstrating the potential of the proposed schemes in practice and the utility of directed information estimation in detecting and measuring causal influence and delay.

Index Terms: Causal influence, context-tree weighting, directed information, rate of convergence, universal probability assignment.

I. INTRODUCTION

First introduced by Marko [1] and Massey [2], directed information arises as a natural counterpart of mutual information for channel capacity when causal feedback from the receiver to the sender is present. In [3] and [4], Kramer extended the use of directed information to discrete memoryless networks with feedback, including the two-way channel and the multiple access channel. Tatikonda and Mitter [5] used directed information spectrum to establish a general feedback channel coding theorem for channels with memory. For a class of stationary channels with feedback, where the output is a function of the current and past m inputs and channel noise, Kim [6] proved that the feedback capacity is equal to the limit of the maximum normalized directed information from the input to the output. Permuter, Weissman, and Goldsmith [7] considered the capacity of discrete-time finite-state channels with feedback where the feedback is a time-invariant function of the output. Under mild conditions, they showed that the capacity is again the limit of the maximum normalized directed information. Recently, Permuter, Kim, and Weissman [8] showed that directed information plays an important role in portfolio theory, data compression, and hypothesis testing under causality constraints.

Beyond information theory, directed information is a valuable tool in biology, for it provides an alternative to the notion of Granger causality [9], which has been perhaps the most widely used means of identifying causal influence between two random processes.
For example, Mathai, Martins, and Shapiro [10] used directed information to identify pairwise influence in gene networks. Similarly, Rao, Hero, States, and Engel [11] used directed information to test the direction of influence in gene networks.

Since directed information has significance in various fields, it is of both theoretical and practical importance to develop efficient methods of estimating it. The problem of estimating information measures, such as entropy, relative entropy, and mutual information, has been extensively studied in the literature. Verdú [12] gave an overview of universal estimation of information measures. Wyner and Ziv [13] applied the idea of Lempel–Ziv parsing to estimate entropy rate, which converges in probability for all stationary ergodic processes. Ziv and Merhav [14] used Lempel–Ziv parsing to estimate relative entropy (Kullback–Leibler divergence) and established consistency under the assumption that the observations are generated by independent Markov sources. Cai, Kulkarni, and Verdú [15] proposed two universal relative entropy estimators for finite-alphabet sources, one based on the Burrows–Wheeler transform (BWT) [16] and the other based on the context-tree weighting (CTW) algorithm [17]. The BWT-based estimator was applied in universal entropy estimation by Cai, Kulkarni, and Verdú [18], while the CTW-based one was applied in universal erasure entropy estimation by Yu and Verdú [19].

[First-page footnote:] Manuscript received Month 00, 0000; revised Month 00, 0000; accepted Month 00, 0000. Date of current version Month 00, 0000. This work was supported in part by the Center for Science of Information (CSoI), an NSF Science and Technology Center, under grant agreement CCF-0939370, the US–Israel Binational Science Foundation (BSF) Grant 2008402, NSF Grant CCF-0939370, and the Air Force Office of Scientific Research (AFOSR) through Grant FA9550-10-1-0124. Haim H. Permuter was supported in part by the Marie Curie Reintegration Fellowship. The material in this paper was presented in part at the 2010 IEEE International Symposium on Information Theory, Austin, TX, and the 2012 IEEE International Symposium on Information Theory, Cambridge, MA. Jiantao Jiao is with the Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA. Haim Permuter is with the Department of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel. Lei Zhao was with the Department of Electrical Engineering, Stanford University; he is now with Jump Operations, Chicago, IL 60654, USA. Young-Han Kim is with the Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA 92093, USA. Tsachy Weissman is with the Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA. Communicated by I. Kontoyiannis, Associate Editor for Shannon Theory. Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TIT.2013.0000000
For the problem of estimating directed information, Quinn, Coleman, Kiyavash, and Hatsopoulos [20] developed an estimator to infer causality in an ensemble of neural spike train recordings. Assuming a parametric generalized linear model and stationary ergodic Markov processes, they established strong consistency results. Compared to [20], Zhao, Kim, Permuter, and Weissman [21] focused on universal methods for arbitrary stationary ergodic processes with finite alphabet and showed their L1 consistencies.

As an improvement and extension of [21], the main contribution of this paper is a general framework for estimating information measures of stationary ergodic finite-alphabet processes, using "single-letter" information-theoretic functionals. Although our methods can be applied in estimating a number of information measures, for concreteness and relevance to emerging applications we focus on estimating the directed information rate between a pair of jointly stationary ergodic finite-alphabet processes. The first proposed estimator is adapted from the universal relative entropy estimator in [15] using the CTW algorithm, and we provide a refined analysis yielding strong consistency results. We further propose three additional estimators in a unified framework, present both weak and strong consistency results, and establish near-optimal rates of convergence under mild conditions. We then employ our estimators on both simulated and real data, showing their effectiveness in measuring channel delays and causal influences between different processes. In particular, we use these estimators on the daily stock market data from 1990 to 2011 to observe a significant level of causal influence from the Dow Jones Industrial Average to the Hang Seng Index, but relatively low causal influence in the reverse direction.

The rest of the paper is organized as follows. Section II reviews preliminaries on directed information, universal probability assignments, and the context-tree weighting algorithm. Section III presents our proposed estimators and their basic properties.

II. PRELIMINARIES

Notation: We denote random variables by uppercase letters (e.g., X, Y) and their realizations by lowercase letters, writing (X_1, X_2, ..., X_n) as X^n and (x_1, x_2, ..., x_n) as x^n. Calligraphic letters 𝒳, 𝒴, ... denote the alphabets of X, Y, ..., and |𝒳| denotes the cardinality of 𝒳. Boldface letters X, Y, ... denote stochastic processes, and throughout this paper they are finite-alphabet.

Given a probability law P, P(x^i) = P{X^i = x^i} denotes the probability mass function (pmf) of X^i and P(x_i | x^{i-1}) denotes the conditional pmf of X_i given {X^{i-1} = x^{i-1}}; i.e., with slight abuse of notation, x_i here is a dummy variable and P(x_i | x^{i-1}) is an element of M(𝒳), the probability simplex on 𝒳, representing the said conditional pmf. Accordingly, P(x_i | X^{i-1}) denotes the conditional pmf P(x_i | x^{i-1}) evaluated for the random sequence X^{i-1}, which is an M(𝒳)-valued random vector, while P(X_i | X^{i-1}) is the random variable denoting the X_i-th component of P(x_i | X^{i-1}). Throughout this paper, log(·) is base 2 and ln(·) is base e.

A. Directed Information

Given a pair of random sequences X^n and Y^n, the directed information from X^n to Y^n is defined as

    I(X^n → Y^n) ≜ Σ_{i=1}^n I(X^i; Y_i | Y^{i-1})          (1)
                 = H(Y^n) − H(Y^n ‖ X^n),                   (2)

where H(Y^n ‖ X^n) is the causally conditional entropy [3], defined as

    H(Y^n ‖ X^n) ≜ Σ_{i=1}^n H(Y_i | Y^{i-1}, X^i).         (3)

Compared to mutual information

    I(X^n; Y^n) = H(Y^n) − H(Y^n | X^n),                    (4)

directed information in (2) has the causally conditional entropy in place of the conditional entropy. Thus, unlike mutual information, directed information is not symmetric, i.e., I(Y^n → X^n) ≠ I(X^n → Y^n) in general.
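To make definitions (1)-(3) concrete, here is a small plug-in computation (our illustration, not code from the paper): given a fully known joint pmf over length-n sequence pairs, it evaluates I(X^n → Y^n) = H(Y^n) − Σ_i H(Y_i | Y^{i-1}, X^i) by brute-force marginalization. The binary-channel example at the bottom is hypothetical.

```python
import itertools
from collections import defaultdict
from math import log2

def entropy(pmf):
    """Shannon entropy (bits) of a pmf given as {outcome: prob}."""
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def marginal(p_joint, keep):
    """Marginalize a joint pmf over sequence pairs; 'keep' maps an
    outcome (x_seq, y_seq) to the coordinates to retain."""
    out = defaultdict(float)
    for (xs, ys), p in p_joint.items():
        out[keep(xs, ys)] += p
    return dict(out)

def directed_information(p_joint, n):
    """I(X^n -> Y^n) = H(Y^n) - sum_i H(Y_i | Y^{i-1}, X^i), cf. (1)-(3)."""
    h_y = entropy(marginal(p_joint, lambda xs, ys: ys))
    h_causal = 0.0
    for i in range(1, n + 1):
        # chain rule: H(Y_i | Y^{i-1}, X^i) = H(X^i, Y^i) - H(X^i, Y^{i-1})
        h_causal += entropy(marginal(p_joint, lambda xs, ys: (xs[:i], ys[:i])))
        h_causal -= entropy(marginal(p_joint, lambda xs, ys: (xs[:i], ys[:i - 1])))
    return h_y - h_causal

# Toy check: X_i i.i.d. fair bits, Y_i = X_i flipped w.p. 0.1, n = 2.
n, eps = 2, 0.1
p = {}
for xs in itertools.product((0, 1), repeat=n):
    for ys in itertools.product((0, 1), repeat=n):
        pr = 1.0
        for x, y in zip(xs, ys):
            pr *= 0.5 * ((1 - eps) if x == y else eps)
        p[(xs, ys)] = pr
print(directed_information(p, n))  # ~1.062 = 2 * (1 - h2(0.1)) bits
```

For this memoryless channel the value matches the closed form n(1 − h2(ε)), which is a useful sanity check on the marginalization logic.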
Recommended publications
  • An Information-Theoretic Perspective on Credit Assignment in Reinforcement Learning
An Information-Theoretic Perspective on Credit Assignment in Reinforcement Learning. Dilip Arumugam (Department of Computer Science, Stanford University), Peter Henderson (Department of Computer Science, Stanford University), Pierre-Luc Bacon (Mila, University of Montreal).

Abstract: How do we formalize the challenge of credit assignment in reinforcement learning? Common intuition would draw attention to reward sparsity as a key contributor to difficult credit assignment and traditional heuristics would look to temporal recency for the solution, calling upon the classic eligibility trace. We posit that it is not the sparsity of the reward itself that causes difficulty in credit assignment, but rather the information sparsity. We propose to use information theory to define this notion, which we then use to characterize when credit assignment is an obstacle to efficient learning. With this perspective, we outline several information-theoretic mechanisms for measuring credit under a fixed behavior policy, highlighting the potential of information theory as a key tool towards provably-efficient credit assignment.

1 Introduction. The credit assignment problem in reinforcement learning [Minsky, 1961, Sutton, 1985, 1988] is concerned with identifying the contribution of past actions on observed future outcomes. Of particular interest to the reinforcement-learning (RL) problem [Sutton and Barto, 1998] are observed future returns and the value function, which quantitatively answers "how does choosing an action a in state s affect future return?" Indeed, given the challenge of sample-efficient RL in long-horizon, sparse-reward tasks, many approaches have been developed to help alleviate the burdens of credit assignment [Sutton, 1985, 1988, Sutton and Barto, 1998, Singh and Sutton, 1996, Precup et al., 2000, Riedmiller et al., 2018, Harutyunyan et al., 2019, Hung et al., 2019, Arjona-Medina et al., 2019, Ferret et al., 2019, Trott et al., 2019, van Hasselt et al., 2020].
  • Task-Driven Estimation and Control Via Information Bottlenecks
Task-Driven Estimation and Control via Information Bottlenecks. Vincent Pacelli and Anirudha Majumdar.

Abstract: Our goal is to develop a principled and general algorithmic framework for task-driven estimation and control for robotic systems. State-of-the-art approaches for controlling robotic systems typically rely heavily on accurately estimating the full state of the robot (e.g., a running robot might estimate joint angles and velocities, torso state, and position relative to a goal). However, full state representations are often excessively rich for the specific task at hand and can lead to significant computational inefficiency and brittleness to errors in state estimation. In contrast, we present an approach that eschews such rich representations and seeks to create task-driven representations. The key technical insight is to leverage the theory of information bottlenecks to formalize the notion of a "task-driven representation" in terms of information theoretic quantities that measure the minimality of a representation. We propose novel iterative algorithms for automatically synthesizing (offline) a task-driven representation (given in terms of a set of task-relevant variables (TRVs)) and a performant control policy that …

[Fig. 1 caption:] A schematic of our technical approach. We seek to synthesize (offline) a minimalistic set of task-relevant variables (TRVs) x̃_t that create a bottleneck between the full state x_t and the control input u_t. These TRVs are estimated online in order to apply the policy π_t. We demonstrate our approach on a spring-loaded inverted pendulum model whose goal is to run to a target location. Our approach automatically synthesizes a one-dimensional TRV x̃_t sufficient for achieving this task.

[From the introduction:] … estimated. Second, since only a few prominent variables need to be estimated, fewer sources of measurement uncertainty result in a more robust policy.
  • Interpretations of Directed Information in Portfolio Theory, Data Compression, and Hypothesis Testing
Interpretations of Directed Information in Portfolio Theory, Data Compression, and Hypothesis Testing. Haim H. Permuter, Young-Han Kim, and Tsachy Weissman.

Abstract: We investigate the role of Massey's directed information in portfolio theory, data compression, and statistics with causality constraints. In particular, we show that directed information is an upper bound on the increment in growth rates of optimal portfolios in a stock market due to causal side information. This upper bound is tight for gambling in a horse race, which is an extreme case of stock markets. Directed information also characterizes the value of causal side information in instantaneous compression and quantifies the benefit of causal inference in joint compression of two stochastic processes. In hypothesis testing, directed information evaluates the best error exponent for testing whether a random process Y causally influences another process X or not. These results give a natural interpretation of directed information I(Y^n → X^n) as the amount of information that a random sequence Y^n = (Y_1, Y_2, ..., Y_n) causally provides about another random sequence X^n = (X_1, X_2, ..., X_n). A new measure, directed lautum information, is also introduced and interpreted in portfolio theory, data compression, and hypothesis testing.

Index Terms: Causal conditioning, causal side information, directed information, hypothesis testing, instantaneous compression, Kelly gambling, lautum information, portfolio theory.

I. INTRODUCTION. Mutual information I(X; Y) between two random variables X and Y arises as the canonical answer to a variety of questions in science and engineering. Most notably, Shannon [1] showed that the capacity C, the maximal data rate for reliable communication, of a discrete memoryless channel p(y|x) with input X and output Y is given by C = max_{p(x)} I(X; Y).
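As a numerical sanity check on the horse-race case (a toy example of our own, with made-up probabilities and fair odds of 2): for a single race, the increment in the Kelly bettor's optimal growth rate due to side information equals I(X; Y); across n causally revealed rounds these increments sum to the directed information I(Y^n → X^n).

```python
from math import log2

# Two horses (x), binary side information (y), fair odds o(x) = 2.
# Joint pmf p(x, y): made-up numbers for illustration.
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
p_x = {x: sum(p for (xx, _), p in p_xy.items() if xx == x) for x in (0, 1)}
p_y = {y: sum(p for (_, yy), p in p_xy.items() if yy == y) for y in (0, 1)}

# Kelly bets proportionally to beliefs: b(x) = p(x) or b(x|y) = p(x|y).
growth_no_si = sum(p * log2(2 * p_x[x]) for (x, y), p in p_xy.items())
growth_si = sum(p * log2(2 * p_xy[x, y] / p_y[y]) for (x, y), p in p_xy.items())
mi = sum(p * log2(p_xy[x, y] / (p_x[x] * p_y[y])) for (x, y), p in p_xy.items())

print(round(growth_si - growth_no_si, 6) == round(mi, 6))  # True (~0.278 bits)
```

The identity holds for any odds o(x), since both growth rates contain the same E[log o(X)] term and the difference reduces to H(X) − H(X|Y).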
  • Analogy Between Gambling and Measurements-Based Work Extraction
Analogy Between Gambling and Measurement-Based Work Extraction. Dror A. Vinkler, Haim H. Permuter and Neri Merhav. Irwin and Joan Jacobs Center for Communication and Information Technologies, Department of Electrical Engineering, Technion - Israel Institute of Technology, Haifa 32000, Israel. CCIT Report #857, April 2014.

Abstract: In information theory, one area of interest is gambling, where mutual information characterizes the maximal gain in wealth growth rate due to knowledge of side information; the betting strategy that achieves this maximum is named the Kelly strategy. In the field of physics, it was recently shown that mutual information can characterize the maximal amount of work that can be extracted from a single heat bath using measurement-based control protocols, i.e., using "information engines". However, to the best of our knowledge, no relation between gambling and information engines has been presented before. In this paper, we briefly review the two concepts and then demonstrate an analogy between gambling, where bits are converted into wealth, and information engines, where bits representing measurements are converted into energy. From this analogy follows an extension of gambling to the continuous-valued case, which is shown to be useful for investments in currency exchange rates or in the stock market using options. Moreover, the analogy enables us to use well-known methods and results from one field to solve problems in the other. We present three such cases: maximum work extraction when the probability distributions governing the system and measurements are unknown, work extraction when some energy is lost in each cycle, e.g., due to friction, and an analysis of systems with memory.
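Side by side, the two conversions the abstract pairs up look like this (standard textbook forms from the broader gambling and stochastic-thermodynamics literature, not equations quoted from this report; W denotes work and k_B the Boltzmann constant):

```latex
% Gambling: optimal growth-rate gain from side information Y about the
% winner X of a horse race with odds o(x) (Kelly betting b*(x|y) = p(x|y)):
\Delta W_{\text{growth}}
  = \mathbb{E}\bigl[\log_2 b^{*}(X \mid Y)\, o(X)\bigr]
  - \mathbb{E}\bigl[\log_2 b^{*}(X)\, o(X)\bigr]
  = I(X;Y) \quad \text{[bits per race]}

% Information engine: work extractable from a single heat bath at
% temperature T using a measurement M of the system state X:
\langle W_{\text{ext}} \rangle \;\le\; k_B T \ln 2 \cdot I(X;M)
\quad \text{[$I$ in bits; a Szilard engine attains equality]}
```

In both lines the same quantity of bits sets the budget, which is exactly the bits-to-wealth versus bits-to-energy analogy the report develops.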
  • Universal Estimation of Directed Information Lei Zhao, Haim Permuter, Young-Han Kim, and Tsachy Weissman
ISIT 2010, Austin, Texas, U.S.A., June 13-18, 2010. Universal Estimation of Directed Information. Lei Zhao, Haim Permuter, Young-Han Kim, and Tsachy Weissman.

Abstract: In this paper, we develop a universal algorithm to estimate Massey's directed information for stationary ergodic processes. The sequential probability assignment induced by a universal source code plays the critical role in the estimation. In particular, we use context tree weighting to implement the algorithm. Some numerical results are provided to illustrate the performance of the proposed algorithm.

I. INTRODUCTION. First introduced by Massey in [1], directed information arises as a natural counterpart of mutual information for channel capacity when feedback is present. In [2] and [3], Kramer extended the use of directed information to discrete memoryless networks with feedback, including the two-way channel and the multiple access channel. For a class of stationary channels with feedback, where the output is a function of the current and past m inputs and channel noise, Kim [4] proved that the feedback capacity is equal to the limit of the supremum of the normalized directed information from the input to the output.

Notation: We use capital letter X to denote a random variable and use small letter x to denote the corresponding realization or constant. Calligraphic letter 𝒳 denotes the alphabet of X and |𝒳| denotes the cardinality of the alphabet.

II. PRELIMINARIES. We first give the mathematical definitions of directed information and causally conditional entropy, and then discuss the relation between universal sequential probability assignment and universal source coding.

A. Directed information. Directed information from X^n to Y^n is defined as

    I(X^n → Y^n) = H(Y^n) − H(Y^n ‖ X^n),    (1)

where H(Y^n ‖ X^n) is the causally conditional entropy [2], defined as

    H(Y^n ‖ X^n) = Σ_{i=1}^n H(Y_i | Y^{i-1}, X^i).
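The estimator this abstract describes plugs a sequential probability assignment Q into the definitions above: Î(X^n → Y^n)/n = (1/n) Σ_i [log Q(y_i | y^{i-1}, x^i) − log Q(y_i | y^{i-1})]. Below is a minimal sketch in that spirit; for simplicity it uses a fixed-order Krichevsky-Trofimov assignment as a stand-in for context tree weighting (which would instead mix over all context-tree depths), and the data-generating example is hypothetical.

```python
import random
from collections import defaultdict
from math import log2

class KTMarkov:
    """Krichevsky-Trofimov sequential probability assignment over binary
    symbols with a fixed-order context; a simple stand-in for CTW."""
    def __init__(self):
        self.counts = defaultdict(lambda: [0, 0])

    def prob(self, symbol, context):
        c = self.counts[context]
        return (c[symbol] + 0.5) / (c[0] + c[1] + 1.0)

    def update(self, symbol, context):
        self.counts[context][symbol] += 1

def estimate_di_rate(x, y, order=1):
    """(1/n) sum_i [log Q(y_i | y-past, x-past-and-present) - log Q(y_i | y-past)],
    i.e. an estimate of H(Y) - H(Y || X) built from two assignments."""
    q_xy, q_y = KTMarkov(), KTMarkov()
    total = 0.0
    for i, yi in enumerate(y):
        ctx_y = tuple(y[max(0, i - order):i])
        ctx_xy = ctx_y + tuple(x[max(0, i - order):i + 1])  # x up to time i
        total += log2(q_xy.prob(yi, ctx_xy)) - log2(q_y.prob(yi, ctx_y))
        q_xy.update(yi, ctx_xy)
        q_y.update(yi, ctx_y)
    return total / len(y)

# Hypothetical data: y is x delayed one step and flipped w.p. 0.1, so the
# true DI rate from X to Y is 1 - h2(0.1), about 0.531 bits per symbol.
random.seed(0)
x = [random.getrandbits(1) for _ in range(50000)]
y = [0] + [xi if random.random() < 0.9 else 1 - xi for xi in x[:-1]]
print(estimate_di_rate(x, y))  # approaches ~0.531 from below as n grows
```

The estimate approaches the true rate from below because the redundancy of the sequential assignment vanishes only as n grows; CTW plays the same role here but with guarantees over all bounded-depth tree sources.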
  • Information Theoretic Causal Effect Quantification
Information Theoretic Causal Effect Quantification. Aleksander Wieczorek* and Volker Roth. Department of Mathematics and Computer Science, University of Basel, CH-4051 Basel, Switzerland (*correspondence: [email protected]). Received: 31 August 2019; Accepted: 30 September 2019; Published: 5 October 2019.

Abstract: Modelling causal relationships has become popular across various disciplines. Most common frameworks for causality are the Pearlian causal directed acyclic graphs (DAGs) and the Neyman-Rubin potential outcome framework. In this paper, we propose an information theoretic framework for causal effect quantification. To this end, we formulate a two step causal deduction procedure in the Pearl and Rubin frameworks and introduce its equivalent which uses information theoretic terms only. The first step of the procedure consists of ensuring no confounding or finding an adjustment set with directed information. In the second step, the causal effect is quantified. We subsequently unify previous definitions of directed information present in the literature and clarify the confusion surrounding them. We also motivate using chain graphs for directed information in time series and extend our approach to chain graphs. The proposed approach serves as a translation between causality modelling and information theory.

Keywords: directed information; conditional mutual information; directed mutual information; confounding; causal effect; back-door criterion; average treatment effect; potential outcomes; time series; chain graph

1. Introduction. Causality modelling has recently gained popularity in machine learning. Time series, graphical models, deep generative models and many others have been considered in the context of identifying causal relationships. One hopes that by understanding causal mechanisms governing the systems in question, better results in many application areas can be obtained, varying from biomedical [1,2], climate related [3] to information technology (IT) [4], financial [5] and economic [6,7] data.
  • Capacity of Continuous Channels with Memory Via Directed Information Neural Estimator
Capacity of Continuous Channels with Memory via Directed Information Neural Estimator. Ziv Aharoni (Ben Gurion University), Dor Tsur (Ben Gurion University), Ziv Goldfeld (Cornell University), Haim H. Permuter (Ben Gurion University).

Abstract: Calculating the capacity (with or without feedback) of channels with memory and continuous alphabets is a challenging task. It requires optimizing the directed information (DI) rate over all channel input distributions. The objective is a multi-letter expression, whose analytic solution is only known for a few specific cases. When no analytic solution is present or the channel model is unknown, there is no unified framework for calculating or even approximating capacity. This work proposes a novel capacity estimation algorithm that treats the channel as a 'black-box', both when feedback is or is not present. The algorithm has two main ingredients: (i) a neural distribution transformer (NDT) model that shapes a noise variable into the channel input distribution, which we are able to sample, and (ii) the DI neural estimator (DINE) that estimates the communication …

[From the paper body:] … (see [24] for further details). Built on (3), for stationary processes, the DI rate is defined as

    I(X → Y) := lim_{n→∞} (1/n) I(X^n → Y^n).    (4)

As shown in [8], when feedback is not present, the optimization problem (2) (which amounts to optimizing over P_{X^n} rather than P_{X^n ‖ Y^n}) coincides with (1). Thus, DI provides a unified framework for representing both FF and FB capacities. Computing C_FF and C_FB requires solving a multi-letter optimization problem. Closed form solutions to this challenging task are known only in several special cases.
  • Information Structures for Causally Explainable Decisions
Information Structures for Causally Explainable Decisions. Louis Anthony Cox, Jr. (Department of Business Analytics, University of Colorado School of Business, and MoirAI, 503 N. Franklin Street, Denver, CO 80218, USA).

Abstract: For an AI agent to make trustworthy decision recommendations under uncertainty on behalf of human principals, it should be able to explain why its recommended decisions make preferred outcomes more likely and what risks they entail. Such rationales use causal models to link potential courses of action to resulting outcome probabilities. They reflect an understanding of possible actions, preferred outcomes, the effects of action on outcome probabilities, and acceptable risks and trade-offs: the standard ingredients of normative theories of decision-making under uncertainty, such as expected utility theory. Competent AI advisory systems should also notice changes that might affect a user's plans and goals. In response, they should apply both learned patterns for quick response (analogous to fast, intuitive "System 1" decision-making in human psychology) and also slower causal inference and simulation, decision optimization, and planning algorithms (analogous to deliberative "System 2" decision-making in human psychology) to decide how best to respond to changing conditions. Concepts of conditional independence, conditional probability tables (CPTs) or models, causality, heuristic search for optimal plans, uncertainty reduction, and value of information (VoI) provide a rich, principled framework for recognizing and responding to relevant changes and features of decision problems via both learned and calculated responses. This paper reviews how these and related concepts can be used to identify probabilistic causal dependencies among variables, detect changes that matter for achieving goals, represent them efficiently to support responses on multiple time scales, and evaluate and update causal models and plans in light of new data.
  • A Unified Bellman Equation for Causal Information and Value in Markov Decision Processes
A Unified Bellman Equation for Causal Information and Value in Markov Decision Processes. Stas Tiomkin, Naftali Tishby.

Abstract: The interaction between an artificial agent and its environment is bi-directional. The agent extracts relevant information from the environment, and affects the environment by its actions in return to accumulate high expected reward. Standard reinforcement learning (RL) deals with the expected reward maximization. However, there are always information-theoretic limitations that restrict the expected reward, which are not properly considered by the standard RL. In this work we consider RL objectives with information-theoretic limitations. For the first time we derive a Bellman-type recursive equation for the causal information between the environment and the agent, which is combined plausibly with the Bellman recursion for the value function.

[From the introduction:] … interaction patterns (infinite state and action trajectories) define the typical behavior of an organism in a given environment. This typical behavior is crucial for the design and analysis of intelligent systems. In this work we derive typical behavior within the formalism of a reinforcement learning (RL) model subject to information-theoretic constraints. In the standard RL model (Sutton & Barto, 1998), an artificial organism generates an optimal policy through an interaction with the environment. Typically, the optimality is taken with regard to a reward, such as energy, money, social network 'likes/dislikes', time, etc. Intriguingly, the reward-associated value function (an average accumulated reward) possesses the same property for different types of rewards: the optimal value function is a Lyapunov function (Perkins & Barto, 2002), a generalized energy function. A principled way to solve RL (as well as Optimal Con…
  • Transfer-Entropy-Regularized Markov Decision Processes (Takashi Tanaka, Henrik Sandberg, Mikael Skoglund)
Transfer-Entropy-Regularized Markov Decision Processes. Takashi Tanaka, Henrik Sandberg, Mikael Skoglund.

Abstract: We consider the framework of transfer-entropy-regularized Markov Decision Process (TERMDP) in which the weighted sum of the classical state-dependent cost and the transfer entropy from the state random process to the control random process is minimized. Although TERMDPs are generally formulated as nonconvex optimization problems, we derive an analytical necessary optimality condition expressed as a finite set of nonlinear equations, based on which an iterative forward-backward computational procedure similar to the Arimoto-Blahut algorithm is proposed. It is shown that every limit point of the sequence generated by the proposed algorithm is a stationary point of the TERMDP. Applications of TERMDPs are discussed in the context of networked control systems theory and non-equilibrium thermodynamics. The proposed algorithm is applied to an information-constrained maze navigation problem, whereby we study how the price of information qualitatively alters the optimal decision policies.

[From the introduction:] … can be used as a proxy for the data rate on communication channels, and thus solving TERMDP provides a fundamental performance limitation of such systems. The second application of TERMDP is non-equilibrium thermodynamics. There has been renewed interest in the generalized second law of thermodynamics, in which transfer entropy arises as a key concept [10]. TERMDP in this context can be interpreted as the problem of operating thermal engines at a nonzero work rate near the fundamental limitation of the second law of thermodynamics. In contrast to the standard MDP [11], TERMDP penalizes the information flow from the underlying state random process to the control random process. Consequently, TERMDP …
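In symbols, the objective the abstract describes can be written roughly as follows (our paraphrase; the stage cost c_t, the weight β, and the exact conditioning in the information term are generic placeholders rather than the paper's notation):

```latex
\min_{\{p(u_t \mid x^t,\, u^{t-1})\}_{t=1}^{T}}
  \;\mathbb{E}\!\left[\sum_{t=1}^{T} c_t(X_t, U_t)\right]
  \;+\; \beta\, I(X^T \to U^T),
\qquad
I(X^T \to U^T) \;=\; \sum_{t=1}^{T} I(X^t;\, U_t \mid U^{t-1}).
```

Here β prices the bits flowing from the state process to the control process, which is what lets the same objective serve as a data-rate proxy in networked control and as a work-rate constraint in the thermodynamic reading.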
  • Directed Information for Complex Network Analysis from Multivariate Time Series
DIRECTED INFORMATION FOR COMPLEX NETWORK ANALYSIS FROM MULTIVARIATE TIME SERIES, by Ying Liu. A dissertation submitted to Michigan State University in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Electrical Engineering, 2012.

ABSTRACT: Complex networks, ranging from gene regulatory networks in biology to social networks in sociology, have received growing attention from the scientific community. The analysis of complex networks employs techniques from graph theory, machine learning and signal processing. In recent years, complex network analysis tools have been applied to neuroscience and neuroimaging studies to have a better understanding of the human brain. In this thesis, we focus on inferring and analyzing the complex functional brain networks underlying multichannel electroencephalogram (EEG) recordings. Understanding this complex network requires the development of a measure to quantify the relationship between multivariate time series, algorithms to reconstruct the network based on the pairwise relationships, and identification of functional modules within the network. Functional and effective connectivity are two widely studied approaches to quantify the connectivity between two recordings. Unlike functional connectivity, which only quantifies the statistical dependencies between two processes by measures such as cross correlation, phase synchrony, and mutual information (MI), effective connectivity quantifies the influence one node exerts on another node. The directed information (DI) measure is one of the approaches that has been recently proposed to capture the causal relationships between two time series. Two major challenges remain with the application of DI to multivariate data, which include the computational complexity of computing DI with increasing signal length and the accuracy of estimation from limited realizations of the data.
  • Echo State Network Models for Nonlinear Granger Causality
Echo State Network models for nonlinear Granger causality. Andrea Duggento (Department of Biomedicine and Prevention, University of Rome Tor Vergata, Rome, Italy), Maria Guerrisi (Department of Biomedicine and Prevention, University of Rome Tor Vergata, Rome, Italy), Nicola Toschi (Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Boston, MA, USA). bioRxiv preprint doi: https://doi.org/10.1101/651679.

Abstract: While Granger Causality (GC) has been often employed in network neuroscience, most GC applications are based on linear multivariate autoregressive (MVAR) models. However, real-life systems like biological networks exhibit notable nonlinear behavior, hence undermining the validity of MVAR-based GC (MVAR-GC). Current nonlinear GC estimators only cater for additive nonlinearities or, alternatively, are based on recurrent neural networks (RNN) or Long short-term memory (LSTM) networks, which present considerable training difficulties and tailoring needs. We define a novel approach to estimating nonlinear, directed within-network interactions through a RNN class termed echo-state networks (ESN), where training is replaced by random initialization of an internal basis based on orthonormal matrices.

[From the introduction:] … for reconstructing neither the nonlinear components of neural coupling, nor the multiple nonlinearities and time-scales which concur to generating the signals. Instead, neural network (NN) models more flexibly account for multiscale nonlinear dynamics and interactions [10]. For example, multi-layer perceptrons [11] or neural networks with non-uniform embeddings [12] have been used to introduce nonlinear estimation capabilities, which also include "extended" GC [13] and wavelet-based approaches [14]. Also, recent preliminary work has employed deep learning to estimate bivariate GC interactions …
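The restricted-versus-full comparison behind ESN-based Granger causality can be sketched in a few lines (our own generic simplification, not the authors' architecture; the reservoir size, ridge penalty, and toy data are arbitrary choices):

```python
import numpy as np

def esn_states(u, n_res=100, rho=0.9, seed=0):
    """Run a randomly initialized echo-state reservoir over the input
    series u (shape T x d) and return the state trajectory (T x n_res)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_res, n_res))
    W *= rho / max(abs(np.linalg.eigvals(W)))  # fix spectral radius < 1
    W_in = rng.normal(size=(n_res, u.shape[1]))
    s, states = np.zeros(n_res), []
    for u_t in u:
        s = np.tanh(W @ s + W_in @ u_t)
        states.append(s.copy())
    return np.asarray(states)

def esn_gc_score(x, y, ridge=1e-3):
    """Granger-style score for 'x -> y': log ratio of one-step prediction
    errors for y using a reservoir driven by y alone (restricted) versus
    one driven by (y, x) jointly (full); only the readout is trained."""
    def pred_mse(feats, target):
        A, b = feats[:-1], target[1:]
        w = np.linalg.solve(A.T @ A + ridge * np.eye(A.shape[1]), A.T @ b)
        return np.mean((A @ w - b) ** 2)
    e_restricted = pred_mse(esn_states(y[:, None]), y)
    e_full = pred_mse(esn_states(np.column_stack([y, x])), y)
    return np.log(e_restricted / e_full)  # > 0 suggests x helps predict y

# Toy test: x drives y through a nonlinearity; the reverse link is absent.
rng = np.random.default_rng(1)
x = rng.normal(size=2000)
y = np.zeros(2000)
for t in range(1999):
    y[t + 1] = 0.5 * y[t] + np.sin(x[t]) + 0.1 * rng.normal()
print(esn_gc_score(x, y), esn_gc_score(y, x))  # first should be clearly larger
```

Training only the linear readout (here by ridge regression) is what sidesteps the RNN/LSTM training difficulties the abstract mentions, since the recurrent weights stay fixed at their random initialization.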