Measures based on mutual information are useful for analyzing functional connectivity patterns obtained from neuronal spike trains, local field potential recordings or fMRI/PET voxel time series. However, functional connectivity allows only very limited insights into patterns of causal interactions within the network. Patterns of functional connectivity are statistical signatures of hidden causal processes occurring within and among specific and time-varying subsets of neurons and brain regions. Identifying which subsets are currently causally engaged in a given task requires reference to a structural model, i.e. an analysis of effective connectivity patterns.

Effective connectivity attempts to reconstruct or "explain" recorded time-varying activity patterns in terms of underlying causal influences of one brain region over another (Friston, 1994; Büchel and Friston, 2000; Lee et al., 2003). This involves the combination of (essentially covariance-based) functional connectivity patterns with a structural system-level model of interconnectivity. A technique called "covariance structural equation modeling" is used to assign effective connection strengths to anatomical pathways that best match observed covariances in a given task (McIntosh and Gonzalez-Lima, 1994; Horwitz et al., 1999). Applied in different cognitive tasks, this technique allows the identification of significant differences in effective connectivity between a given set of brain regions, illustrating the time- and task-dependent nature of these patterns. Another approach called "dynamic causal modeling" (Friston et al., 2003; Stephan and Friston, 2007) uses a Bayesian framework to estimate and make inferences about interregional influences, explicitly in the context of experimental changes. A caveat concerning these and other approaches to extracting effective connectivity is that they usually require assumptions about the identity of participating brain regions and the patterns and direction of cross-regional influences between them.

Another approach to identifying highly interactive brain regions and their causal interactions involves the use of effective information, a novel measure of the degree to which two brain regions or systems causally influence each other (Tononi, 2001; Tononi and Sporns, 2003). Given a neural system that is partitioned into two complementary subsets, A and B, we obtain the effective information from A to B by imposing maximal entropy on all outputs of A. Under these conditions the amount of entropy that is shared between A and B must be due to causal effects of A on B, mediated by connections linking A and B. These connections can either be direct connections crossing the bipartition or indirect links via a surrounding neural context. The effective information from A to B may then be formulated as

EI(A → B) = MI(A^Hmax, B)

where A^Hmax denotes subset A with its outputs substituted by maximally entropic activity.
Note that unlike MI(A,B), effective information may be non-symmetrical, i.e. EI(A → B) ≠ EI(B → A), owing to non-symmetrical connection patterns. Furthermore, the estimation of effective information requires perturbations of units or connections.
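The perturbational definition above can be illustrated in simulation: maximal entropy is imposed on A's outputs by driving A with uniform random states, and EI(A → B) is then estimated as the mutual information with B's resulting states. The following sketch uses a hypothetical two-unit system in which B is driven by A through a channel that flips 10% of bits; the noise level and all variable names are illustrative, not taken from the original studies.

```python
import random
from collections import Counter
from math import log2

def mutual_information(a, b):
    """Plug-in estimate of MI(A, B) in bits from paired samples."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum((c / n) * log2((c / n) / ((pa[i] / n) * (pb[j] / n)))
               for (i, j), c in pab.items())

# Impose maximal entropy on A's outputs (uniform random binary states)
# and record B, which copies A through a channel with 10% bit flips.
random.seed(1)
a_states = [random.randint(0, 1) for _ in range(20000)]
b_states = [a ^ (random.random() < 0.1) for a in a_states]

ei_a_to_b = mutual_information(a_states, b_states)
# Close to the analytic channel capacity 1 - H(0.1), roughly 0.53 bits.
```

Because the perturbation is applied to A while B merely responds, the resulting quantity reflects causal influence of A on B rather than mere statistical coincidence.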

It has been suggested that the integration of information is essential for the functioning of large-scale brain networks (e.g. Tononi et al., 1998; Tononi and Edelman, 1998). In considering information integration the notion of causality, or effectiveness, is crucial. A system that integrates information effectively must do so via actual causal interactions occurring within it. Mere statistical coincidences are insufficient to characterize the participating entities as truly integrated. Tononi and Sporns (2003) developed a measure for information integration (called Φ) based on effective information that captures the maximal amount of information that can be integrated within the system. For a given system or system subset S composed of subsets A and B, Φ is defined as the capacity for information integration, or Φ(S), given by the value of EI(A ⇌ B) for the minimum information bipartition (MIB):

Φ(S) = EI(A ⇌ B) for {A, B} = MIB(S)

where EI(A ⇌ B) = EI(A → B) + EI(B → A).
This measure allows the simultaneous quantification of information integration as well as the identification of all those system elements that participate in it. It can thus be used to delineate integrated functional clusters or networks of effective connectivity from among larger sets of brain regions. It is important to note that, following this definition, information integration takes place within complexes, defined as subsets of elements capable of integrating information that are not part of any larger subset having higher Φ.
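For small systems the minimum information bipartition can be found by exhaustive enumeration, which the following sketch illustrates. Here `ei` is a placeholder for a user-supplied function returning EI(A ⇌ B), and the toy two-module system is purely illustrative; following Tononi and Sporns (2003), EI values are normalized by the maximum entropy of the smaller subset (assumed here to be 1 bit per unit) so that small, uninformative partitions do not trivially win.

```python
from itertools import combinations

def min_information_bipartition(elements, ei):
    """Enumerate all bipartitions {A, B} of the system and return the one
    with minimal normalized effective information, i.e. the MIB.
    `ei(A, B)` is assumed to return EI(A <-> B)."""
    elements = list(elements)
    best = None
    for k in range(1, len(elements) // 2 + 1):
        for a in combinations(elements, k):
            b = tuple(e for e in elements if e not in a)
            norm = min(len(a), len(b))      # max entropy of smaller side, 1 bit/unit
            value = ei(a, b) / norm
            if best is None or value < best[0]:
                best = (value, a, b)
    return best

# Toy EI: units in the same module share information; across modules, none.
module = {1: 'M1', 2: 'M1', 3: 'M2', 4: 'M2'}
toy_ei = lambda a, b: float(sum(1 for i in a for j in b if module[i] == module[j]))

phi, part_a, part_b = min_information_bipartition([1, 2, 3, 4], toy_ei)
# The MIB falls between the two modules, where EI across the cut is zero.
```

A Φ of zero across the inter-module cut signals that the system decomposes into independent complexes, consistent with the modular case discussed below (Fig. 6C); exhaustive search scales exponentially with system size, which is why the measure has so far been applied only to small model systems.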

Currently, this measure of information integration has only been tested in computer simulations of small model systems with varying anatomical architectures (Tononi and Sporns, 2003; Fig. 6). The results indicate that information integration is maximized by two main attributes of the anatomical connection pattern. First, each element maintains a different connection pattern, or connectional "fingerprint", a property that strongly promotes regional functional specialization. Second, the pattern maintains global connectedness and ensures that a large amount of information can be exchanged across any bipartition of the network, which in turn promotes global functional integration. Simple models of the connectional organization of specific neural architectures, such as the thalamocortical system, are found to be well suited to information integration, while others, such as the cerebellum, are not. Neural architectures that are highly capable of integrating information are also associated with consciousness. Tononi (2004) has suggested that consciousness critically depends on the ability of a neural substrate to integrate information and is therefore tied to specific and quantifiable aspects of effective brain connectivity.

Several other methods for analyzing causal influences in the brain have been proposed, many of which utilize the temporal dynamics of the observed neural system to extract information about effective interactions, building on the fundamental fact that causes must precede effects in time (for a comparative computational study see Lungarella et al., 2007). Several methods are based on interpretations or adaptations of the concept of Granger causality (Granger, 1969), involving estimates of how much information a set of variables provides for the prediction of another. For example, Kaminski et al. (2001) developed an approach based on the directed transfer function between two neural signals. Granger causality has been applied to EEG data sets obtained from large-scale sensorimotor networks (Brovelli et al., 2004) as

Fig. 6. Information integration. All panels show structural connectivity (top), functional connectivity (middle) and effective connectivity (bottom) for networks of 8 units. (A) Network obtained after optimizing for Φ, resulting in a single complex with Φ = 74 bits. Structural connections are heterogeneous and span the entire network, jointly maximizing functional segregation and functional integration. (B) Uniform network (loss of functional segregation), with greatly reduced Φ = 20 bits. (C) Modular network (loss of functional integration), split into four identical complexes with Φ = 20 bits each. Modified after Tononi and Sporns (2003), and Tononi (2004)


well as fMRI time series (Roebroeck et al., 2005). Additional causality measures can discriminate between direct causality and effects mediated through extraneous system components (see also Liang et al., 2000). Bernasconi and König (1999) developed statistical measures that allowed the detection of directed dependences within temporal brain data sets. Schreiber (2000) defined a measure called transfer entropy, which is able to detect directed exchange of information between two systems by considering the effects of the state of one element on the state transition probabilities of the other element. This yields a non-symmetric measure of the effects of one element on the other, exploiting the entire system's temporal dynamics.
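For discrete time series, Schreiber's transfer entropy can be estimated directly from observed transition counts. The following is a minimal sketch for binary sequences with histories of length 1; the toy coupled process (y copying x with a one-step lag) and all variable names are illustrative, not from the studies cited above.

```python
import random
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Estimate transfer entropy TE(X -> Y) in bits for two discrete
    sequences, using histories of length 1 (Schreiber, 2000):
    TE = sum over states of p(y1, y0, x0) * log2[ p(y1|y0, x0) / p(y1|y0) ]."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))         # (y_t, x_t)
    pairs_yy = Counter(zip(y[1:], y[:-1]))          # (y_{t+1}, y_t)
    hist_y = Counter(y[:-1])                        # y_t
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_full = c / pairs_yx[(y0, x0)]             # p(y_{t+1} | y_t, x_t)
        p_self = pairs_yy[(y1, y0)] / hist_y[y0]    # p(y_{t+1} | y_t)
        te += p_joint * log2(p_full / p_self)
    return te

# Toy system: y copies x with a one-step lag, so information flows x -> y only.
random.seed(0)
x = [random.randint(0, 1) for _ in range(5000)]
y = [0] + x[:-1]
# transfer_entropy(x, y) is close to 1 bit; transfer_entropy(y, x) is near 0.
```

The asymmetry between the two directions is what makes the measure suitable for detecting directed exchange of information, in contrast to symmetric quantities such as mutual information.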

The experimental application of measures of effective connectivity presents a number of difficult problems. Structural equation modeling and dynamic causal modeling are sensitive to choices made about the underlying structural model, while causal measures based on time series analysis are prone to issues surrounding sample sizes or systematic sampling biases. Effective information, as defined above, shares some of these problems, in addition to issues related to its use of systematic perturbations, which are likely to be difficult to carry out in real neuronal systems. These difficulties notwithstanding, some promising avenues towards extracting effective connectivity from brain data have recently been pursued. The combination of transcranial magnetic stimulation (TMS) with functional neuroimaging allows, for the first time, the quantification of the effects of localized perturbations on extended brain networks engaged in the performance of specific tasks (Paus, 1999; Pascual-Leone et al., 2000). Using a combination of TMS and high-density electroencephalography, Massimini et al. (2005) reported a striking reduction in the extent of cortical effective connectivity during non-REM sleep compared to waking. This state-dependent difference was recorded in the same individuals, presumably within an identical structural connectivity pattern. A major implication of this breakdown of effective connectivity during non-REM sleep is that it points towards a crucial role for causal influences between brain regions, associated with information integration, as a neural basis for consciousness (Tononi, 2004).
