Neuronal Dynamics and Brain Connectivity

Michael Breakspear 1,2 and Viktor K. Jirsa 3,4

1 School of Psychiatry, University of New South Wales, and The Black Dog Institute, Randwick, NSW, 2031, Australia.

2 School of Physics, University of Sydney, NSW 2006, Australia

3 Theoretical Neuroscience Group (CNRS), UMR6152 Mouvement & Perception, 13288 Marseille, France

4 Center for Complex Systems & Brain Sciences, Physics Department, Florida Atlantic University, Boca Raton, FL 33431, USA

The fluid nature of perceptual experience and the transient repetition of patterns in neurophysiological data attest to the dynamical character of neural activity. An approach to neuroscience that starts from this premise holds the potential to unite neuronal connectivity and brain activity by treating space and time in the same framework. That is the philosophy of this chapter. Our goals are threefold: Firstly, we discuss the formalism that is at the heart of all dynamical sciences, namely the evolution equation. Such an expression ties the temporal unfolding of a system to its physical properties and is typically a differential equation. The form of this equation depends on whether time and space are treated as continuous or discrete entities. Secondly, we aim to motivate, illustrate and provide definitions for the language of dynamical systems theory - that is, the theoretical framework that integrates analysis and geometry, hence permitting the qualitative understanding and quantitative analysis of evolution equations. To this end we provide a mini-encyclopedia of the basic terms of phase space analysis and a description of the basic bifurcations of dynamical systems. Our third aim is to provide a survey of single neuron and network models from a historical and pedagogical perspective. Here we first trace microscopic models from their birth in the 1950s, showing how neuronal firing properties can be understood as a bifurcation in the underlying phase space. Then we review spatiotemporal network dynamics, which emerges as a function of the network's anatomical connectivity.

Introduction: Dynamics and the Brain

The firing of a neuron subsequent to an increase in synaptic input is a crucial neuronal event that is best understood from a dynamical systems perspective. Whilst statistical techniques are crucial to the detection of synchrony and change in neuroscience data, the study of dynamics uniquely permits an understanding of their causes. "Evolution" equations - which embody a system's dynamics - form the basis of all major theories in the physical sciences, from Newton's F = ma to Schrödinger's wave equation and Maxwell's electromagnetic theory. There is no reason to believe that mathematical formalisms of neuronal dynamics won't eventually underpin and unify neuroscience. Indeed, over recent decades, dynamical formulations of brain activity have become sufficiently advanced to give rough outline to a "unified theory of brain dynamics". Such a theory will also inform studies of brain connectivity.

What is the origin of the brain's dynamic character? During the 20th century, extraordinary progress was made in elucidating basic neurophysiological processes and their role in neural phenomena such as neuronal firing and action potential propagation. Incorporating these processes into a set of evolution equations yielded quantitatively accurate predictions of spikes and thresholds, leading to the Nobel Prize for Hodgkin and Huxley. These equations are based upon the physical properties of cell membranes and the ion currents passing through transmembrane proteins. Extending this theory from a patch of cell membrane to whole neurons and thence to populations of neurons in order to predict macroscopic signals such as the electroencephalogram (EEG) is a dominant focus in this field today. Linking neuronal dynamics to theories of cognition also remains a major goal.

Dynamics has a spatial as well as a temporal character and this makes it relevant to the subject of this handbook, brain connectivity. It can be argued that all forms of information processing in neuronal systems can be understood as particular types of spatiotemporal dynamics and their bifurcations. With this in mind, our primary objective is to provide a "ground-up" overview of the dynamical approach to neuroscience. We also aim to overview some of the recent developments in this field, such as those that establish a link between statistics and dynamics and proposals that provide putative network-based cognitive mechanisms with a biophysical underpinning. Attempts to employ dynamics to unify neurophysiological phenomena are also covered. Section 4, dealing with macroscopic spatiotemporal dynamics, implicitly incorporates connectivity by way of its joint treatment of space and time.

Section 1 provides an overview of the central concept of dynamics - the "evolution equation" - and reviews the variety of forms that it can assume. In Sect. 2, we overview the mathematical concepts required to understand the behavior of such equations, with an emphasis on a geometric approach. In doing so, we also show how many of the stochastic approaches more familiar to neuroscientists are specific forms of dynamical systems when they satisfy certain stability conditions. In Sect. 3, we provide a taxonomy of key neuronal models - that is, particular forms of neuronal evolution equations, with an emphasis on small scale systems. Section 4 then focuses on large scale neuronal dynamics. We argue that there is a one-to-one relationship between modes of information processing in neuronal systems and their spatiotemporal dynamics. Likewise, changes between such forms correspond directly with changes in the dynamics, mediated by a bifurcation or similar mechanism. The chapter concludes in Sect. 5 with some of the exciting recent developments in the field of neuronal dynamics and their putative links to other "hot topics" in neuroscience.

1 Evolution Equations: How to Make a Dynamical System

Evolution equations lie at the heart of dynamics. They state how a set of dynamical variables change in accordance with the underlying properties of the system they characterize. The most famous example of an evolution equation is Newton's "second law of mechanics", which describes the acceleration of an object as F = ma. More technically this is written as

$$\frac{dv(t)}{dt} = \frac{F}{m}, \qquad \frac{dx(t)}{dt} = v(t), \tag{1}$$

where v(t) is the velocity of an object at position x(t). The left hand sides (LHSs) of these equations express the temporal derivative - the rate of change of a variable. The right hand sides (RHSs) link these changes to the properties of the system. The goal of calculus is to understand the resulting evolution of these variables as a function of time. In (1), it is possible to find an explicit solution for the evolution of x in terms of time,

$$x(t) = x(0) + v(0)\,t + \frac{F}{2m}\,t^2, \tag{2}$$

where x(0) and v(0) are the 'initial conditions' of x and v. Equation (2) allows us to know the exact future position of an object given its current state and any applied constant force. We can see that as time increases the RHS of (2) will be dominated by the quadratic term $t^2$, so that an object subject to a constant force is displaced at an ever-increasing rate. In more complex systems, as encountered in neuroscience, such explicit closed form solutions generally cannot be found. Moreover, their approximations are typically so cumbersome that understanding the nature of the dynamics from such algebraic equations is not straightforward. However, one may gain a deep understanding of the nature of a system's dynamics without relying only on algebraic solutions. This can be achieved through the geometric approach to dynamical systems, outlined in Sect. 2, which unifies algebraic analysis and topology.
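To make the link between an evolution equation and its solution concrete, the following minimal sketch (in Python; the force, mass, time step and initial conditions are our own illustrative choices, not values from the text) integrates (1) numerically by Euler's method and compares the result against the closed-form solution (2):

```python
F, m = 2.0, 1.0            # arbitrary constant force and mass
x, v = 0.0, 1.0            # initial conditions x(0) and v(0)
dt, T = 1e-4, 1.0          # time step and total integration time

# Euler integration of (1): dx/dt = v, dv/dt = F/m
for _ in range(int(T / dt)):
    x += dt * v
    v += dt * F / m

# Closed-form solution (2): x(t) = x(0) + v(0) t + (F/2m) t^2
x_exact = 0.0 + 1.0 * T + F / (2 * m) * T**2
print(x, x_exact)          # agree to within the O(dt) discretization error
```

In more realistic neural models no such closed-form benchmark exists, which is why the geometric methods of Sect. 2 become indispensable.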

The essential requirements for an evolution equation are a set of evolving variables, which we denote Z(x,t), and a set of system parameters, denoted a. The former represent the current states of properties such as transmembrane potentials, neuronal firing rates and extracellular field potentials as they vary in time t and position x. The parameters a are those properties which can be considered static or which change very slowly in comparison to the dynamical variables Z. Nernst potentials, conduction velocities and ion channel time constants are typical neural parameters. All of these variables are then combined with a "differential operator" - which introduces the crucial factor of change - and an algebraic expression - which determines how this change relates to the properties of the system - to form an evolution equation.

We now progress through the various forms that such equations can assume, from the simplest to the more complex. Exemplar neuronal models of each system are given in Sect. 3. Further suggested reading is provided where appropriate.

1.1 Difference Maps: Discrete Time and Discrete Space

The simplest way of determining the future state of a dynamical system from its present state is through a difference map,

$$Z(t+1) = F_a\big(Z(t)\big), \tag{3}$$

where t runs discretely as 0, 1, 2, .... Note that the subscript a denotes the parameterization of F. The so-called "logistic" equation,

$$Z(t+1) = 1 - a\,Z(t)^2, \tag{4}$$

is a very well-known one-dimensional (scalar) example of a difference equation. The evolution of this relatively simple (quadratic) nonlinear equation is illustrated in Fig. 1.
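As a concrete illustration, the sketch below (Python; the quadratic form and parameter value follow (4) and Fig. 1, while the initial condition and run length are arbitrary choices) iterates the map and produces a chaotic time series like that of Fig. 1(b):

```python
import numpy as np

def iterate_map(a=1.64, z0=0.1, n_steps=200):
    """Iterate the quadratic map of (4): Z(t+1) = 1 - a * Z(t)**2."""
    z = np.empty(n_steps)
    z[0] = z0
    for t in range(n_steps - 1):
        z[t + 1] = 1.0 - a * z[t] ** 2
    return z

series = iterate_map()
print(series[:10])  # an aperiodic, chaotic orbit for a = 1.64
```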

The logistic map, and other simple algebraic forms, have been used extensively to elucidate basic, generic properties of nonlinear dynamics (Collet & Eckmann 1980, Cvitanovic 1984). They can exhibit a rich complexity even when the algebraic equations are simple, as in (4). Examples of their use include elucidating the fundamental principles of chaotic dynamics (Guckenheimer 1987) and the transition from regular to chaotic motions (Feigenbaum 1987). These concepts are discussed and illustrated in Sect. 2, below. An excellent introduction to this fascinating field is given by Baker & Gollub (1990).

Fig. 1. Basic nonlinear map. (a) Logistic equation (4) with a = 1.64. (b) Resulting chaotic time series. Maps of this type have been used to study the basic properties of nonlinear systems. They have a less extensive role in modeling neural systems.

The spatiotemporal properties of nonlinear dynamics can also be studied within this framework, through the use of coupled difference maps,

$$Z(x_i,\,t+1) = F_a\Big(Z(x_i,t),\; H_c\big[Z(x_1,t),\,\ldots,\,Z(x_N,t)\big]\Big), \tag{5}$$

where $x_i$ denotes the spatial position of the i-th subsystem. The "coupling function" $H_c$ introduces the activity from all other nodes into the dynamics of this node. The subscript c denotes the strength of the coupling influence and is traditionally normalized so that 0 < c < 1. Hence, if F embodies local neural dynamics, H incorporates the spatial characteristics of synaptic connectivity. Just as autonomous difference maps can be used to elucidate basic dynamical principles, coupled difference maps permit an understanding of the fundamentals of dynamic synchronization (Maistrenko et al. 1998). Often the influence of the local versus global dynamics can be linearly partitioned as

$$Z(x_i,\,t+1) = (1-c)\,F_a\big(Z(x_i,t)\big) + \frac{c}{N}\sum_{j=1}^{N} F_a\big(Z(x_j,t)\big). \tag{6}$$

A fascinating early example of a coupled difference-map neural model is that of McCulloch & Pitts (1943), which we discuss in Sect. 3. However, because of the discrete nature of time in difference maps, and because their study has typically employed very basic algebraic expressions, they rarely figure in biophysical models of neural systems. On the other hand, they have been used extensively to study the basic properties of high dimensional nonlinear dynamics (Kaneko 1997), including the onset of synchronization amongst two or more subsystems (Ashwin et al. 1997). Put another way, they are mathematically pleasing because they permit an analytic understanding of the universal principles of dynamics and synchronization, but limited in their value to neuroscientists because their simplicity prohibits one from identifying the relative contribution of particular physiological processes to specific dynamical behaviors.
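A minimal sketch of the linearly partitioned coupled map (6) in Python follows; the quadratic local map, the coupling strength, the network size and the initial conditions are all illustrative assumptions:

```python
import numpy as np

def coupled_step(z, a=1.64, c=0.2):
    """One iteration of (6): each node blends its own local dynamics F_a
    with the network mean field, weighted by the coupling strength c."""
    f = 1.0 - a * z**2            # local dynamics F_a applied at every node
    return (1.0 - c) * f + c * f.mean()

rng = np.random.default_rng(0)
z = rng.uniform(-1.0, 1.0, size=10)   # N = 10 nodes, random initial states
for _ in range(500):
    z = coupled_step(z)
print(np.ptp(z))  # spread across nodes; small values indicate synchrony
```

Varying c in such a sketch reproduces, in miniature, the transition to synchronization studied by Maistrenko et al. (1998) and Ashwin et al. (1997).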

1.2 Ordinary Differential Equations: Continuous Time and Discrete Space

One obvious step towards physiological realism is to make time continuous! This can be achieved by exploring neural systems whose evolution is governed by an ordinary differential equation (ODE),

$$\frac{dZ(t)}{dt} = F_a\big(Z(t)\big), \tag{7}$$

where as above Z(t) is a set of dynamical variables. This is the form of equation for most traditional neuronal models such as the Hodgkin-Huxley model, in which case $Z_1 = V$ is the transmembrane potential and F takes the form

$$F\big(Z(t)\big) = \sum_{ion} f_{ion}\big[Z(t)\big] + I(t). \tag{8}$$

The summation on the RHS is taken over all ion channels. For each ion species,

$$f_{ion}\big[Z(t)\big] = g_{ion}(V)\,\big(V_{ion} - V\big) \tag{9}$$

represents the dynamics of the local voltage-dependent channel currents. I represents synaptic currents which flow through ligand-gated channels or via an experimentally introduced electrode. As with difference equations, spatiotemporal dynamics are achieved by employing a coupling function $H_c$ to introduce interdependence between systems,

$$\frac{dZ(x_i,t)}{dt} = F_a\Big(Z(x_i,t),\; H_c\big[Z(x_1,t),\,\ldots,\,Z(x_N,t)\big]\Big). \tag{10}$$

Hence, if (8) models the dynamics of a single neural system, (10) adds the interaction between two or more systems, creating a dynamic neural network. The ensemble is spatially discrete with a finite number N of subsystems, so that the subscript indices i, j = 1, 2, ..., N. As with coupled difference equations, it is often possible to bipartition the influence of the local and distant terms in (10) as

$$\frac{dZ(x_i,t)}{dt} = (1-c)\,F_a\big(Z(x_i,t)\big) + \frac{c}{N}\sum_{j=1}^{N} H\big[Z(x_j,t)\big]. \tag{11}$$

Such is the case when local recurrent axons and long-range afferents each project onto separate classes of neurons. In this case the long-range afferents are modeled as acting, through ligand-gated ion channels, via the synaptic currents. Hence,

$$\frac{dZ(x_i,t)}{dt} = \sum_{ion} f_{ion}\big[Z(x_i,t)\big] + I(x_i,t), \tag{12}$$

where the induced synaptic currents,

$$I(x_i,t) = \sum_{j} H_c\big[Z(x_j,\,t - \tau_j)\big] + I_{external}, \tag{13}$$

introduce the afferent inputs from other systems $Z(x_j,t)$ that arrive after a time delay $\tau_j$, permitting finite speed axonal conduction. Because space is discrete, the time delays are also discretely distributed. Differential equations with time delays are treated thoroughly in the Chapter by Campbell. We only introduce them here because they are important in the conceptual transition from discrete to continuous space, to which we now turn. A review of neuronal synchrony as modeled by coupled ODEs is provided by Breakspear (2004).
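The sketch below (Python/NumPy) integrates a small network in the spirit of (10)-(11) with forward Euler; the cubic local vector field, the uniform all-to-all coupling matrix and the parameter values are illustrative assumptions, and conduction delays are omitted for simplicity:

```python
import numpy as np

def local_dynamics(z):
    """Hypothetical local vector field F_a: a simple bistable cubic."""
    return z - z**3

def network_rhs(z, c, W):
    """RHS in the spirit of (11): local term plus linearly weighted coupling."""
    return (1.0 - c) * local_dynamics(z) + c * (W @ z)

N, c, dt = 5, 0.1, 0.01
W = np.ones((N, N)) / N                     # uniform all-to-all coupling
z = np.random.default_rng(1).standard_normal(N)
for _ in range(5000):                       # forward Euler integration
    z = z + dt * network_rhs(z, c, W)
print(z)                                    # trajectories settle near a fixed point
```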

1.3 Integrodifferential Equations: Continuous Time and Continuous Space

In the case where we wish to model the inputs to region $x_i$ arising from a continuously distributed neuronal ensemble, we integrate the afferent induced currents (13) continuously over space,

$$I(x_i,t) = \int_{\Omega}\int^{t} h(x - x')\,H\big[Z(x - x',\,t - t')\big]\,dt'\,dx', \tag{14}$$

where the spatial integration $dx'$ is taken over the spatial domain $\Omega$ of the neural system. Note that this also requires that we integrate over the (now) continuously distributed time delays $t'$. We have also partitioned the coupling function into two parts, H and h. H determines which variables from any given system enter into the inter-system coupling, and how they do so. Typically H itself has two components: an "activation" function that converts local membrane potentials of the distant systems into firing rates $\rho$ - which then propagate outwards - and synaptic kernels $\eta$ which model how these propagating action potentials influence post-synaptic potentials as they arrive,

$$H\big[Z(x,\,t-t')\big] = \eta(t')\,\rho\big(Z(x,\,t-t')\big). \tag{15}$$

Specific forms of $\eta$ and $\rho$ are provided in Sect. 4. The coupling function h captures the spatial dependency of the strength of the afferent inputs. This function is also known as the 'synaptic footprint' (Coombes 2003) because it reflects the nature and density of synaptic connections as they change with the distance from their origin. Substituting the synaptic inputs (14) into the differential equation (12) and collapsing all local contributions into

$$N\big(Z(x,t)\big) = \sum_{ion} f_{ion}\big[Z(x,t)\big], \tag{16}$$

we obtain

$$\frac{dZ(x,t)}{dt} = N\big(Z(x,t)\big) + \int_{\Omega}\int^{t} h(x - x')\,H\big(Z(x - x',\,t - t')\big)\,dt'\,dx', \tag{17}$$

an integrodifferential equation. It may be considered a general form of a neural mass model because the exact nature of the synaptic "footprint", the activation function and the synaptic kernels remain unspecified. For example, within this framework it would be possible to use the precise form of the lateral inhibition that has been shown to allow sensory networks to be inherently tuned to particular spatial frequencies (Ratliff et al. 1969).
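To illustrate (14) numerically, the sketch below (Python/NumPy) computes the synaptic input on a discretized 1-D domain, assuming a Gaussian synaptic footprint h, a sigmoid activation $\rho$ and an instantaneous synaptic kernel (conduction delays and $\eta$ are collapsed); all of these specific forms are assumptions for illustration only:

```python
import numpy as np

def rho(v, v0=0.0, slope=1.0):
    """Assumed sigmoid activation converting potential to firing rate."""
    return 1.0 / (1.0 + np.exp(-slope * (v - v0)))

def synaptic_input(Z, dx=0.1, sigma=1.0):
    """Discrete approximation to (14) with an instantaneous kernel:
    spatial convolution of the firing rate with a Gaussian footprint h."""
    x = np.arange(-5.0 * sigma, 5.0 * sigma + dx, dx)
    h = np.exp(-x**2 / (2.0 * sigma**2))
    h /= h.sum() * dx                       # normalize the footprint
    return np.convolve(rho(Z), h, mode="same") * dx

Z = np.linspace(-2.0, 2.0, 200)             # an arbitrary potential profile
I = synaptic_input(Z)
print(I.shape)                              # one input value per grid point
```

Replacing the Gaussian with a "Mexican hat" footprint would implement the lateral inhibition of Ratliff et al. (1969) within the same scheme.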

1.4 Partial Differential Equations: Continuous Time and Continuous Space but Constrained Connectivity

In some contexts, it may be preferable to express (17) with spatial and temporal derivatives only, rather than a combination of temporal derivatives with spatial and temporal integrations. Such a differential representation is useful if the connectivity function h is sufficiently simple, smooth and translationally invariant, because then only a few spatial derivatives are needed to capture the connectivity. For example, given appropriate forms of h and H (see Sect. 4), (17) can be rewritten as a partial differential equation of the form

$$\frac{\partial^2 Z(x,t)}{\partial t^2} + a\,\frac{\partial Z(x,t)}{\partial t} + b\,\frac{\partial^2 Z(x,t)}{\partial x^2} + c\,Z(x,t) = \Big(d + \frac{\partial}{\partial t}\Big)\,\rho\big(Z(x,t)\big). \tag{18}$$

The coefficients a, b, c and d depend on system parameters such as conduction velocities and the synaptic footprint parameter $\sigma$. Such an equation, expressing the evolution of neuronal systems continuously in space and time, but with specific types of connectivity, was first derived for macroscopic neuronal dynamics by Jirsa and Haken (1996, 1997) and Robinson et al. (1997). Pioneering work that led to this formulation started as early as the 1970s (Wilson 1973, Wilson & Cowan 1973, Nunez 1974, van Rotterdam et al. 1982). Comparing (11) and (18) we see that in the former, spatial coupling is introduced explicitly through the second term on the right hand side. In the latter, space enters the temporal dynamics through the (second order) spatial derivative on the left hand side. However, under certain conditions these two approaches can be equivalent.
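A sketch of how an equation of the form (18) can be integrated numerically follows (Python/NumPy): an explicit finite-difference scheme on a periodic 1-D domain, with illustrative coefficient values, tanh standing in for the activation $\rho$, and the temporal derivative on the RHS dropped for simplicity - all of these are our own assumptions, not values from the text:

```python
import numpy as np

def step(Z, Z_prev, dt=1e-3, dx=0.1, a=1.0, b=-1.0, c=1.0, d=1.0):
    """One explicit step of Z_tt + a Z_t + b Z_xx + c Z = d * rho(Z)."""
    Z_xx = (np.roll(Z, -1) - 2.0 * Z + np.roll(Z, 1)) / dx**2  # periodic Laplacian
    Z_t = (Z - Z_prev) / dt                                     # backward difference
    Z_tt = d * np.tanh(Z) - a * Z_t - b * Z_xx - c * Z          # solve (18) for Z_tt
    return 2.0 * Z - Z_prev + dt**2 * Z_tt

x = np.linspace(0.0, 10.0, 100)
Z = Z_prev = np.exp(-((x - 5.0) ** 2))      # a localized initial perturbation
for _ in range(1000):
    Z, Z_prev = step(Z, Z_prev), Z          # damped waves spread from the bump
```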

1.5 Stochastic Differential Equations

All of the equations above capture the dynamical evolution of the variables Z of the system of interest. In the case of a microscopic system, these variables may include the transmembrane potential and ion channel conductances of a small patch of cell membrane. Evolution equations such as (11) and (18) may also describe neural systems at the mesoscopic (<mm) and even macroscopic (~cm) scales. In such cases, the variables of interest represent mean values averaged over the appropriate scales. Such equations are hence known as mean field approximations. Before proceeding further, it is worth describing evolution equations which capture the dynamics of the entire probability distribution p(x,t) rather than just the mean. Such models allow for stochastic inputs to a system which nonetheless obeys deterministic rules. They take the form

$$\frac{\partial p(x,t)}{\partial t} = -\frac{\partial}{\partial x}\Big((f + s)\,p(x,t)\Big) + w\,\frac{\partial^2 p(x,t)}{\partial x^2}, \tag{19}$$

where s represents the (stochastic) inputs to the system and f is the form of the deterministic dynamics. As described in Harrison et al. (2005), the first term of the RHS describes the evolution of the probability distribution under the influence of the inputs s and the nature of the physiological system f. The second term describes the tendency of the distribution to disperse under the influence of the stochastic elements at rate w.

Whereas (11) and (18) are mean field equations, (19) is an example of a broader class of "neural field" equations, capturing the evolution of the entire probability distribution. There are a number of intriguing reasons to generalize neural evolution equations from mean field formulations to capture the evolution of the entire distributions. For example, consider two neural populations with the same mean membrane potentials, but where the second population has a larger variance. If the mean potential is below the threshold for firing, this difference in variance will imply that a greater proportion of neurons in the second population will be supra-threshold and hence firing (Fig. 2). These neurons, through local feedback, will in turn have a greater effect on the local mean membrane potential, driving it upwards or downwards - depending on whether the local feedback is excitatory or inhibitory. Put alternatively, modeling the entire distribution rather than just the mean permits the higher order moments of the neural states to interact (Harrison et al. 2005).
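The following sketch (Python/NumPy, with illustrative membrane potential and threshold values of our own choosing) makes this point concrete: two populations share the same mean potential, but the broader distribution places a far larger fraction of neurons above the firing threshold:

```python
import numpy as np

rng = np.random.default_rng(42)
mean_v, threshold = -65.0, -55.0            # mV; illustrative values
narrow = rng.normal(mean_v, 2.0, 100_000)   # population with small variance
broad = rng.normal(mean_v, 8.0, 100_000)    # same mean, larger variance

for name, pop in [("narrow", narrow), ("broad", broad)]:
    frac = (pop > threshold).mean()         # fraction of supra-threshold neurons
    print(f"{name}: {frac:.2%} of neurons above firing threshold")
```

A mean field description would treat these two populations as identical; a full density description distinguishes them.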

Solutions to (19) are possible only in very restricted cases. The development of numerical techniques - required to gain important insights into the dynamics - is a very active area of research. One important method in this vein relies upon a "modal decomposition", whereby the entire distribution is truncated to a few low order modes. The most restricted case, reducing such an equation to its first moment - the mean - returns us to the mean field formulations described above.
