complete network dynamics have a long history in the field of neural networks. The mean field u(t) is generally defined as the statistical expectation value E of a particular state variable. Two mean field approaches exist, based on two opposing views of neuronal coding, though with many intermediate shades. The first view holds that the firing rate of a neural mass is relevant for neural information processing. The dissenting view posits that information is encoded in the interactions among spikes and hence that spike correlations must not be ignored (for detailed discussions of neuronal encoding see Koch 1999). In large scale models, neural mass action is mostly expressed by mean fields of firing rate, though considerable evidence also exists that single cells may fire spikes at predictable intervals as long as 200 msec with a precision of 1 msec (Abeles et al. 1993). The latter is the key observation leading to the theory of synfire chains for cortical processing (Abeles 1991). As of today, it is not clear to what degree the neural system uses firing rate or spike coding mechanisms. Experimental evidence exists for both and continues to accumulate (Koch 1999). In the following we elaborate on the relevance of both neural coding mechanisms for the field of large scale modeling.

Generally speaking, if the coupling is strong enough and the parameter dispersion is sufficiently small, the neurons in the neural mass evolve close to each other in phase space (and hence close to the mean field); in other words, they are synchronized. Note, however, that there are exceptions in the network dynamics literature, known as oscillator death (see also Campbell 2007), in which the neural mass action becomes zero because the coupling is too strong. Synchronized neural dynamics sustained over extended periods of time is more likely to indicate pathological network activity such as epilepsy (see Milton et al. 2007; Ferree and Nunez 2007). However, understanding the conditions leading to the emergence of synchronization will likely be important for understanding neurocognitive processes such as feature binding (Gray and Singer 1989; Crick and Koch 1990) and multisensory integration (Von Stein et al. 1999; Treisman 1996). In fact, the onset of coherent oscillatory activity has been interpreted as fundamental for the formation of higher-order percepts (Freeman and Skarda 1985; Bressler 1990). In the opposite case of weak coupling and greater noise strength, the elements of the population move incoherently and their positions eventually average out. Here the asymptotic dynamics of the mean field is mostly characterized by the fluctuations and the mean firing rate. Between these two limit cases, complex behavior arises and can be addressed starting from either limiting case.

Fluctuation dominated network dynamics and firing rate models

For small enough and sparse couplings, as well as sufficient noise within the neural mass, the neuronal action potential generation and the connectivity within the neural mass can be assumed to be independent. Under these conditions, all spike correlations will be destroyed and a firing rate model becomes a valid representation of neural mass action (Abbott & van Vreeswijk 1993; see Cessac & Samuelides 2006 for a review). In the limit of large neuron numbers within the mass, N → ∞, and low firing rates, the total spike train, obtained by summing the spike trains from all neurons within the mass, will be a Poisson point process with a common instantaneous firing rate ρ(x,t). Equivalently, the synaptic input I_s to a single neuron can be approximated by an average firing rate ρ(x,t) plus a fluctuating Gaussian contribution. As a consequence, the joint probability distribution factorizes and a complete description of the neural mass action is obtained in terms of the first and second order statistical moments. Two further, more subtle distinctions can be made. Either the firing rate ρ(x,t) plus Gaussian noise is used as synaptic input and the neural mass action is described by the average value of neural activity (the mean field) u(t) = E[Z_i(t)] and the variance v(t) = E[Z_i(t)²] − u²(t). This finally results in Fokker-Planck approaches, which describe the time evolution of the probability P(Z(x,t),t) to find a neuron at x and t in the state Z (Amit & Brunel 1997; Brunel 2000; Brunel & Hakim 1999; Cai et al. 2006). Alternatively, the neural mass action can be expressed directly by the mean firing rate u(t) = E[ρ(t)] and its variance v(t) = E[ρ(t)²] − u²(t) (Abbott & van Vreeswijk 1993; Nykamp & Tranchina 2000, 2001; Eggert & van Hemmen 2001). Note that we dropped the explicit dependence on x to simplify our notation. The mean field variables u and v define the 2-dimensional population vector ψ(x,t) = (u(t), v(t)) at the location x.
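The two moments u and v can be illustrated with a minimal numerical sketch (all parameter values below are hypothetical): a mass of N independent neurons firing as Poisson-like processes with a common rate, from which the mean field and its variance are estimated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical neural mass: N independent neurons, each spiking as a
# Bernoulli/Poisson-like process with a common rate rho per time bin
# (the low-rate, large-N limit discussed in the text).
N, T = 1000, 500        # neurons, time bins
rho = 0.02              # spike probability per bin

spikes = (rng.random((N, T)) < rho).astype(float)

# Per-neuron firing rates and the first two population moments:
rates = spikes.mean(axis=1)       # time-averaged rate of each neuron
u = rates.mean()                  # mean field u = E[rho]
v = (rates**2).mean() - u**2      # variance v = E[rho^2] - u^2
```

Because the neurons are independent, u recovers the common rate and v shrinks with the observation window, consistent with the factorizing joint distribution described above.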
As long as the independence condition within the neural mass holds, the reduced dynamic description through ψ(x,t) = (u(t), v(t)) is exact. The mean firing rate shows a sigmoid behavior as a function of the synaptic input, which can be intuitively understood as follows: let a neural mass consist of independent neurons, each of which displays a sharp onset of firing at a threshold value Θ (see Fig. 27).

The thresholds are independent and hence have a Gaussian distribution. The mean firing rate of the neural mass then becomes the well-known sigmoid function, which has been carefully parameterized from experimental data of the olfactory bulb (Freeman 1975). If the independence condition is violated, however, and correlations are introduced, for instance through correlations within the connectivity weights via learning, the mean field approximation breaks down. Related in spirit to Fokker-Planck approaches, Ventriglia proposed a phenomenological kinetic theory for the study of the statistical properties of neural mass action (Ventriglia 1974, 1978). The kinetic equations capture the time course of the distribution function of the total excitation of a neural mass. The neurons in the mass are characterized by a level of inner excitation which changes when impulses are emitted. The impulses move freely within the neural mass and may be absorbed by other neurons, changing their inner excitation level (see Grobler et al. 1998; Barna et al. 1988 for extensions of the kinetic approach).
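Returning to the threshold picture of Fig. 27, the emergence of the sigmoid can be checked with a short numerical sketch (threshold mean, spread and sample size are hypothetical): averaging sharp Heaviside onsets over Gaussian-distributed thresholds yields the Gaussian cumulative distribution, a sigmoid in the input.

```python
import numpy as np
from math import erf, sqrt

def mass_firing_rate(I, theta_mean=0.0, theta_sd=1.0, n=100_000, seed=0):
    """Fraction of sharp-threshold neurons in the mass firing at synaptic
    input I, with thresholds Theta drawn from a Gaussian distribution
    (mean, spread and sample size are hypothetical)."""
    rng = np.random.default_rng(seed)
    thresholds = rng.normal(theta_mean, theta_sd, n)
    return float(np.mean(I > thresholds))

def gauss_cdf(z):
    # Analytic mean firing rate of the mass: the Gaussian CDF,
    # a sigmoid function of the input.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

rates = [mass_firing_rate(I) for I in (-2.0, -1.0, 0.0, 1.0, 2.0)]
```

The empirical fraction of active neurons increases monotonically with the input and matches the analytic Gaussian CDF, which is the sigmoid referred to in the text.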

For strong coupling strengths and low levels of noise within the neural mass, a different but complementary approach holds, using the perfectly correlated state Z(t), that is Z1(t) = Z2(t) = ··· = Z(t). In other words, a special case of spike timing is considered: all neurons are synchronized and show the same dynamics Z(t). De Monte, d'Ovidio & Mosekilde (2003) proposed a method by which the mean field dynamics of a neural mass can be described by a low-dimensional population vector under conditions of global coupling and coherent neural mass action. Global coupling means that each neuron in the neural mass feels the same mean field activity. Their method applies to neural masses of any size and any type of intrinsic dynamics, as well as parameter dispersion. For example, if the underlying neuron model for Z_i(t) is a FitzHugh-Nagumo model, then the population vector ψ(x,t) = (ψ1(x,t), ψ2(x,t), ψ3(x,t), ψ4(x,t)) is 4-dimensional, where ψ1(x,t), ψ2(x,t) describe the activity of an average FitzHugh-Nagumo neuron and ψ3(x,t), ψ4(x,t) measure the dispersion in both parameter and phase space. If the neurons desynchronize too much, then the approach of De Monte et al. (2003) will fail by definition. If, after loss of synchrony, multiple clusters of coherent activity emerge in phase space instead, then it is possible to describe the neural mass action through multiple mean fields, each of which captures the dynamics of a single cluster (Assisi, Jirsa & Kelso 2005). In cases of parameter dispersion, such emergence of cluster dynamics is common and well suited for the approach by Assisi et al. (2005). If the constraint of global connectivity within the neural mass is dropped, richer dynamic phenomena become possible, such as the appearance of spiral waves (Chu et al. 1994; see Milton 1996), and will be discussed in the next sections. Freeman (1975, 1987) proposed another classification of neural mass action which allows spike correlations to be considered.
He originally classified the activity of neural masses into classes named K0, KI and KII sets (K for Katchalsky) according to their functional architecture. K-sets are composed of elements which affect the nature of the dynamics, including physical components such as the interconnected neurons, the neurochemical environment, etc., but also purely functional components such as the connection topology, the input structure, etc. K0 sets represent the simplest functional architecture, which can be viewed as the ensemble average of the activity of independent but similar neurons. In their simplest forms, KI sets are equivalent to two coupled K0 sets, and KII sets are composed of K0 and KI sets. However, they are more generally defined and in principle not always reducible to lower order K sets. In this notation, a K0 set corresponds to the 1-dimensional and hence scalar activity of a neural mass, ψ(x,t), whereas KI and KII sets correspond to higher-dimensional vectors ψ(x,t).
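The strongly coupled, coherent regime discussed above can be motivated numerically. The sketch below (not the actual reduction method of De Monte et al.; all parameter values are hypothetical) simulates N globally coupled FitzHugh-Nagumo neurons with small parameter dispersion: each neuron feels the same mean field u(t), the population synchronizes, and its dispersion around the mean field stays small while u(t) itself oscillates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Globally coupled FitzHugh-Nagumo mass: every neuron is driven by the
# same mean field u(t).  All parameter values here are hypothetical.
N, K, dt, steps = 200, 1.0, 0.05, 4000
a = 0.7 + 0.01 * rng.standard_normal(N)   # small parameter dispersion
b, eps, I = 0.8, 0.08, 0.5

x = 0.1 * rng.standard_normal(N)          # fast (voltage-like) variable
y = 0.1 * rng.standard_normal(N)          # slow recovery variable

us, spreads = [], []
for _ in range(steps):
    u = x.mean()                          # the mean field of the mass
    us.append(u)
    spreads.append(x.std())               # dispersion around the mean field
    dx = x - x**3 / 3 - y + I + K * (u - x)
    dy = eps * (x + a - b * y)
    x = x + dt * dx
    y = y + dt * dy

mean_spread = float(np.mean(spreads[2000:]))  # small => synchronized
amplitude = float(np.ptp(us[2000:]))          # the mean field oscillates
```

In this coherent regime the whole mass is well summarized by a low-dimensional description of the mean field, illustrating why a population-vector reduction is possible at all.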

4.3 Composition of Neural Masses to Large Scale Models

Neural mass models sacrifice realism for a more parsimonious description of the key mechanisms of large scale dynamics. The benefit lies in the possibility of emulating non-invasively obtained brain imaging data such as EEG and MEG. Neural mass models (Beurle 1956; Lopes da Silva et al. 1974; Freeman 1975; Nunez 1974, 1995; van Rotterdam et al. 1982; Jirsa & Haken 1996, 1997; Jirsa et al. 1998, 2002; Robinson et al. 1997, 2001, 2002; Tagamets & Horwitz 1998; Steyn-Ross et al. 1999; Valdes et al. 1999; David & Friston 2003; Breakspear et al. 2006) are based upon this approach. Much of the complexity of the signals arises from the coordination of the interconnected neural masses rather than from the intrinsic dynamics of the microscopic unit of the large scale network, the neural mass. A neural mass at location x is locally connected to its neighboring neural masses and globally connected to far distant neural masses at locations x'. In the following, physical space is always assumed to be one-dimensional, x ∈ Ω, but the mathematical treatment extends trivially to two and three dimensions. Note that though the formal extension to higher dimensions is not difficult, new dynamic network phenomena such as spirals may emerge in the higher dimension (see Nunez (1995) for a discussion of spherical geometries). If the network dynamics described by (17) were linear, then the mapping Φ : Z(x,t) → ψ(x,t) would result in the following large scale dynamics for the neural mass action ψ(x,t), with Q = N and S = H,

ψ̇(x,t) = Q(ψ(x,t)) + ∫_Ω ∫ h(x − x′) S(ψ(x − x′, t − t′)) dt′ dx′ .   (46)

However, in general the intrinsic dynamics N and the activation function H are nonlinear, and residual terms arise which are here notationally absorbed in Q and S. The intrinsic, sometimes also called endogenous, dynamics N of the neural mass action is defined by the temporal evolution of ψ(x,t) in the absence of all incoming signals, including the connections to other neural masses. In the following we will discuss representative models from this line of approach and characterize the various entry points towards large scale network modeling.

We place particular emphasis on the functional effects that the variation of structural properties, such as local and global connectivity and time delays, implies.

Amari's Neural Field Model (1977)

A classic paper on networks with no delay and symmetric, translationally invariant connection topologies is Amari's study of neural fields (Amari 1977). Amari discussed spatially and temporally continuous fields ψ(x,t) with local fixed point dynamics as the intrinsic dynamics. Then the field equations may be written as

τ ∂ψ(x,t)/∂t = −ψ(x,t) + ∫_Ω h(x − x′) S(ψ(x′,t)) dx′ + c + s(x,t) ,   (47)

where S is strongly nonlinear, typically the Heaviside function, and h(x − x′) is excitatory for proximate connections and inhibitory for greater distances (see Fig. 28 and 29). s(x,t) denotes external input and c a constant resting potential and background activity.

In this type of scalar neural field, oscillations are not possible, but locally excited regions may exist and self-sustain in the absence of input, s(x,t) = 0, which is believed to be a candidate for the neuronal basis of working memory (Amit 1989). If input is provided, then the locally excited regions travel in the direction of increasing field value ψ(x,t) until they get pinned at the stimulus location.
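A minimal discretization of Eq. (47) illustrates such a self-sustained local excitation (kernel, threshold and stimulus values are all hypothetical): a transient stimulus switches a localized region above threshold, and the "Mexican hat" connectivity keeps the bump alive after the stimulus is removed.

```python
import numpy as np

# Discretized 1-D Amari-type field (hypothetical parameters):
#   tau * dpsi/dt = -psi + integral of h * S(psi) + c + s(x,t)
# with Heaviside firing function S and a "Mexican hat" kernel h
# (excitatory nearby, inhibitory at larger distances).
n, L = 200, 40.0
x = np.linspace(0, L, n, endpoint=False)
dx = L / n
tau, c = 1.0, -0.5                       # c < 0: rest lies below threshold

d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, L - d)                 # periodic boundary for simplicity
h = 2.0 * np.exp(-d**2 / 2.0) - 0.6 * np.exp(-d**2 / 18.0)

psi = np.full(n, c)                      # start at the resting level
dt = 0.1

def step(psi, s):
    S = (psi > 0).astype(float)          # Heaviside firing function
    return psi + dt / tau * (-psi + h @ S * dx + c + s)

stim = np.where(np.abs(x - 20.0) < 2.0, 2.0, 0.0)
for _ in range(100):                     # stimulus on
    psi = step(psi, stim)
for _ in range(400):                     # stimulus off: bump self-sustains
    psi = step(psi, 0.0)

bump_width = (psi > 0).sum() * dx        # persistent local excitation
```

After stimulus offset the excited region neither dies out nor spreads across the whole field, which is the persistent-activity behavior invoked as a simple working memory model.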

If several stimuli are provided, then the details of stimulus location and the presence of already excited local regions will determine the typically multi-stable final network dynamics. Characteristic examples are shown in Figs. 30(a), (b) and (c).

In all cases, the final stationary network state will be a fixed point attractor. It was these properties that attracted the attention of neural modelers, who applied these fields to a variety of phenomena ranging from working memory (Amit 1989) to motor movement preparation (Erlhagen & Schöner 2002). If two or more layers are coupled (Amari 1977), then a more complex dynamics arises, allowing for oscillatory and traveling wave phenomena.

The Neural Field Models of Wilson & Cowan (1972, 1973) and Nunez (1974)

Hugh Wilson & Jack Cowan (1972, 1973) and Paul Nunez (1974) independently considered two complementary approaches, of which each is based upon

Fig. 30(a). The space-time diagram of an Amari field is shown. Initially the neural field is not excited; then a stimulus is introduced around 1000 ms at location x = 35 (space is in arbitrary units). At stimulus offset around 1300 ms, the neural field sustains its local excitation. At a later time point, another stimulus is introduced at x = 10 for 300 ms. Here the neural field also persists after stimulus offset. Such persistent activity serves as a simple model for working memory

Fig. 30(b). The same situation is shown as in Fig. 30(a), only the second stimulus is provided closer in space, x = 25, to the first stimulus and annihilates the excitation at x = 35. The second local excitation persists unaltered

Fig. 30(c). The same situation is shown as in Fig. 30(b), only the second stimulus is now provided even closer in space, x = 30, to the first stimulus than before. This time it does not annihilate the excitation at x = 35; on the contrary, both excitations move towards each other and merge into one excitation. In the figure, it appears that the excitation at x = 35 moves more than the other, which is true

two sets of locally coupled neural masses of inhibitory and excitatory neurons. Wilson & Cowan considered the firing rate as the neural mass action; Nunez considered synaptic action, which is the proportion of active synapses at time t and is linearly related to dendritic currents. The firing rate of neural masses has been referred to as pulses and the synaptic action as waves (Freeman 1975). Jirsa & Haken (1996, 1997) showed that both models are equivalent and can be transformed into each other using so-called pulse-wave and wave-pulse conversions, which are independently experimentally accessible (Freeman 1975). Both models consider time delays via propagation. Delays are absent in Amari's model, which constrains the latter's applicability in biologically realistic scenarios to small patches of cortical tissue. Time delays become increasingly important the larger the scale of the network.
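The qualitative gain from coupling excitatory and inhibitory masses can be sketched with a minimal two-population rate model in the spirit of Wilson & Cowan. The connection weights and the tanh activation below are hypothetical illustration values, not the original parameters: the point is only that an excitatory-inhibitory loop supports sustained oscillations, which a single scalar Amari-type field cannot produce.

```python
import numpy as np

# Hypothetical excitatory (E) / inhibitory (I) rate model in the spirit
# of Wilson & Cowan; weights and activation are illustration values.
w_ee, w_ei, w_ie, w_ii = 2.4, 2.0, 2.0, 0.0
dt, steps = 0.01, 20000
E, I = 0.1, 0.0                       # small perturbation off rest

Es = np.empty(steps)
for t in range(steps):
    dE = -E + np.tanh(w_ee * E - w_ei * I)   # recurrent excitation - inhibition
    dI = -I + np.tanh(w_ie * E - w_ii * I)   # inhibition driven by E
    E += dt * dE
    I += dt * dI
    Es[t] = E

# Peak-to-peak excursion over the second half: a sustained limit cycle.
amplitude = float(np.ptp(Es[steps // 2:]))
```

With these weights the resting state is an unstable focus and the saturating activation bounds the trajectory, so the E-I pair settles onto a stable limit cycle rather than a fixed point.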

In Nunez's early work (1974), his focus was on identifying the dispersion relations of the linearized neural field dynamics given specific distributions of intracortical and corticocortical fiber systems. The intracortical fiber system is constrained to the gray matter and its axons make connections within a few millimeters; the corticocortical fiber system constitutes the white matter and connects areas across the entire cortex with axonal lengths of several centimeters in the human (Abeles 1991; Braitenberg & Schüz 1991), in some cases reaching lengths of up to 15 to 20 centimeters (Nunez 1995). The excitatory synaptic action ψ1(x,t) and inhibitory synaptic action ψ2(x,t) compose the neural mass action and define one excitatory and one inhibitory layer (see Fig. 31). The dynamics of the two-dimensional neural field is governed by the following equation

∂ψ(x,t)/∂t = −ψ(x,t) + s(x,t) + ∫_Ω ∫_0^∞ h(x − x′, v) S(ψ(x′, t − |x − x′|/v)) dv dx′ ,   (48)

where ψ(x,t) = (ψ1(x,t), ψ2(x,t)), s(x,t) is the input to the two layers, h(x − x′, v) defines a matrix describing the distribution of axonal fibers, S is the sigmoid firing rate and Ω defines the spatial extent of the neural sheet. Due to the finite transmission speed v, there is a time delay |x − x′|/v via propagation.

The connectivity function h(x − x′, v) is a 2 × 2 matrix, since ψ(x,t) is a 2-dimensional vector field, and considers both intracortical and corticocortical fibers collapsed into one distribution function. The synaptic influence is assumed to diminish in proportion to its density; in particular, Nunez extrapolated h from mouse data (Nunez 1995) to assume an exponential form,

h(x) = exp(−|x|/a)/(2a) ,   (49)

as illustrated in Fig. 32, with the rate of drop-off captured by the parameter a.
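A minimal numerical sketch shows how the propagation delay |x − x′|/v enters a discretized field with the exponential kernel (49). For simplicity the 2 × 2 matrix structure is collapsed to a single scalar layer, and all parameter values are hypothetical: activity injected at one site reaches distant sites only after the corresponding propagation delay.

```python
import numpy as np

# One-layer sketch of a delayed neural field with the exponential kernel
# h(x) = exp(-|x|/a)/(2a); all parameter values are hypothetical.
n, L, v, dt, a = 100, 10.0, 2.0, 0.01, 1.0
x = np.linspace(0, L, n)
dx = x[1] - x[0]

dist = np.abs(x[:, None] - x[None, :])
h = np.exp(-dist / a) / (2 * a)
delay = (dist / v / dt).astype(int)          # delay |x - x'|/v in time steps
max_delay = int(delay.max())

steps = 800
psi = np.zeros((max_delay + steps + 1, n))   # buffer of past field values
cols = np.arange(n)[None, :]
stim = np.exp(-((x - 5.0) ** 2))             # transient input near x = 5

for t in range(max_delay, max_delay + steps):
    delayed = psi[t - delay, cols]           # psi(x', t - |x - x'|/v)
    s = stim if t < max_delay + 200 else 0.0
    drive = (h * np.tanh(delayed)).sum(axis=1) * dx
    psi[t + 1] = psi[t] + dt * (-psi[t] + drive + s)

early = psi[max_delay + 100]                 # t = 1: wave not yet at x = 0
late = psi[max_delay + 700]                  # t = 7: delayed response arrived
```

The site at x = 0 lies 5 length units from the stimulus, so with v = 2 its response is delayed by 2.5 time units; snapshots before and after that delay make the finite propagation speed visible.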

The inhibitory connectivity is of short range and the excitatory connectivity is of long-range since the latter is dominated by the corticocortical fiber
