Fig. 22. Phase portrait and time series for the Hindmarsh-Rose model. Parameters: I = 3.2, a = 1.0, b = 3.0, c = 1.0, d = 5.0, s = 4.0, x0 = -1.60, r = 0.

x equations as an additive term, in the same manner as I although opposite in sign. Also, setting r small ensures that z evolves on an intrinsically slower time scale than x and y. Together, these constructions ensure that z acts like a slowly varying synaptic current, albeit one which, due to the x term in (45), is also state dependent. Hence, as z becomes more negative, it acts like the bifurcation parameter in the FitzHugh-Nagumo model and precipitates, via a subcritical Hopf bifurcation, a run of depolarizations. However, due to the x term in (43), these depolarizations have the (relatively slow) effect of increasing z. Eventually the depolarizations are terminated as the reduced effective contribution of z to the total synaptic current restabilizes the fixed point via a saddle-node bifurcation. Hence the system, as shown in Fig. 23, exhibits bursts of spikes interspersed with quiescent phases. Indeed, with r = 0.006, the system exhibits this pattern in a chaotic fashion.

Note that, as discussed, the fast spikes are far more evident in the dynamics of x whereas the dynamics of z are more sensitive to the bursts.
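The slow-fast mechanism described above is straightforward to reproduce numerically. Below is a minimal sketch in Python (using NumPy and SciPy), assuming the standard Hindmarsh-Rose equations dx/dt = y - ax^3 + bx^2 - z + I, dy/dt = c - dx^2 - y, dz/dt = r(s(x - x0) - z) and the parameter values of Fig. 23; the variable and function names are illustrative rather than taken from the text.

import numpy as np
from scipy.integrate import solve_ivp

# Parameters as in Fig. 23; r = 0.006 yields chaotic bursting
a, b, c, d, s, x0, I, r = 1.0, 3.0, 1.0, 5.0, 4.0, -1.6, 3.2, 0.006

def hindmarsh_rose(t, state):
    x, y, z = state
    dx = y - a * x**3 + b * x**2 - z + I   # fast, voltage-like variable
    dy = c - d * x**2 - y                  # fast recovery variable
    dz = r * (s * (x - x0) - z)            # slow, state-dependent adaptation current
    return [dx, dy, dz]

t_eval = np.linspace(0.0, 2000.0, 40000)
sol = solve_ivp(hindmarsh_rose, (0.0, 2000.0), [-1.6, 0.0, 2.0],
                t_eval=t_eval, rtol=1e-8, atol=1e-8)
x, z = sol.y[0], sol.y[2]   # x carries the fast spikes; z tracks the slow burst envelope

Plotting x(t) should reproduce the burst-and-quiescence pattern of Fig. 23(b), while z(t) varies slowly over the course of each burst.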

Fig. 23. Phase portrait (a) and time series (b, c) for the Hindmarsh-Rose model. Parameters: I = 3.2, a = 1.0, b = 3.0, c = 1.0, d = 5.0, s = 4.0, x0 = -1.60, r = 0.006.

Such burst-spiking is of particular interest because it occurs in classes of thalamic neurons, and in many cortical neurons during slow-wave sleep, when this activity is observed synchronously across the scalp (McCormick & Bal 1997). Interactions between such systems can be introduced by coupling one of the fast variables, for example,

dx_{1,2}/dt = y_{1,2} - a x_{1,2}^3 + b x_{1,2}^2 - z_{1,2} + I + C (x_{2,1} - x_{1,2}),    (44)

where C is the coupling parameter. Two such simulations are shown in Fig. 24, where blue and green time series denote the two systems. In the top row, with C = 0.35, the bursts are coincident but the spikes are often discordant. However, with C = 0.5, the spikes are also synchronized. This interesting phenomenon of burst and then spike synchrony, studied in detail by Dhamala et al. (2004), has been observed experimentally.
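The coupling scheme in (44) can be sketched in the same way. The following Python fragment, again assuming the standard Hindmarsh-Rose form used above, couples two such systems through their fast variables for the two values of C shown in Fig. 24; names and initial conditions are illustrative.

import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d, s, x0, I, r = 1.0, 3.0, 1.0, 5.0, 4.0, -1.6, 3.2, 0.006

def coupled_hr(t, u, C):
    x1, y1, z1, x2, y2, z2 = u
    def cell(x, y, z, x_other):
        dx = y - a * x**3 + b * x**2 - z + I + C * (x_other - x)   # coupling term as in (44)
        dy = c - d * x**2 - y
        dz = r * (s * (x - x0) - z)
        return dx, dy, dz
    return [*cell(x1, y1, z1, x2), *cell(x2, y2, z2, x1)]

u0 = [-1.6, 0.0, 2.0, -1.0, 0.5, 2.5]   # slightly mismatched initial conditions
for C in (0.35, 0.5):
    sol = solve_ivp(coupled_hr, (0.0, 2000.0), u0, args=(C,),
                    t_eval=np.linspace(0.0, 2000.0, 40000), rtol=1e-8, atol=1e-8)
    x1, x2 = sol.y[0], sol.y[3]
    # At C = 0.35 the bursts coincide but the spikes differ; at C = 0.5 the spikes also lock
    print(C, np.corrcoef(x1, x2)[0, 1])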

3.4 Coupled Chaos in a Mesoscopic Model

The Hindmarsh-Rose model introduces an extra term, incorporating a slow calcium current, into a planar model of an excitable neuron. An alternative extension of planar models is to introduce a single variable Z representing feedback from an inhibitory neuron. The inhibitory and excitatory neurons interact via synaptic currents induced through their mutual connectivity. Such a model takes the form (e.g. Larter et al. 1999),

dV/dt = g_Na m_∞(V) (V - V_Na) + g_K n(V) (V - V_K) + g_L (V - V_L) + ...

Local connectivity is parameterized by the coupling parameters a between the inhibitory (Z) and excitatory (V) cells, and via input from the external noise term I.

Fig. 24. Coupled Hindmarsh-Rose systems with C = 0.35 (top row) and C = 0.5 (bottom row).

Fig. 25. Generalized chaotic synchronization in a mesoscopic neuronal model. (a) Chaotic attractor; the orbits are organized around a manifold that is homoclinic to the unstable spiral. (b) Time series of excitatory membrane potentials in two coupled systems, showing apparent synchronization. (c) Their co-evolution shows a smooth manifold slightly off the state of identical synchrony V1 = V2.


The functions F and G model the feedback between the inhibitory and excitatory cells. Within physiologically realistic parameter ranges, such a system can exhibit chaotic dynamics, as shown in Fig. 25(a), organized around a homoclinic orbit.

Synaptic coupling between the excitatory neurons in two such populations of cells allows construction of a mesoscopic neuronal model - a system at the intermediate scales between single neurons and the large scale systems considered in the following section. An example of synchronization between two such subsystems is illustrated in Fig. 25(b-c), where a single parameter in each system has been set with a small mismatch (all other parameters are equal). Whilst the time series appear identical (panel b), a plot of the values of V1 versus V2 (panel c) reveals that their co-evolution, whilst close to the diagonal, is nonetheless confined to a nearby smooth manifold. This form of non-identical synchronization is known as generalized chaotic synchronization (Afraimovich et al. 1986; Rulkov et al. 1995). Further details and examples of more complex behaviors - such as intermittency, scale-free dynamics and travelling waves - can be found in Breakspear et al. (2003, 2005).
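The diagnostic of panel (c) is easy to reproduce from any pair of simulated time series. A minimal sketch in Python follows, assuming v1 and v2 are NumPy arrays of excitatory membrane potentials from the two coupled subsystems (however they were generated); the distance measure is an illustrative choice, not the statistic used in the cited studies.

import numpy as np

def identity_manifold_deviation(v1, v2):
    # RMS perpendicular distance of the co-evolution (v1(t), v2(t)) from the
    # diagonal v1 = v2. Zero indicates identical synchronization; a small but
    # systematic value is consistent with a smooth manifold lying near, but off,
    # the diagonal, i.e. generalized rather than identical synchronization.
    v1 = np.asarray(v1, dtype=float)
    v2 = np.asarray(v2, dtype=float)
    return np.sqrt(np.mean((v1 - v2) ** 2 / 2.0))

# Usage, given two simulations with a small parameter mismatch:
#   dev = identity_manifold_deviation(v1, v2)
#   plt.plot(v1, v2, '.', ms=1)   # panel (c)-style plot: points hug a curve near the diagonal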

This concludes our survey of basic, small-scale neural systems. We hope to have illustrated the power of combining analysis and geometry in elucidating some of the fundamental properties of neurons. We now turn to macroscopic models.

4 From Small to Large Scale Models

Large scale neural networks are thought to be involved in the implementation of the brain's cognitive functions (Mesulam 1990; Bressler 1995, 2002, 2003; Bullmore et al. 1996; Mountcastle 1998; McIntosh 2000; Bressler & Kelso 2001; Jirsa 2004; Bressler & Tognoli 2006; Bressler & McIntosh 2007). To understand the neural basis of cognition, theoretical and analytical means must be developed which are specifically targeted at the properties of large scale network dynamics. Such theoretical understanding will also guide the interpretation of the enormous data sets obtained from non-invasive brain imaging. The functional expression of a cognitive operation seems to require the co-activation of certain subnetworks. Such co-activation does not necessarily require the simultaneous activation of all network components, but may be expressed in a characteristic spatio-temporal network dynamics with both simultaneous and sequential activations. The properties of the network dynamics depend crucially on the interconnectivity of the network components and their dynamics (Sporns 2002; Sporns & Tononi 2002, 2007; Jirsa 2004; Beggs et al. 2007). The goal of any large scale description of neural dynamics is to reconstruct all relevant spatiotemporal dynamics of the neural system while preserving the mechanisms which give rise to the observed dynamics. Large scale models implicitly assume that their basic neurocomputational units are more macroscopic than single neurons. This approach is to be juxtaposed with the high-dimensional computation of the full network composed of microscopically complex neurons with dendritic and axonal ion channel dynamics, as well as pre- and postsynaptic processes. Large scale models also hold the promise of providing insight into the dynamics-generating mechanisms of the underlying network, by virtue of their reduced complexity. Finally, large scale models are easier and less time-consuming to solve computationally. The following sections discuss the various schools of thought in large scale network modeling and characterize them from the perspective of anatomical and functional connectivity, the latter identified with the dynamics of the network.

4.1 Non-reducible Dynamics of Neuronal Ensembles

A large scale model is composed of microscopic units or atoms which do not represent individual neurons, but rather complexes, also referred to as neural masses (Beurle 1956), capturing the non-reducible dynamics of a set of neurons. Such complexes may either be localized in physical space and defined within a volume element at a location x, or distributed over physical space and defined functionally (e.g. in K-sets, as discussed below (Freeman 1975, 1992)). Though the former is more common, in practice the two variants often coincide due to stronger local connectivity and the resulting co-activations ("what wires together, fires together"). Unlike many subcortical structures, in which neurons are packed into nuclei, the cortical sheet appears at first sight as a dense homogeneous medium with no obvious demarcation of its components. Corticocortical columns typically consist of 5,000 to 10,000 neurons, while macrocolumns contain 10^5 to 10^6 neurons (Nunez 1995). Yet there are a number of anatomical tracing studies which indicate mutual anatomical and functional parcellation (Szentagothai 1975; Mountcastle 1978). For instance, macrocolumns form functional units in sensory areas with homogeneous tuning properties inside the unit, but sharp differences between neighboring units (Mountcastle 1978). For our purposes, the neural mass is a highly connected set of neurons, sharing common input and output pathways and a specialized low-level function. The activity of a neural mass (also known as neural mass action) in a large scale model is described by an m-dimensional vector variable Ψ(x,t) = (Ψ_1(x,t), Ψ_2(x,t), ..., Ψ_m(x,t)) at a discrete location x in physical space and a point t in time. The variable Ψ(x,t) is also referred to as a neural population, neural assembly or neural ensemble activity. If the distance between neighboring neural masses is infinitesimally small, then the physical space x is continuous and Ψ(x,t) is referred to as a neural field. Since the neural mass action is physically generated by the N neurons within the neural mass, there will be a mapping Φ : Z(x,t) → Ψ(x,t) which unambiguously relates the high-dimensional neuron activity Z(x,t) = (Z_1(x,t), Z_2(x,t), ..., Z_N(x,t)) to the neural mass action Ψ(x,t). Here Z_i(x,t) is the n-dimensional state vector of the i-th neuron, with i = 1, ..., N. For concreteness, a neural mass may contain N = 10,000 neurons with n = 2 in the case of a FitzHugh-Nagumo neuron model. The situation is shown in the cartoon at the bottom of Fig. 26. Here a cortical sheet is shown which is decomposed into color-coded patches representing neural masses. Within a neural mass, the local connectivity of a single neuron is illustrated through the density of its connections (red squares), which decreases with increasing distance. The partial overlap of the neural masses indicates that the synaptic connections of a neuron may belong to different neural masses. The critical step in the development of a large scale model occurs through the mapping Φ : Z(x,t) → Ψ(x,t), when the activity Z(x,t) = (Z_1(x,t), Z_2(x,t), ..., Z_N(x,t)) of a given neural mass is replaced by its neural mass action Ψ(x,t) = (Ψ_1(x,t), Ψ_2(x,t), ..., Ψ_m(x,t)), where m << N. The nature of this relation between the neuron activity Z(x,t) and the neural mass action Ψ(x,t) is generally non-trivial and involves a mean-field reduction, which will be discussed in the next section. At the top of Fig. 26 the neural network dynamics is now captured by locally coupled neural mass actions Ψ(x,t) assigned to each neural mass at location x = X_i. Each neural mass is locally (as indicated at location X_5) and globally (as indicated at location X_1) connected.
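Although the mapping Φ is in general non-trivial, its simplest illustrative choice is the ensemble mean, which the following Python sketch applies to a single neural mass of FitzHugh-Nagumo neurons (so n = 2 and, here, m = 2). The FitzHugh-Nagumo parameters and the heterogeneous drive are assumptions made for the purpose of the example only.

import numpy as np

rng = np.random.default_rng(0)
N, dt, T = 10_000, 0.05, 2000                  # neurons per mass, time step, number of steps
a_fn, b_fn, tau, I_mean = 0.7, 0.8, 12.5, 0.5  # illustrative FitzHugh-Nagumo parameters

v = rng.normal(-1.0, 0.1, N)                   # per-neuron state Z_i = (v_i, w_i), n = 2
w = rng.normal(-0.6, 0.1, N)
I_i = I_mean + 0.1 * rng.standard_normal(N)    # heterogeneous drive across the mass

psi = np.empty((T, 2))                         # neural mass action Psi(x, t), m = 2
for k in range(T):
    dv = v - v**3 / 3.0 - w + I_i              # FitzHugh-Nagumo fast variable
    dw = (v + a_fn - b_fn * w) / tau           # FitzHugh-Nagumo recovery variable
    v += dt * dv
    w += dt * dw
    psi[k] = v.mean(), w.mean()                # Phi : Z(x, t) -> Psi(x, t) as the ensemble mean

The mean-field reduction discussed in the next section addresses how such a relation can be obtained without simulating all N neurons explicitly.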

Rather than solving the complete network for the state vectors Z(x,t) of all neurons, the large scale network can now be solved using the neural mass action Ψ(x,t), as indicated in the following: a large scale model representation is successful if the large scale model simulation provides the same neural mass
