
Fig. 3. A trajectory in state space. Three axes, x1, x2, and x3, are shown, which could represent the states of three neurons. By plotting the values of the network state variables (x1, x2, x3) at times t, t + 1, and t + 2, a succession of positions can be linked to form a trajectory through state space. The trajectory is shown here as a bent arrow

Distances between two points X and Y in state space can be measured by some metric, like the Euclidean distance:

d_{XY} = \sqrt{(y_1 - x_1)^2 + (y_2 - x_2)^2 + \cdots + (y_N - x_N)^2}
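As a concrete illustration, here is a minimal sketch of this calculation in Python, assuming the two network states are given as NumPy arrays (the function name euclidean_distance is ours, not from the text):

```python
import numpy as np

def euclidean_distance(x, y):
    """Euclidean distance between two network states,
    each given as a vector with one entry per neuron."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.sqrt(np.sum((y - x) ** 2))

# Example: distance between two states of a three-neuron network
print(euclidean_distance([0.2, 0.5, 0.1], [0.4, 0.1, 0.3]))
```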

Other metrics are also suitable. For example, the Hamming distance, which is just the number of digits that differ between two binary numbers (e.g., [1 0 1] and [0 0 1] have a Hamming distance of 1), could be used for a network with only binary neurons. The rate of growth in the distance between two initially close trajectories can be quantified by the Lyapunov exponent λ (Wolf A et al., 1985), which is related to the distance between the trajectories at two points in time. This is illustrated in Fig. 4, where trajectories begin from two points that are close together in state space. The distance between these two starting points is measured as dstart. The network is allowed to evolve over time from each point, causing two trajectories to be traced out in state space. After a time T, the distance between two points on the trajectories is measured as dfinish. The Lyapunov exponent in bits/sec is then given by:

\lambda = \frac{1}{T} \log_2\!\left(\frac{d_{\mathrm{finish}}}{d_{\mathrm{start}}}\right)

In practice it is good to keep T small so that λ will closely approximate the instantaneous divergence between the trajectories. By manipulating this equation, we can see more clearly how λ describes the exponential rate at which two trajectories separate in state space after T time steps:
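d_{\mathrm{finish}} = d_{\mathrm{start}} \cdot 2^{\lambda T}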

Fig. 4. The Lyapunov exponent quantifies dynamics. A, Converging trajectories. Two trajectories in state space, shown as curved lines with arrowheads, are separated by a distance dstart at time t. At time t + T, they are separated by a distance of dfinish. The ratio (dfinish/dstart) can be used to determine whether or not the trajectories are flowing together over time. In this case, they become closer over time, indicating attractive dynamics. B, Parallel trajectories. Here, the ratio (dfinish/dstart) is one, indicating neutral dynamics. C, Diverging trajectories. Here the ratio (dfinish/dstart) is greater than one, indicating chaotic dynamics

The Lyapunov approach to quantifying discrete network dynamics has been developed by Derrida and colleagues (Derrida B and Y Pomeau, 1986; Derrida B and G Weisbuch, 1986), as well as by others (Bertschinger N and T Natschlager, 2004). This method is especially useful when working with simulated networks, where it is easy to start a network from a particular point in state space by just specifying the values of all the state variables. The simulation can then be run for T time steps to produce a trajectory. It is also easy to produce a trajectory from a nearby point in state space and to measure the resulting distances between trajectories. For living neural networks, however, this approach is more difficult to implement, as it is presently impossible to specify all the state variables at a given time. Electrical stimulation can overcome this to some extent by causing a subset of neurons to all be active at the same time; trajectories after stimulation can then be measured. But background activity cannot be completely controlled, and it has been found to play a large role in determining network responses to stimulation (Arieli A et al., 1996).
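To make this simulated-network procedure concrete, here is a minimal sketch assuming a simple random threshold network of binary units; the update rule, weights, and parameter values are our own illustrative choices, not those used in the cited studies. A network copy is started from a state differing in one unit, both copies are run for T steps, and λ is estimated from the Hamming distances.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                              # number of binary units (illustrative)
W = rng.normal(0, 1.0 / np.sqrt(N), size=(N, N))     # random connection weights

def step(state):
    """One synchronous update of a simple threshold network (illustrative rule)."""
    return (W @ state > 0).astype(int)

def hamming(a, b):
    """Hamming distance: number of units whose states differ."""
    return int(np.sum(a != b))

T = 10
x = rng.integers(0, 2, size=N)     # initial state, fully specified
y = x.copy()
y[0] = 1 - y[0]                    # nearby starting point: flip one unit

d_start = hamming(x, y)
for _ in range(T):
    x, y = step(x), step(y)
d_finish = hamming(x, y)

# Lyapunov-style divergence estimate (bits per time step); -inf if the copies converged
lam = np.log2(d_finish / d_start) / T if d_finish > 0 else -np.inf
print(lam)
```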

There are three general types of dynamics that can be identified with this method. Attractive dynamics is characterized by λ < 0, causing nearby trajectories to become closer over time (Fig. 4A). Systems dominated by attractive dynamics are very stable and have one or more basins of attraction. In the attractor model that we previously described, these basins would lead to attractor states that could represent configurations stored in long-term memory. However, these networks are so stable that it is difficult to control their trajectories and steer them away from attractors. Perturbations to the network are mostly ineffective at changing the state that it settles into. Neutral dynamics is characterized by λ ≈ 0, causing nearby trajectories to preserve their distance over time (Fig. 4B). Here, perturbations to the network produce commensurate changes in output. Systems with predominantly neutral dynamics are therefore marginally stable, meaning that trajectories will largely persist in their given course under mild perturbations. With the appropriate inputs, it is possible to control trajectories in networks with neutral dynamics. Chaotic dynamics is characterized by λ > 0, causing nearby trajectories to become more separated over time (Fig. 4C). Small perturbations are amplified, making these networks intrinsically unstable and difficult, but not impossible, to control (Ding M et al., 1996; Ditto WL and K Showalter, 1997).
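For reference, this three-way classification can be expressed as a small helper function; the tolerance used for "approximately zero" is an arbitrary choice of ours.

```python
def classify_dynamics(lyapunov_exponent, tol=1e-3):
    """Label the dynamics from the sign of the Lyapunov exponent (tolerance is arbitrary)."""
    if lyapunov_exponent < -tol:
        return "attractive"   # nearby trajectories converge
    if lyapunov_exponent > tol:
        return "chaotic"      # nearby trajectories diverge
    return "neutral"          # nearby trajectories keep roughly the same separation
```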

The Lyapunov exponent can be used to describe trajectories in all regions of state space that are visited by the network. However, just because one region of state space shows attractive dynamics does not necessarily mean that all other regions will do so as well. Figure 5 shows that state space can contain a variety of features: fixed points, saddle points, and limit cycles. The trajectories passing through a fixed point are all either leading into the point or leading away from it. If they lead into the point, it is a stable fixed point; if they lead away from it, it is an unstable fixed point. The attractor network model discussed previously uses stable fixed points to encode long-term memories. Unstable fixed points are repulsive to trajectories

Fig. 5. Example features of state space. A, A stable fixed point has only trajectories leading into it. B, A saddle point has trajectories leading into it along one axis, but leading out of it along another axis. C, D, Limit cycles are trajectories that are closed on themselves

and are rarely visited by the system. Saddle points are both attractive and repulsive, depending on the direction from which they are approached. They have trajectories that lead into them from one direction, and trajectories that lead away from them in another direction. Trajectories will often move toward a saddle point, only to be repulsed from it when they get too close. They then may be drawn toward another saddle point and repulsed again, displaying itinerant behavior as they visit different saddle points (Rabinovich M et al., 2001). Limit cycles occur when the network continually oscillates in state space, as represented by trajectories that form closed loops with themselves.

To fully characterize the dynamics, one would have to map the entire state space of the system. In practice this is impossible, so most experiments report only the dynamics seen in a small subset of the state space. Fortunately, characterizing the dynamics in some reduced-dimensional space is often good enough to give an approximate picture of the dynamical system as a whole. How to reduce dimensionality, and which variables may be omitted, is beyond the scope of this chapter. The reader is referred to (Abarbanel HD and MI Rabinovich, 2001; Abarbanel MDI, 1996; Kantz H and T Schreiber, 2004; Strogatz SH, 1994) for more detailed discussions of this topic.

5 How Connectivity Influences Local Network Dynamics

With the methods described above, we can now examine how network connectivity influences dynamics. Since it is difficult to manipulate connectivity in living neural networks, we will only discuss here results from computational models. In what follows, we will introduce a simple model with tunable connectivity. We will show that this model qualitatively captures the main features of network activity observed in some experiments. We will then manipulate the connectivity of the model to explore its effects on dynamics.

Consider a network model with N neurons or processing units, each allowed to be in only one of two states, either active (1) or inactive (0). To allow for complete generality, let us connect each unit to every other unit (Fig. 6A). We can later control the strengths of these connections, setting some of them to zero, so as to sculpt the connectivity of the network. Units can become active in one of two ways, either through spontaneous or driven activity. Each unit will have some small probability, pspont, of being spontaneously active at a given time step. A unit may also become active if it is driven by another active unit that makes a connection with it. If a unit is not spontaneously active or driven at a given time step, it will be inactive.
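A minimal sketch of one time step of this model is given below, assuming that each connection strength W[i, j] is interpreted as the probability that active unit j drives unit i on the next step; this probabilistic reading of "driven", and all parameter values, are our assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def update(state, W, p_spont):
    """One time step of the binary network sketched above.

    state   : length-N vector of 0s and 1s (current activity)
    W       : N x N matrix; W[i, j] is taken here as the probability that
              active unit j drives unit i (an assumed interpretation)
    p_spont : probability of spontaneous activation per unit per time step
    """
    N = len(state)
    spontaneous = rng.random(N) < p_spont
    # A unit is driven if at least one currently active unit j succeeds in driving it.
    p_not_driven = np.prod(np.where(state == 1, 1.0 - W, 1.0), axis=1)
    driven = rng.random(N) < (1.0 - p_not_driven)
    # Inactive unless spontaneously active or driven this step.
    return (spontaneous | driven).astype(int)

# Example: 8 fully connected units with weak connections; zeroing entries of W
# would sculpt the connectivity as described in the text.
N = 8
W = np.full((N, N), 0.1)
np.fill_diagonal(W, 0.0)          # no self-connections
state = np.zeros(N, dtype=int)
for t in range(5):
    state = update(state, W, p_spont=0.01)
    print(t, state)
```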

