Connectivity and Dynamics in Local Cortical Networks

John M Beggs, Jeffrey Klukas and Wei Chen

Department of Physics, Indiana University

727 East Third St., Bloomington, IN 47405-7105

[email protected]

Recent experimental work has begun to characterize activity in local cortical networks containing thousands of neurons. There has also been an explosion of work on connectivity in networks of all types. It would seem natural, then, to explore the influence of connectivity on dynamics at the local network level. In this chapter, we will give an overview of this emerging area. After a brief introduction, we will first review early neural network models and show how they suggested attractor dynamics of spatial activity patterns, based on recurrent connectivity. Second, we will review physiological reports of repeating spatial activity patterns that have been influenced by this initial concept of attractors. Third, we will introduce tools from dynamical systems theory that will allow us to precisely quantify neural network dynamics. Fourth, we will apply these tools to simple network models where connectivity can be tuned. We will conclude with a summary and a discussion of future prospects.

1 Introduction

The advent of fMRI and other imaging technology has spawned a deluge of research examining how the brain functions at the macroscopic level. This work, which treats each voxel as the basic unit of analysis, has yielded tremendous insights into how networks of cortical regions cooperate to produce motor activity (Jantzen KJ et al., 2005; Rowe J et al., 2002), memory (Fletcher P et al., 1999), cognition (Mechelli A et al., 2004; Stephan KE et al., 2003) and emotion (Canli T et al., 2002). But within each voxel there lie perhaps tens of thousands of neurons that are connected into local networks, performing elementary computations that are fundamental to the brain's higher functions. Relatively little experimental work has been done at this mesoscopic level, despite the existence of a large literature on neural network theory and models. This chapter will focus on the relationship between network connectivity and dynamics at this level, with the hope that the principles uncovered here will be generally applicable to networks at larger scales as well. In addition, this chapter will emphasize local networks of cortical neurons, since this is the area within the mesoscopic level where the most experimental work has been done.

2 Attractors in Early Models of Local Neural Networks

The simplest neural network models have projections only from one layer of neurons to the next, in what is called a "feed-forward" architecture. While these models can do many impressive things, they cannot exhibit dynamics in the true sense because their outputs are never fed back as inputs to the network. Their activity changes from layer to layer, but their final output is given only at one point in time. By contrast, in recurrent networks, projections from some or all of the neurons feed back to the inputs of the network through recurrent collaterals. Recurrent networks can therefore generate activity in a given layer that changes over time with each loop of processing, thus demonstrating dynamics. Since real brains are filled with recurrent rather than purely feed-forward connections, recurrent networks are also much more realistic models of connectivity in living neural networks. For these reasons, we will only consider recurrent networks in what follows.

Much of the early work in recurrent network models was concerned with memory storage and retrieval. These simplified models demonstrated how groups of neurons could collectively store a spatial activity pattern embedded in connection strengths. An example of such a model is shown in Fig. 1. The five pyramidal, excitatory neurons have all-to-all recurrent connections. When three of the neurons are activated at the same time, synaptic connections between the active collaterals and active neurons are strengthened. This rule for changing synaptic strengths is called the "Hebb rule" after Donald Hebb, who most famously proposed it (Hebb DO, 1949), and is often summarized by the phrase "cells that fire together, wire together." Once these synaptic connections are strengthened by a Hebbian rule, the network has a distributed memory trace of the original configuration of the three active cells. The idea of synaptic strengths encoding memory is not new and can be traced back to Cajal (Ramon Y Cajal S, 1909), but the dynamics of this simple model was not appreciated until decades later. When a fragment of the original, stored configuration of cells is presented to the network, the network will have a tendency to use the fragment to reconstruct the original stored configuration. Active cells will recruit other cells from the stored pattern through recurrent collaterals and recently strengthened synaptic connections. The configuration of the network at each time step will thus become progressively more similar to the originally stored configuration. One way of describing this is to say that the network is attracted to the stored configuration. If the network configurations could be symbolized by binary strings and arrows could represent transitions over time, we would have [00100] → [10100] → [10110]. But note that several other initial configurations could also lead to this final stored configuration. For example, [10000] → [10010] → [10110] and [00010] → [00110] → [10110] are also pathways. All of those configurations that eventually lead to the stored configuration are said to be in the basin of attraction of the stored configuration. The stored configuration [10110] is called an attractor in this network. In larger models, it is possible to have many independent configurations stored as attractors within the same network.
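To make this toy dynamics concrete, the sketch below implements a five-neuron version of the idea in Python. It is a minimal illustration under stated assumptions, not any published model: it assumes Hopfield-style ±1 states, an outer-product Hebbian weight matrix, and synchronous threshold updates in which a tie keeps a neuron's previous state. Starting from the partial cue [00100], the network settles into the stored configuration [10110].

```python
import numpy as np

stored = np.array([1, 0, 1, 1, 0])           # the pattern [10110] from the text
s = 2 * stored - 1                           # map {0,1} states to {-1,+1}

W = np.outer(s, s).astype(float)             # Hebbian outer-product weights
np.fill_diagonal(W, 0.0)                     # no self-connections

state = 2 * np.array([0, 0, 1, 0, 0]) - 1    # partial cue [00100]
for step in range(3):
    h = W @ state                            # recurrent input to each neuron
    # Threshold update; a tie (zero input) keeps the previous state.
    state = np.where(h > 0, 1, np.where(h < 0, -1, state))
    print(f"step {step + 1}: {(state + 1) // 2}")   # back in {0,1} notation
```

After one update the cue has been completed to [10110], and further updates leave it unchanged: the stored configuration is a fixed point of the dynamics, i.e., an attractor.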

The model in Fig. 1 is representative of a whole class of influential models that employed recurrent connectivity and Hebbian learning to store spatial patterns.

Fig. 1. An attractor in a simplified recurrent network model. Network has five pyramidal cells. Straight lines represent axon collaterals, here wired to have all-to-all connectivity. A, Tabula rasa: a stimulus pattern activates three neurons, shown in black. B, Learning: Hebbian plasticity strengthens connections between active neurons and active axon collaterals, shown as triangular synaptic connections. C, Cue: some time later, a fragment of the original stimulus activates the middle neuron. D, Beginning recall: the active neuron now drives the newly strengthened synapses, shown in black. E, Further recall: activity in these new synapses activates another neuron from the stimulus pattern. F, Total recall: collective activity now drives the third neuron from the original pattern. Over time, the state of the network becomes more similar to the activity pattern seen in A. Note that any partial cue of the original pattern could lead to re-activation of the original pattern. After learning, the network configuration is said to be attracted to the state shown in A.


A precursor of this class was proposed by Steinbuch (Steinbuch K, 1961; Steinbuch K and H Frank, 1961) in his matrix memory model, which used co-activation to imprint connections and to store information associatively. Anderson's autoassociator model (Anderson JA et al., 1977), the Hopfield model (Hopfield JJ, 1982, 1984; Hopfield JJ and DW Tank, 1986) and models analyzed by Cohen and Grossberg (Cohen MA and S Grossberg, 1983) all used Hebbian learning and had all-to-all connectivity. An emergent property of these models, stemming in part from their connectivity, was that information could be stored in attractors, and that network activity would tend to settle into these attractors (Amit DJ, 1989). The models of Hopfield and Grossberg are also noteworthy for other reasons (connecting statistical physics to neural network theory; using time as an important variable in network dynamics) that are beyond the scope of this chapter. For our purposes, it is important to note that these models used recurrent connections and proposed that spatial information could be stored in attractors. Versions of this class of model were later elaborated by neuroscientists to explain how the hippocampus might store and retrieve memories (Rolls ET, 1990; Skaggs WE and BL McNaughton, 1992).

It is also worth noting that much work has been done on how even single neurons with recurrent connections can store temporal information in spike sequences (e.g., Foss, Longtin, Mensour and Milton, 1996; Foss and Milton, 2000). These sequences can be considered attractors, although they may not necessarily store spatial patterns of activity across many neurons, as we have been discussing. For further coverage of this interesting topic, the reader is referred to Sue Ann Campbell's chapter in this handbook.

This simple class of models, which stores spatial patterns of activity, was appealing to computational neuroscientists for several reasons. First, it seemed biologically plausible. As stated before, recurrent collaterals are abundant in the brain, and there is ample evidence that synapses can be strengthened according to a Hebbian rule (Kelso SR et al., 1986; Kirkwood A and MF Bear, 1994). Second, the dynamics of the model seem to mimic the way memories are subjectively recalled. Presenting a cue or fragment of information is often enough to elicit more detailed information that was originally associated with it. Just as viewing a fragment of a picture can often evoke a complete image from memory, so too can a few active neurons cause the model to complete the pattern that was originally stored (Hopfield JJ, 1982). Third, these models allowed several patterns to be stored within the same network, a property that clearly would be useful in real brains. Because of their plausibility and impressive emergent properties, these simple network models caused many researchers to expect that local circuits in mammalian cortex would store memories in the form of attractors.

3 Repeating Activity Patterns and the Influence of Attractor Models

Is there any evidence for attractors, as described by the above model, in physiological recordings? In order to evaluate this form of the attractor hypothesis correctly, several experimental requirements need to be met. First, since activity is hypothesized to be distributed among many neurons, multiple recording sites are needed. Second, network activity must visit some configurations more often than would be expected by chance. If all network activity configurations were visited equally often, there would be no attractors. But if some configurations are visited repeatedly and more often than would be expected by chance, then there is at least a possibility that attractors exist in the network. Third, when the network is in a configuration that is close to one of its stored configurations, network activity should become progressively more similar to the stored configuration over time. This indicates that the network is being drawn into the attractor. Fourth, these repeatable configurations need to be stable over time if they are to serve as a substrate for information storage.
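The second requirement, that configurations repeat more often than chance, is usually checked against surrogate data. The sketch below illustrates one common approach on made-up toy data (all values here are illustrative assumptions): count the exact repeats of binary activity configurations, then compare against surrogates in which each unit's activity is shuffled independently in time, preserving firing rates while destroying coordination.

```python
import numpy as np

rng = np.random.default_rng(0)

def repeat_count(activity):
    """Count time bins whose exact configuration occurs in more than one bin."""
    _, counts = np.unique(activity, axis=0, return_counts=True)
    return int(np.sum(counts[counts > 1]))

# Toy data: 1000 time bins x 10 units at ~10% activity, with one
# hypothetical "stored" configuration re-inserted every 50 bins.
activity = (rng.random((1000, 10)) < 0.1).astype(int)
activity[::50] = [1, 0, 1, 1, 0, 0, 0, 1, 0, 0]

observed = repeat_count(activity)

# Surrogates: shuffle each unit's column independently in time. This
# preserves every unit's firing rate but destroys coordination.
surrogate_counts = []
for _ in range(200):
    shuffled = np.column_stack([rng.permutation(col) for col in activity.T])
    surrogate_counts.append(repeat_count(shuffled))

p = float(np.mean([c >= observed for c in surrogate_counts]))
print(f"observed repeats: {observed}, "
      f"surrogate mean: {np.mean(surrogate_counts):.1f}, p ~ {p:.3f}")
```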

Among the first recordings that fulfilled some of these requirements were those from Abeles and colleagues, who observed temporally precise spike sequences in primate cortex (Abeles M et al., 1993; Ben-Shaul Y et al., 2004). They reported that spike triplets, which they later called "synfire chains," reproducibly appeared while monkeys were engaged in particular stages of cognitive tasks. In addition, these sequences occurred more often than would be expected by chance, under the assumption that spike trains can be modeled as a random Poisson process. Although some researchers later questioned whether the synfire chains reported by Abeles and colleagues were truly statistically significant (Baker SN and RN Lemon, 2000; Oram MW et al., 1999), other groups gradually began to report repeating activity patterns as well. Recordings from rat hippocampus showed that distributed patterns of neurons became active as rats made their way through a maze (Wilson MA and BL McNaughton, 1993). Whenever a rat revisited a portion of the maze in the same way, a similar pattern of activity would appear (Brown EN et al., 1998; Skaggs WE et al., 1996). These similarities were statistically significant, and suggested that the activity configuration somehow represented spatial or cue information. Interestingly, these patterns were later found to significantly reappear during subsequent, but not previous, sleep sessions (Lee AK and MA Wilson, 2002; Louie K and MA Wilson, 2001; Nadasdy Z et al., 1999). This suggested that the activity patterns encoded the previous day's maze running session and were being consolidated during sleep (Wilson MA, 2002), a hypothesis that is still somewhat disputed. Less controversially, these data indicated that the reproducible activity patterns had long-term stability and could serve as a substrate for information storage (Lee AK and MA Wilson, 2004). Reproducible activity patterns were also found in the cortex-like structure HVC (high vocal center) of song birds during song learning and production (Hahnloser RH et al., 2002). The temporal precision of these activity patterns was astonishingly high, at 1 millisecond or less (Chi Z and D Margoliash, 2001). Activity patterns observed in song birds also had long-term stability and replayed during sleep (Dave AS and D Margoliash, 2000; Deregnaucourt S et al., 2005), indicating that they too could serve to store information. Reproducible activity patterns have now been found in a variety of in vivo systems, ranging from visual cortex (Kenet T et al., 2003) and the olfactory bulb (Spors H and A Grinvald, 2002) to the brain stem (Lindsey BG et al., 1997). Collectively, these data demonstrate that distributed, reproducible activity patterns with long-term stability exist in the intact brain.
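As a back-of-envelope illustration of the Poisson chance expectation invoked in the synfire-chain work above, the sketch below computes how many precise triplets (unit B lagging unit A by one fixed delay, unit C by another, each within ±1 ms) three independent Poisson-firing units would produce by accident. The rates and recording duration are assumed values, not figures from those studies.

```python
# Expected number of chance triplets from three independent Poisson units.
# All parameter values below are assumptions for illustration only.
rate_a = rate_b = rate_c = 5.0   # firing rates, spikes per second
duration_s = 1800.0              # length of recording, seconds
eps_s = 0.001                    # +/- 1 ms timing precision

# Each A spike opens a 2*eps window for B and one for C; B and C must
# each land in theirs, independently.
expected = (rate_a * duration_s) * (rate_b * 2 * eps_s) * (rate_c * 2 * eps_s)
print(f"expected chance triplets: {expected:.2f}")   # ~0.9 for these values
```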

But did these patterns arise because many different brain areas were acting together? It remained to be seen whether isolated portions of brain could sustain reproducible activity patterns. Yuste and colleagues used calcium dyes and a scanning two-photon microscope to image activity from hundreds of sites in acute slices of mouse visual cortex (Cossart R et al., 2003; Mao BQ et al., 2001). They reported that neurons became active in particular patterns that reoccurred more often than would be expected by chance. Because the microscope had to scan over so many neurons, it took about one second before the scanning laser could return to a given neuron to image it again. Thus, they were able to image activity over the cortical slice network at a temporal resolution of about 1 second. This exciting work demonstrated that neocortical tissue in isolation spontaneously produced repeatable activity patterns, and raised the possibility that local circuit connectivity, to the extent that it was preserved in the slice, was sufficient to support these patterns. Further evidence that local networks were enough to generate attractor-like patterns came from work with neural cultures grown on 60-channel multielectrode arrays. Using cultured slices prepared from rat cortex, Beggs and Plenz (Beggs JM and D Plenz, 2004) showed that reproducible activity patterns had a temporal precision of 4 milliseconds and were stable for as long as 10 hours (Fig. 2). While these cultures were prepared from slices that preserved some of the intrinsic cortical circuitry, they were grown for three weeks in isolation from sensory inputs. Thus, the activity patterns that arose were very likely to have been the result of self-organizing mechanisms (e.g., Hebbian rules, homeostatic regulation of firing rate) present at the neuronal and synaptic levels. As even further evidence that repeating activity patterns can result from self-organization, Ben-Jacob and colleagues (Segev R et al., 2004) have demonstrated that dissociated cultures produce such patterns as well. These cultures are prepared from suspensions of individual neurons that are then poured over an electrode array and grown in an incubator for several weeks. As a result, these preparations do not preserve intrinsic cortical circuitry at all, even though they may match the proportions of excitatory and inhibitory cells found in cortex. Collectively, this work indicates that long-lasting, temporally precise, reproducible activity patterns can readily form in isolated cortical tissue.

Fig. 2. Reproducible activity patterns from an isolated cortical network. A, On the left is an organotypic culture from rat somatosensory cortex (containing ~50,000 neurons) pictured on a 60-channel multielectrode array at 2 days in vitro. Electrodes are seen as small black dots at the end of straight lines. Electrode tips are 30 µm in diameter and the inter-electrode distance is 200 µm. B, On the right is the local field potential signal recorded from one electrode, low-pass filtered at 50 Hz. The dashed line is a threshold set at -3 standard deviations. The sizes of the dots represent the magnitudes of the suprathreshold field potentials. C, The raster plot of activity from all electrodes is shown for one minute. Columns of dots indicate nearly synchronous bursts of activity on many electrodes. Activity bursts are separated by quiescent intervals of several seconds. D, The period of suprathreshold activity near 50 seconds is binned at 4 ms, showing that activity is not actually synchronous at higher temporal resolution. Activity here spans three bins and is preceded and terminated by bins with no activity. E, The activity shown in D is presented as a spatio-temporal pattern on the multielectrode array grid. In this case, a pattern of three frames is shown. F, Six cases of spatio-temporal activity patterns are shown that were significantly repeating in a one hour period. Here active electrodes are shown as darkened squares on the electrode grid, where darker squares indicate larger amplitude signals and lighter squares indicate smaller amplitudes. Next to each pair of patterns is the time, in minutes, between observations of the patterns. Since the cultures were grown in isolation from sensory inputs, these results indicate that reproducible activity patterns can be generated by cortical circuits through self-organizing mechanisms. Figures adapted from Beggs and Plenz, 2003, 2004.

The fact that even dissociated cultures can generate these patterns suggests that Hebbian rules and recurrent connectivity may be sufficient conditions for stable activity patterns.
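The sketch below illustrates the kind of binning and pattern extraction described for the multielectrode data (Fig. 2), using assumed toy event times rather than the published analysis code: suprathreshold events are binned at 4 ms, bursts are cut apart at empty bins, and each burst becomes a spatio-temporal pattern whose exact repeats can then be counted.

```python
import numpy as np
from collections import Counter

BIN_MS = 4.0   # bin width, as in the Beggs and Plenz analysis

def bursts_as_patterns(event_times_ms, n_channels, duration_ms):
    """Bin events at 4 ms and cut the raster into bursts at empty bins."""
    n_bins = int(np.ceil(duration_ms / BIN_MS))
    raster = np.zeros((n_bins, n_channels), dtype=int)
    for ch, times in enumerate(event_times_ms):
        for t in times:
            raster[int(t // BIN_MS), ch] = 1
    patterns, current = [], []
    for frame in raster:
        if frame.any():
            current.append(tuple(frame))      # burst continues
        elif current:
            patterns.append(tuple(current))   # an empty bin ends the burst
            current = []
    if current:
        patterns.append(tuple(current))
    return patterns

# Toy example: two channels produce the same three-frame pattern twice.
events = [[0.0, 8.0, 1000.0, 1008.0],   # channel 0 event times, ms
          [4.0, 1004.0]]                # channel 1 event times, ms
patterns = bursts_as_patterns(events, n_channels=2, duration_ms=2000.0)
print(Counter(patterns).most_common())  # the repeated pattern has count 2
```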

So far, these findings seem consistent with the simple attractor neural network model described previously. But does activity in these networks show evidence of becoming progressively more like a stored pattern? Is the network configuration being drawn into an attractor? Interestingly, very few laboratories sought to examine the dynamics of activity patterns in these systems. Because of this, the attractor hypothesis in its fullest form was not truly evaluated by the work described above.

Recently, Wills and colleagues (Wills TJ et al., 2005) have made progress on this issue with an ingenious set of experiments performed in awake, behaving rats. They implanted multiple electrodes in the hippocampus and then placed rats in an arena with a base that could be progressively morphed from a circle to a square. Consistent with previous studies, they found that a particular activity pattern of firing in hippocampal neurons occurred when the rat was in the circular arena, and that this pattern was different from the pattern that occurred when the rat was placed in the square arena. After testing that these representations were stable, they then changed the shape of the arena to be like that of a square with rounded edges, intermediate between a circle and a square. When the rat was placed in this new hybrid arena, the activity pattern that initially appeared on the electrodes was not like that seen from the circular or the square arena. Over two minutes, though, the activity pattern progressively became more like either that seen from the circular arena or that seen from the square arena. This is exactly what would be expected if the network state were being drawn into an attractor. They also showed that slight morphs away from the circular shape usually resulted in network activity becoming like the pattern seen from the purely circular arena; similar effects were shown for slight morphs away from the square shape. These data were consistent with the basin of attraction seen in the simple network model presented earlier.
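A simple way to quantify the convergence Wills and colleagues observed is to correlate the population activity vector in each time window with the stored "circle" and "square" template patterns and watch one correlation pull away over time. The sketch below does this on synthetic data; it is a hypothetical stand-in for their analysis, not a reproduction of it.

```python
import numpy as np

def template_correlations(activity, circle, square):
    """Correlate each population vector (row) with the two templates."""
    return [(np.corrcoef(row, circle)[0, 1], np.corrcoef(row, square)[0, 1])
            for row in activity]

rng = np.random.default_rng(1)
circle = rng.random(20)   # hypothetical stored "circle" firing pattern
square = rng.random(20)   # hypothetical stored "square" firing pattern

# Toy trajectory: starts ambiguous, drifts toward the circle pattern.
weights = np.linspace(0.5, 1.0, 6)
activity = np.array([w * circle + (1 - w) * square
                     + 0.05 * rng.standard_normal(20) for w in weights])

for r_circle, r_square in template_correlations(activity, circle, square):
    print(f"r(circle) = {r_circle:+.2f}   r(square) = {r_square:+.2f}")
```

In data drawn into the "circle" attractor, r(circle) should climb toward 1 while r(square) falls away, which is the signature the morph experiment revealed.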

Although the results of this impressive experiment qualitatively agreed with all of the major features of the attractor network model, several areas still remained to be explored. It was not clear whether the dynamics seen in this system was caused by the circuitry within the hippocampus or by the cooperative action of other brain areas that project to the hippocampus. With convergence to an attractor state taking about two minutes, it seemed likely that other brain areas were involved. It would also be desirable to go beyond a qualitative description and to quantify the dynamics more precisely.

4 Tools for Quantifying Local Network Dynamics

How can the dynamics of neural networks be quantified? Fortunately, methods from dynamical systems theory have been developed and successfully applied to electronic circuits (Huberman BA et al., 1980), driven pendulums (Baker GL and JP Gollub, 1996), chemical reactions (Kilgore MH et al., 1981) and a host of other phenomena (Crutchfield JP et al., 1986; Nicolis G and I Prigogine, 1989). With some changes, these methods can also be used to describe both simulated and living neural networks. In this section, we will briefly describe some of these tools and note how they could be used to sharpen the description of network dynamics that was qualitatively outlined in the previous section. For a more detailed treatment of this topic, the reader is referred to the chapter by Jirsa and Breakspear in this handbook.

We will assume that we wish to describe the dynamics of a network composed of m neurons. Let x_i represent a variable of interest, for example the voltage, of neuron i. The configuration of activity in the network at time t then specifies a location x(t) in m-dimensional state space:

\[
\mathbf{x}(t) = \bigl(x_1(t),\, x_2(t),\, \ldots,\, x_m(t)\bigr)
\]

In these coordinates, we can plot network activity at times t+1, t+2, t+3, ..., t+n, and we can construct a trajectory (also called an orbit) by linking these locations in state space (also called phase space), as shown in Fig. 3.
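In code, this state-space construction is just a matrix whose rows are the network configurations x(t). A minimal sketch with made-up voltages for m = 3 neurons follows; the signals are illustrative, not recordings.

```python
import numpy as np

m, n_steps = 3, 100
t = np.arange(n_steps)

# Made-up "voltages" for m = 3 neurons: one column per neuron, one row
# per time step, so row i is the state-space point x(t_i).
V = np.column_stack([np.sin(0.10 * t),
                     np.cos(0.10 * t),
                     np.sin(0.05 * t)])

# The trajectory (orbit) is the ordered sequence of rows of V; the step
# lengths below measure how far the state moves per time step.
step_lengths = np.linalg.norm(np.diff(V, axis=0), axis=1)
print(f"{n_steps} points in {m}-D state space; "
      f"mean step length {step_lengths.mean():.3f}")
```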

Describing the dynamics of the network amounts to describing how trajectories evolve over time. Recall that in the attractor network model, the network state will evolve toward a stored configuration. Trajectories starting within the same basin of attraction will therefore tend to flow toward each other over time, minimizing the distance between them. So to explore the attractor network hypothesis, we will need to quantify distances in state space.
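As a preview of that measurement, the sketch below tracks the Euclidean separation between two state-space trajectories at each time step, using a toy contraction x → 0.9x as an assumed stand-in for a network relaxing into an attractor; trajectories starting in the same basin show shrinking separation.

```python
import numpy as np

def separation(traj_a, traj_b):
    """Euclidean distance between two trajectories at each time step."""
    return np.linalg.norm(traj_a - traj_b, axis=1)

rng = np.random.default_rng(2)
m, n_steps = 5, 10
x_a = rng.standard_normal(m)   # two different initial configurations
x_b = rng.standard_normal(m)

traj_a, traj_b = [], []
for _ in range(n_steps):
    traj_a.append(x_a.copy())
    traj_b.append(x_b.copy())
    x_a, x_b = 0.9 * x_a, 0.9 * x_b   # toy contracting dynamics

d = separation(np.array(traj_a), np.array(traj_b))
print(np.round(d, 3))   # the distances shrink toward zero
```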

