Fig. 7. The model captures important features of the data. A, When the probability of occurrence is plotted against avalanche size, a nearly straight line is formed in a log-log graph. This line indicates that the relationship between probability and size can be described by a power law: P(S) = S^(−α), where P is the probability of observing an avalanche of size S, and S is the total number of electrodes activated in the avalanche. For a critical branching process, the exponent of the power law, α, is predicted to be −3/2 (dashed line). Filled circles are from 5 hours of spontaneous activity in an acute slice, while open circles show results from the model when the branching parameter σ = 1. Note that the power law begins to cut off near S = 35, since the number of electrodes in the array is 60. B, Reproducible patterns of activity generated by the model. Each large white square represents the pattern of active electrodes on the array at a given 4 ms time step. Active electrodes are shown as small black squares. Patterns shown are all five time steps long. Note that patterns within groups 1 through 3 are not exactly the same, even though all groups were statistically significant. C, Reproducible patterns generated by acute cortical slices. Note the general similarity to patterns produced by the model. Data from acute cortical slices are generally similar to data produced by organotypic cortical cultures (compare to patterns shown in Fig. 2D), suggesting common principles of operation. Because the model reproduces general features of the data, it may serve as a useful tool for exploring links between connectivity and dynamics.
1987; Yeomans JM, 1992), forest fire sizes (Malamud BD et al., 1998), earthquake magnitudes (Gutenberg B and CF Richter, 1941) and sizes of simulated sand pile avalanches (Paczuski M et al., 1996). Since the sizes of activity patterns from cortical cultures also fit a power law, we called them "neuronal avalanches." Power law distributions also suggest, but do not prove, that a system is operating near a critical point (Bak P, 1996; Jensen HJ, 1998). The power law is a consequence of the branching parameter being close to unity. When σ = 1, activity propagates in a nearly sustained manner but eventually dies out because transmission is stochastic. Another aspect of the data that can be reproduced by this simple model is the reproducible activity patterns themselves. As shown in Fig. 7B, the patterns produced by the model are qualitatively similar to those produced by cortical slices (see Fig. 2D for similar patterns produced by cultures). These patterns are caused by inequalities in the connection strengths of the model. Although each transmission is probabilistic, there will be some preferred patterns of activity in the network because some connections are stronger than others. Because this parsimonious model qualitatively reproduces two main features from living network data, it seems plausible that we could use the model to predict how connectivity will influence dynamics in real neural networks.
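The critical branching process described above can be sketched in a few lines of code. The sketch below is a minimal toy model, not the paper's exact network: it assumes 60 units (matching the electrode array), three potential descendants per unit chosen at random, and a uniform transmission probability of σ divided by the number of descendants, so that the expected number of descendants per ancestor equals σ.

```python
import random

random.seed(0)  # for reproducibility

def run_avalanche(n_units=60, sigma=1.0, n_descendants=3, rng=random):
    """Simulate one avalanche in a toy critical branching process.

    Each active unit tries to activate n_descendants randomly chosen
    units, each with probability sigma / n_descendants, so the expected
    number of descendants per ancestor is sigma (assumed parameters).
    Returns the avalanche size: total activations summed over all steps.
    """
    p = sigma / n_descendants
    active = {rng.randrange(n_units)}  # one unit starts the avalanche
    size = len(active)
    while active:
        nxt = set()
        for _ in active:
            for _ in range(n_descendants):
                if rng.random() < p:
                    nxt.add(rng.randrange(n_units))
        active = nxt
        size += len(active)
    return size

# At sigma = 1 the size distribution should roughly follow a power law
# with exponent near -3/2, up to the cutoff imposed by the finite array.
sizes = [run_avalanche() for _ in range(10000)]
```

Plotting a histogram of `sizes` on log-log axes should reproduce the qualitative shape of Fig. 7A: a near-straight line with a cutoff reflecting the finite number of units.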
How is the dynamics of the model affected by changes in the branching parameter σ? To explore this, we can tune all units in the network to have a given σ. We then let the network evolve over time from many pairs of closely spaced starting configurations. By measuring the distances between trajectories from thousands of pairs of configurations, we can estimate the Lyapunov exponent λ for the network. For σ < 1, transmission probabilities are weak and avalanches tend to die out because the average number of descendants from a given ancestor is less than one. This causes trajectories to become more similar over time, since fewer and fewer units are active and distances decrease. In this case, the dynamics is predominantly attractive and λ < 0. For σ ≈ 1, connections between units are stronger and activity is nearly sustained since the average number of descendants from a given ancestor is one. Here, distances between nearby trajectories are preserved and λ ≈ 0, indicating neutral dynamics. For σ > 1, the number of active units in the network increases with every time step, causing small distances in state space to be amplified. As a result, trajectories tend to diverge in state space. The Lyapunov exponent is λ > 0, indicating chaotic dynamics with its typical sensitive dependence on initial conditions. These results clearly suggest that the sum of connection strengths, or weights, can determine the dynamical regime of a network (Fig. 8A).
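The trajectory-pair procedure can be illustrated with a crude numerical sketch. This is not the paper's Lyapunov-exponent calculation; it assumes the same toy branching network as before (60 units, three random targets per active unit, transmission probability σ/3) and simply tracks the Hamming distance between trajectories started from configurations that differ by one active unit. Shrinking distances suggest λ < 0, growing ones λ > 0.

```python
import random

random.seed(1)

N_UNITS = 60  # match the electrode array size in the text
K = 3         # potential descendants per unit (an assumed value)

def step(state, sigma, rng=random):
    """One time step: every active unit tries to activate each of K
    randomly chosen targets with probability sigma / K, so the expected
    number of descendants per ancestor is sigma."""
    p = sigma / K
    nxt = set()
    for _ in state:
        for _ in range(K):
            if rng.random() < p:
                nxt.add(rng.randrange(N_UNITS))
    return nxt

def mean_final_distance(sigma, n_pairs=500, t_steps=8, rng=random):
    """Start pairs of trajectories from nearby configurations (differing
    by one active unit) and return their mean Hamming distance after
    t_steps. A crude proxy for the sign of the Lyapunov exponent."""
    total = 0
    for _ in range(n_pairs):
        base = {rng.randrange(N_UNITS) for _ in range(5)}
        pert = set(base)
        pert.add(rng.randrange(N_UNITS))  # perturb: flip one unit on
        a, b = base, pert
        for _ in range(t_steps):
            a, b = step(a, sigma, rng), step(b, sigma, rng)
        total += len(a ^ b)  # Hamming distance = symmetric difference
    return total / n_pairs

# Subcritical networks should pull trajectories together; supercritical
# networks should drive them apart, as described in the text.
d_sub = mean_final_distance(0.5)    # sigma < 1: attractive
d_super = mean_final_distance(1.5)  # sigma > 1: chaotic
```

Under these assumptions, `d_sub` ends up near zero because activity dies out in both trajectories, while `d_super` stays large because activity saturates and small perturbations are amplified.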
But what happens if we begin to change the distribution of weights coming from each unit? In simulated neural networks, this question was pursued by Bertschinger and Natschlager (Bertschinger N and T Natschlager, 2004), who found that dynamics could be tuned by changing the variance of a Gaussian weight distribution. They showed that small variances led to attractive dynamics while large variances led to chaotic dynamics. Inspired