Why is it useful at all to strive for formal mathematical definitions of systems? First, as described below, it allows one to pinpoint precisely what is meant by structure, function, and structure-function-relationships. Second, it allows one to predict system behavior for situations in which the system has not been observed before (see Bossel 1992 for an impressive collection of examples from biology). Third, it is the only way to fully understand how a system works and particularly, how system function could be restored if some of its components are rendered dysfunctional, e.g. by disease (Payne & Lomber 2001).

Here, we choose deterministic differential equations with time-invariant parameters as a mathematical framework; note that these are not the only possible mathematical representation of dynamic systems (see Bar-Yam 1997 for alternatives). The underlying concept, however, is quite universal: a system is defined by a set of interacting elements with altogether n time-variant properties. Each time-variant property x_i (1 ≤ i ≤ n) is called a state variable, and the n-vector x(t) of all state variables in the system is called the state vector (or simply state) of the system at time t:

x(t) = [x_1(t), ..., x_n(t)]^T    (1)

Taking an ensemble of interacting neurons as an example, the system elements would correspond to the individual neurons, each of which is represented by one or several state variables. These state variables could refer to various neurophysiological properties, e.g. postsynaptic potentials, status of ion channels, etc. This touches on an important distinction: in system construction (e.g. in engineering), the relevant state variables and their mutual dependencies are usually known; in system identification (e.g. when trying to understand a biological system), however, they are not known. This means that we always require a model of the system that represents our current hypothesis about the structure of the system and how its function emerges from that structure (the structure-function relationship, SFR).

The crucial point is that the state variables interact with each other, i.e. the evolution of each state variable depends on at least one other state variable. This mutual functional dependence between the state variables of the system is expressed in a very natural fashion by a set of ordinary differential equations that operate on the state vector:

dx/dt = f(x)    (2)

However, this description is not yet sufficient. First of all, the specific form of the dependencies f needs to be specified, i.e. the nature of the causal relations between state variables. This requires a set of parameters θ which determine the form and strength of influences between state variables. In neural systems, these parameters usually correspond to time constants or to the strengths of the connections between system elements. Second, in the case of non-autonomous systems (and these are the ones of interest to biology), we need to consider the input into the system, e.g. sensory information entering the brain. We represent the set of all m known inputs by the m-vector function u(t). Extending (2) accordingly leads to a general state equation for non-autonomous deterministic systems:

dx/dt = f(x, u, θ)    (3)
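As a concrete illustration of a state equation dx/dt = f(x, u, θ), consider a minimal sketch of a hypothetical two-state linear system with one experimentally controlled input. All names, coupling values, and the input timing below are invented for this example and do not come from the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameter set theta: intrinsic decay rates and coupling strengths.
theta = {"a11": -1.0, "a21": 0.5, "a22": -0.8, "c1": 1.0}

def u(t):
    # Known external input u(t): a box-car stimulus between t = 1 and t = 2.
    return 1.0 if 1.0 <= t <= 2.0 else 0.0

def f(t, x, theta):
    # State equation dx/dt = f(x, u, theta) for the two-state system:
    # x1 is driven directly by the input, x2 is driven by x1.
    x1, x2 = x
    dx1 = theta["a11"] * x1 + theta["c1"] * u(t)
    dx2 = theta["a21"] * x1 + theta["a22"] * x2
    return [dx1, dx2]

# Integrate from the initial state x(0) = [0, 0] over t in [0, 5].
# max_step keeps the solver from stepping over the brief box-car input.
sol = solve_ivp(f, t_span=(0.0, 5.0), y0=[0.0, 0.0], args=(theta,),
                max_step=0.01)
print(sol.y[:, -1])  # final state x(5)
```

Here the binary pattern of θ (x2 receives a connection from x1, but not vice versa) is the structure, while the resulting trajectory x(t) is the function, in the sense defined below.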

where θ is the parameter vector of the system. Such a model provides a causal description of how system dynamics results from system structure, because it describes (i) when and where external inputs enter the system and (ii) how the state changes induced by these inputs evolve in time depending on the system's structure. As explained in more detail in Sect. 3, (3) therefore provides a general form for models of effective connectivity in neural systems, i.e. the causal influences that neural units exert over one another (Friston 1994).

We have made two main assumptions to simplify the exposition. First, it is assumed that all processes in the system are deterministic and occur instantaneously. Random components (noise) and delays could be accounted for by using stochastic differential equations and delay differential equations, respectively. Second, we assume that we know the inputs that enter the system. This is a tenable assumption in neuroimaging because the inputs are experimentally controlled variables, e.g. changes in stimuli or instructions.1

1 Note that using time-invariant dependencies f_i and parameters θ is neither an assumption nor a restriction. Although the mathematical form of f_i per se is static, the use of time-varying inputs u allows for dynamic changes in which components of f_i are "activated". For example, using box-car functions that are multiplied with the different terms of a polynomial function, one can induce changes from linear to nonlinear behavior (and vice versa) over time. Also, there is no principled distinction between states and time-invariant parameters; therefore, estimating time-varying parameters can be treated as a state estimation problem.
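The footnote's box-car mechanism can be sketched in a few lines: a time-invariant f whose nonlinear term is multiplied by a box-car input behaves linearly outside the stimulation window and nonlinearly inside it. The coefficients and the window below are toy choices invented for illustration:

```python
def boxcar(t, on=1.0, off=2.0):
    # Box-car input: 1 inside the window [on, off], 0 outside.
    return 1.0 if on <= t <= off else 0.0

def f(t, x, a=-0.5, b=0.3):
    # Time-invariant dependency: outside [1, 2] the dynamics are linear
    # (dx/dt = a*x); inside, the quadratic term b*x**2 is "activated"
    # by the box-car, making the behavior nonlinear.
    return a * x + boxcar(t) * b * x**2

print(f(0.5, 1.0))  # linear regime: -0.5
print(f(1.5, 1.0))  # nonlinear term active: -0.2
```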

On the basis of the general system description provided by (3), we can now state precisely, given a particular system model, what we mean by structure, function, and structure-function relationships (see Stephan 2004 for more details):

• Structure is defined by the time-invariant components of the system, i.e. the binary nature of θ (which connections exist and which do not; see (8)) and the mathematical form of the state variable dependencies f_i.

• Function refers to those time-variant components of the system model that are conditional on its structure, i.e. x(t), but not u(t).

• The structure-function relationship (SFR) is represented by f: integrating f in time determines the temporal evolution of the system state x from time t = 0 up to a time point t, given an initial state x(0):

x(t) = x(0) + ∫_0^t f(x(τ), u(τ), θ) dτ    (4)
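The idea that integrating f from an initial state x(0) yields the trajectory x(t) = x(0) + ∫ f(x, u, θ) dτ can be sketched numerically with a forward-Euler scheme. The scalar f, the input, and the parameter value below are toy choices invented for illustration:

```python
def f(x, u, theta):
    # Toy scalar SFR: decay at rate theta, driven by the input u.
    return -theta * x + u

def integrate(x0, theta, T=5.0, dt=0.001):
    # Forward-Euler approximation of x(t) = x(0) + integral of f dtau:
    # repeatedly accumulate small increments dt * f(x, u, theta).
    x, t = x0, 0.0
    while t < T:
        u = 1.0 if t < 1.0 else 0.0  # box-car input during the first second
        x += dt * f(x, u, theta)
        t += dt
    return x

print(integrate(x0=0.0, theta=1.0))  # state after the input has decayed away
```

The same structure (the form of f and the value of θ) thus fully determines the function x(t) once the inputs u and the initial state x(0) are given.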
