
Fig. 1. Coherence analysis for a healthy human subject in sleep stage 2. Each box in the matrix is a graph of a coherence function connecting the channel marked below the relevant column and the channel marked left of the relevant row. Frequency runs along the horizontal axes (0-30 Hz), the function value along the vertical axes (0-1). Ordinary coherences are plotted above the diagonal, multiple coherences on the diagonal and partial coherences below the diagonal of the matrix of graphs. Reprinted from (Kaminski et al. 1997), © 1997, with permission from the International Federation of Clinical Neurophysiology.

Multiple coherences (on the diagonal) are high, indicating strong relations of every channel with the rest of the set. While ordinary coherences (above the diagonal) are quite large and appear in certain groups, partial coherences have significant values only for a few specific connections. Closer topographical analysis of those results revealed that the value of ordinary coherence depends mostly on the distance between electrodes. Partial coherences, on the other hand, mainly connect neighboring sites; for more distant locations they usually decrease quickly.

Coherence analysis is a popular and valuable tool in multichannel data analysis. When properly applied, it can quickly give insight into the pattern of connections. Coherence results can be combined with other methods to obtain precise information about the network properties of the investigated system.

4 Parametric Modeling

In order to analyze data in the frequency domain, spectral quantities (e.g. X(f), S(f), etc.) have to be estimated. As mentioned before, one very popular method is the Fourier transform, which gained popularity due to its ease of use. The Fast Fourier Transform (FFT) algorithm evaluates the spectral power of a signal in a fast and effective way. However, there are competing methods of spectral estimation, based on a parametric description of time series. In this approach a stochastic model of data generation is assumed. The model is fitted to the data, resulting in a set of model parameters. The whole analysis is then conducted on the model parameters, not on the data samples.

The parametric approach has certain advantages over Fourier analysis (which belongs to the class of nonparametric methods, applied directly to the data). Parametric spectral estimates perform much better than nonparametric ones when applied to short data segments. In the Fourier approach the assumption is made that the time series are infinite or periodic. In practice, a finite and stochastic data epoch has to be analyzed. Finite stochastic datasets are then expressed as a multiplication of the signal by a window function, zeroing signal values outside the window. The window function induces distortions in the estimated spectra known as sidelobes. In the parametric approach, the validity of the model over the whole time scale is assumed; there is no need to introduce a window function, and parametric spectra are smooth and free of sidelobe effects. More detailed discussions of these problems can be found in theoretical signal analysis textbooks (Kay 1988; Marple 1987); comparisons between Fourier methods and linear models can be found in (Isaksson et al. 1981; Blinowska 1994; Spyers-Ashby et al. 1998).
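The contrast between the two approaches can be illustrated on a short simulated segment. The following sketch (plain NumPy; the process and its coefficients are purely illustrative, not taken from the chapter) fits an AR(2) model by solving the Yule-Walker equations and computes the resulting smooth parametric spectrum next to the raw periodogram:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a short AR(2) segment with a spectral peak
# (illustrative coefficients, not from the chapter).
n = 256                       # deliberately short data segment
a1, a2 = 1.5, -0.8            # AR(2) coefficients -> resonance
x = np.zeros(n)
for t in range(2, n):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.standard_normal()

# Nonparametric estimate: the periodogram (squared FFT magnitude),
# noisy and distorted by windowing/sidelobe effects.
per = np.abs(np.fft.rfft(x)) ** 2 / n

# Parametric estimate: fit AR(2) by solving the Yule-Walker equations
# built from the biased autocovariance estimates r(0), r(1), r(2).
r = np.array([x[: n - k] @ x[k:] for k in range(3)]) / n
R = np.array([[r[0], r[1]], [r[1], r[0]]])
a_hat = np.linalg.solve(R, r[1:])        # estimated a1, a2
v_hat = r[0] - a_hat @ r[1:]             # input noise variance

freqs = np.fft.rfftfreq(n)               # cycles/sample
z = np.exp(-2j * np.pi * freqs)
A = 1 - a_hat[0] * z - a_hat[1] * z ** 2
S_ar = v_hat / np.abs(A) ** 2            # smooth, sidelobe-free spectrum
```

Even for this short epoch the AR spectrum S_ar is a smooth curve around the true resonance, while the periodogram fluctuates strongly from bin to bin.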

In biomedical data analysis practice two models are of primary importance: autoregressive (AR) and autoregressive-moving average (ARMA) models.

The multivariate autoregressive model (MVAR, MAR, VAR) is constructed by expressing X(t)—the value of a (multivariate) process X at time t—by its past values taken with certain coefficient matrices A(j) and a (multivariate) white noise value E(t).

X(t) = (X1(t), X2(t), ..., Xk(t))^T,  E(t) = (E1(t), E2(t), ..., Ek(t))^T

X(t) = Σ_{j=1}^{p} A(j) X(t - j) + E(t)   (8)

The A coefficients are the model parameters. The number p (of past samples taken into account) is called the model order.

Note that for given N time points of a k-variate process X, we must estimate pk^2 parameters (p matrices A of size k × k) from Nk data points.
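A minimal sketch of such an estimation is given below. The function name and the least-squares approach are illustrative (Yule-Walker and Levinson-type recursions are also common); the pk^2 coefficients are obtained by regressing each sample on the p previous samples of all k channels:

```python
import numpy as np

def fit_mvar(X, p):
    """Least-squares fit of a k-variate AR(p) model (illustrative sketch).

    X : array of shape (N, k), N time points of a k-variate process.
    Returns A of shape (p, k, k), the coefficient matrices A(1)..A(p),
    and V, the residual (input noise) covariance matrix.
    """
    N, k = X.shape
    # Regressors: for each predicted sample, stack the p previous samples.
    Z = np.hstack([X[p - j - 1 : N - j - 1] for j in range(p)])  # (N-p, p*k)
    Y = X[p:]                                                    # (N-p, k)
    coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)                 # (p*k, k)
    # Reorder the flat coefficient block into p matrices of size k x k.
    A = coef.T.reshape(k, p, k).transpose(1, 0, 2)
    resid = Y - Z @ coef
    V = resid.T @ resid / (N - p)                                # noise covariance
    return A, V
```

On a few thousand samples of a simulated low-order system this typically recovers the generating coefficient matrices to within a few percent.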

Assuming Ã(0) = I (the identity matrix) and Ã(j) = −A(j) for j = 1, ..., p, (8) can be rewritten in the form:

Σ_{j=0}^{p} Ã(j) X(t - j) = E(t)   (9)

After transforming (9) into the frequency domain we obtain (Marple 1987):

E(f) = Ã(f) X(f),  X(f) = Ã^{-1}(f) E(f) = H(f) E(f)

where Ã(f) = Σ_{j=0}^{p} Ã(j) e^{-2πi f Δt j}   (10)

(Δt is the data sampling interval). This equation leads to the observation that the signal in the frequency domain X(f) can be expressed as a product of H(f) and the white noise transform E(f). Because the spectral power of white noise is flat over frequency, the information about the spectral properties of the process is contained in the matrix H. This matrix is called the transfer matrix of the system.

The power spectrum of the signal is then given by

S(f) = X(f)X*(f) = H(f)E(f)E*(f)H*(f) = H(f)VH*(f)   (11)

where V denotes the input noise variance matrix (not dependent on frequency). The matrix V is evaluated from the data during the model fitting.
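Given the fitted coefficient matrices and V, both H(f) and S(f) can be evaluated directly. The sketch below (illustrative names; frequencies in cycles per sample when dt = 1) builds Ã(f), inverts it to get the transfer matrix, and forms the spectral matrix of Eq. (11):

```python
import numpy as np

def spectral_matrices(A, V, freqs, dt=1.0):
    """Transfer matrix H(f) and spectral matrix S(f) = H V H* of an
    MVAR model (illustrative sketch).

    A : array (p, k, k) of coefficient matrices A(1)..A(p).
    V : (k, k) input noise covariance matrix.
    """
    p, k, _ = A.shape
    H = np.empty((len(freqs), k, k), dtype=complex)
    S = np.empty_like(H)
    for n, f in enumerate(freqs):
        # A~(f) = I - sum_j A(j) exp(-2*pi*i*f*dt*j)
        Af = np.eye(k, dtype=complex)
        for j in range(1, p + 1):
            Af -= A[j - 1] * np.exp(-2j * np.pi * f * dt * j)
        H[n] = np.linalg.inv(Af)            # transfer matrix at f
        S[n] = H[n] @ V @ H[n].conj().T     # spectral matrix, Eq. (11)
    return H, S
```

As a sanity check, for a univariate AR(1) process with coefficient 0.5 and unit noise variance, S(0) = 1/(1 − 0.5)^2 = 4.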

The autoregressive-moving average (ARMA) model of a time series is described by:

X(t) = Σ_{j=1}^{p} A(j) X(t - j) + Σ_{j=1}^{q} B(j) E(t - j) + E(t)   (12)

where the B(j) are parameters additional to those of the AR model; they are called the moving-average (MA) part. The ARMA model can be viewed as an extension of the AR model. Although the ARMA model is more universal than the AR model, it is rarely used in biomedical signal analysis. One reason is that the ARMA model parameter estimation procedure is more complicated than the algorithms for AR model fitting. It often starts from an estimation of the AR part; the MA parameters B are then estimated separately. Second, it can be shown that the spectrum of the AR model can be fitted especially well to signals consisting of periodic components embedded in noise (Franaszczuk and Blinowska 1985; Marple 1987). The rhythmic components are represented by peaks in the spectrum. Biomedical signals are in general of this type. A model with the B parameters can, in addition to modeling frequency peaks, describe dips in the spectrum. However, this signal feature is not typical for biomedical signals, so the ARMA model is seldom used.
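The peak/dip distinction can be made concrete with a short spectral computation. In the sketch below (NumPy; all coefficients chosen for illustration only) an AR(2) denominator produces a spectral peak, while adding an MA numerator with zeros on the unit circle carves a dip at f = 0.25 cycles/sample:

```python
import numpy as np

freqs = np.linspace(0.0, 0.5, 257)           # cycles/sample
z = np.exp(-2j * np.pi * freqs)

# AR(2) denominator with a resonance -> spectral peak.
A = 1 - 1.3 * z + 0.8 * z ** 2
S_ar = 1.0 / np.abs(A) ** 2                  # pure AR spectrum (peak only)

# MA numerator with zeros on the unit circle at f = +/-0.25
# -> a spectral dip that the AR model alone cannot produce.
B = 1 - 2 * np.cos(2 * np.pi * 0.25) * z + z ** 2
S_arma = np.abs(B) ** 2 / np.abs(A) ** 2     # ARMA spectrum (peak and dip)
```

The AR spectrum peaks at the resonance but never touches zero; the ARMA spectrum additionally vanishes at the MA zero frequency, which is the kind of feature rarely needed for rhythmic biomedical signals.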

Attempts to utilize AR models in biomedical signal processing date back to the 1960s (Achermann et al. 1994; Fenwick et al. 1969; Zetterberg 1969; Zetterberg 1973; Gersch 1970; Fenwick et al. 1971). AR modeling became popular as computers became more widely accessible. Autoregressive models, especially in the multivariate version, are now quite often used, in particular in EEG and MEG analysis. Overviews of linear modeling in application to biomedical signals can be found in the literature (Jansen 1985; Kemp and Lopes da Silva 1991; Kelly et al. 1997; Kaminski and Liang 2005).

5 Causal Analysis

5.1 Defining Causal Estimators

Proper analysis of cross-relations in a multivariate dataset can provide information about causal relations between time series; for instance, sources of a signal can be identified. Before analyzing causal influences, causality for time series must be defined. The definition given by Granger (1969), formulated originally for economic time series, has recently become popular in biomedical data analysis. It is expressed in terms of linear models of time series and can easily be applied to a parametric description of data.

Granger's original definition is based on the predictability of time series. Let us assume that we try to predict the value of a process X1 at time t using p (an arbitrary number of) past values of that process:

X1(t) = Σ_{j=1}^{p} A11(j) X1(t - j) + e(t)   (13)

We get a prediction error e. If the prediction can be improved by adding some (q) past values of another time series X2, then we call X2 causal for the X1 series.

X1(t) = Σ_{j=1}^{p} A'11(j) X1(t - j) + Σ_{j=1}^{q} A'12(j) X2(t - j) + e'(t)   (14)

The improvement of the prediction should be understood in a statistical sense, measured for instance by comparing the variances of the errors e and e'.

This definition can be extended to an arbitrary number (k) of signals. In that case we predict the signal Xi(t) using all the other available signals. That is to say, if a signal Xm is causal for Xi, the prediction error variance should be compared in two situations: when the signal Xm is either included in or excluded from the prediction:

Xi(t) = Σ_{j=1}^{k} Σ_{l=1}^{p} A'ij(l) Xj(t - l) + e'i(t)   (15)
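The variance-comparison test above can be sketched in a few lines. In the simulation below (illustrative coefficients; cf. Eq. (14) with p = q = 1) x2 drives x1 with a one-sample delay, and the residual variance of the prediction of x1 drops once the past of x2 is included:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated pair of signals in which x2 drives x1 with a one-sample
# delay, but not vice versa (illustrative coefficients).
N = 4000
x1 = np.zeros(N)
x2 = np.zeros(N)
for t in range(1, N):
    x2[t] = 0.6 * x2[t - 1] + rng.standard_normal()
    x1[t] = 0.4 * x1[t - 1] + 0.5 * x2[t - 1] + rng.standard_normal()

def prediction_var(y, regressors):
    """Residual variance of the least-squares prediction of y."""
    Z = np.column_stack(regressors)
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return np.var(y - Z @ coef)

# Predict x1(t) from its own past alone, then from its own past
# plus the past of x2.
v_own = prediction_var(x1[1:], [x1[:-1]])
v_both = prediction_var(x1[1:], [x1[:-1], x2[:-1]])
# v_both is clearly smaller than v_own, so x2 is Granger-causal for x1;
# repeating the comparison in the opposite direction shows essentially
# no improvement, so x1 is not causal for x2.
```

In practice the significance of the variance reduction is assessed with a statistical test rather than by simple inspection.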

Historically, there were several attempts to define various causal measures. Although the phase of coherence seems to be a good candidate for such a measure, in practice the ambiguity of phase values (which are defined modulo 2π) makes it difficult to utilize. Among the proposed functions there were: various modifications and decompositions of coherences, like directed coherence (Baccala and Sameshima 1998; Baccala et al. 1998; Saito and Harashima 1981), the analysis of feedback loops approach (Caines and Chan 1975; Gevers and Anderson 1981; Schnider et al. 1989), information theory measures (Kamitake et al. 1984; Saito and Harashima 1981; Gersch and Tharp 1976; Liang et al. 2001) and linear and nonlinear extensions of various versions of the correlation function (Chen et al. 2004; Freiwald et al. 1999; Chavez et al. 2003).

In this chapter we will focus on methods based on and taking advantage of a parametric description of time series. Although applications of parametric (AR) modeling in causal relations analysis of biomedical data appeared as early as the 1960s and 1970s (Whittle 1963; Gersch and Yonemoto 1977; Gersch 1972; Blinowska et al. 1981), it was often considered for bivariate systems rather than for an arbitrary number of signals. A truly multichannel measure, the Directed Transfer Function (DTF), was proposed in 1991 (Kaminski and Blinowska 1991). The DTF operates in the frequency domain. Its construction is based on the elements of the transfer matrix H(f) of an AR model fitted to the whole multivariate system. The element Hij(f) can be related to the "amplitude" of the connection between input j and output i at frequency f. In the simplest (non-normalized) form the DTF is defined as

θ²ij(f) = |Hij(f)|²   (16)

Alternatively it can be calculated in a normalized form (Kaminski and Blinowska 1991):

γ²ij(f) = |Hij(f)|² / Σ_{m=1}^{k} |Him(f)|²   (17)

representing the ratio of the inflow to channel i from channel j to all the inflows to channel i. The choice between the normalized and the non-normalized version of the DTF should be made according to the particular application.

The DTF, a measure constructed from elements of the transfer matrix H, shows the total transmission between channels j and i, summed over all paths of the transmission. To indicate direct causal relations between channels a partial causal measure is needed. Partial Directed Coherence (PDC) was proposed by Baccalá and Sameshima (Baccalá and Sameshima 2001; Sameshima and Baccalá 1999). The PDC is constructed from Aij(f), the elements of the Fourier transform of the matrices of model coefficients A(j):

Pij(f) = Aij(f) / sqrt( Σ_{m=1}^{k} |Amj(f)|² )   (18)
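Both measures fall out of the fitted model almost for free, since Ã(f) and H(f) = Ã^{-1}(f) are already needed for the spectrum. The sketch below (illustrative function name; frequencies in cycles per sample) computes the non-normalized DTF of Eq. (16), a row-normalized DTF, and a column-normalized PDC:

```python
import numpy as np

def dtf_pdc(A, freqs, dt=1.0):
    """Non-normalized DTF, normalized DTF and PDC of an MVAR model
    (illustrative sketch).

    A : array (p, k, k) of coefficient matrices A(1)..A(p).
    Returns three arrays of shape (len(freqs), k, k).
    """
    p, k, _ = A.shape
    nf = len(freqs)
    dtf2 = np.empty((nf, k, k))
    ndtf2 = np.empty_like(dtf2)
    pdc = np.empty_like(dtf2)
    for n, f in enumerate(freqs):
        # A~(f) = I - sum_j A(j) exp(-2*pi*i*f*dt*j)
        Af = np.eye(k, dtype=complex)
        for j in range(1, p + 1):
            Af -= A[j - 1] * np.exp(-2j * np.pi * f * dt * j)
        H = np.linalg.inv(Af)                 # transfer matrix
        dtf2[n] = np.abs(H) ** 2              # non-normalized DTF, Eq. (16)
        # normalized DTF: divide each row by the sum of inflows to channel i
        ndtf2[n] = dtf2[n] / dtf2[n].sum(axis=1, keepdims=True)
        # PDC: normalize |A~_ij(f)| by the norm of column j of A~(f)
        pdc[n] = np.abs(Af) / np.sqrt((np.abs(Af) ** 2).sum(axis=0, keepdims=True))
    return dtf2, ndtf2, pdc
```

For a bivariate model in which only channel 2 drives channel 1, both the DTF and the PDC in the 1 → 2 direction are zero at every frequency, while the 2 → 1 entries are nonzero.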
