Data Processing

Once data have been acquired, they need to be pre-processed. The purpose of this procedure is to remove various kinds of artefacts in order to maximize the sensitivity of the subsequent statistical analysis.

After the raw data have been reconstructed into images resembling brain slices, slice-timing correction is performed. Each slice is in fact acquired at a slightly different time, and subsequent analysis requires the data to be adjusted so that all voxels within one volume appear to have been acquired at exactly the same time. Motion correction follows: using rotation and translation, each volume is transformed so that it is aligned with all the others [39]. Often, though not always, a spatial blurring of each volume is applied, with the aim of reducing noise without significantly affecting the activation signal. Afterwards, the overall intensity level is adjusted so that all volumes have the same mean intensity (intensity normalization) [34]. The final step is the filtering of each voxel time series with linear or non-linear tools, in order to reduce low- and high-frequency noise. The data are then ready for statistical processing [35].
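As a minimal sketch of two of these steps, the following Python code applies Gaussian spatial blurring and temporal high-pass filtering to a 4D fMRI array. The array dimensions, FWHM, repetition time (TR) and cutoff frequency are illustrative assumptions, not values from the text, and real pipelines (e.g. for slice-timing and motion correction) use dedicated neuroimaging software.

```python
# Sketch of spatial smoothing and temporal filtering for 4D fMRI data.
# All parameter values are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import butter, filtfilt

def smooth_volumes(data, fwhm_vox=2.0):
    """Apply Gaussian spatial blurring to each volume independently."""
    sigma = fwhm_vox / 2.355  # convert FWHM (in voxels) to standard deviation
    out = np.empty_like(data)
    for t in range(data.shape[-1]):
        out[..., t] = gaussian_filter(data[..., t], sigma=sigma)
    return out

def highpass_timeseries(data, tr=2.0, cutoff_hz=0.01):
    """Remove slow drifts from every voxel time series (high-pass filter)."""
    b, a = butter(2, cutoff_hz, btype="high", fs=1.0 / tr)
    return filtfilt(b, a, data, axis=-1)

# Example on synthetic data: a 16 x 16 x 8 voxel grid, 100 time points.
rng = np.random.default_rng(0)
fmri = rng.normal(size=(16, 16, 8, 100))
fmri = smooth_volumes(fmri, fwhm_vox=2.0)
fmri = highpass_timeseries(fmri, tr=2.0, cutoff_hz=0.01)
```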

A detailed description of statistical analysis is beyond the scope of this paper, so only a few general points are described here, with reference to single-subject data only. Statistical analysis is carried out to determine which voxels are activated by the stimulation, and various methods may be used to compute the significance level of these activated voxels. The standard approach is a model-based method (e.g. [10]), in which an expected response is generated and compared with the data. Commonly, each voxel time series is analysed independently (univariate analysis) within a General Linear Model (GLM). To obtain the best fit of the model to the data, the "stimulus function", which is often a sharp on/off waveform, is smoothed, delayed and convolved with the haemodynamic response function (HRF) [41]. Once the model has been fitted to the data, an estimate of the "goodness of fit" is obtained, expressed as a parameter estimate (the estimated value), which is converted into a t value by dividing it by its standard error. Standard statistical transformations convert the t value into P (probability) or Z statistics, which carry the same statistical information: how significant the data are [40].

An important issue with these methods is the setting of the statistical threshold, above which activity is considered significant and below which data are rejected, which is essentially arbitrary. If the significance (P) threshold is applied to every voxel in the brain, the huge number of tests makes the number of false positives unacceptably high; in this case a Bonferroni correction is used (the significance level at each voxel is divided by the number of voxels, to correct for the number of comparisons made). Alternatively, one may take clusters of activated voxels into account before estimating significance; this method is more sensitive to activation but more arbitrary. Whichever approach is chosen, the resulting output is a statistical map, which indicates the points where the brain has activated in response to the stimulus.
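The following sketch illustrates this pipeline for a single voxel: a block-design stimulus function is convolved with a simple gamma-shaped HRF, the GLM is fitted by least squares, the parameter estimate is converted into a t value, and the Bonferroni correction is applied. The HRF shape, design timings, noise level and voxel count are illustrative assumptions, not values from the cited methods.

```python
# Sketch of a single-voxel GLM with a block design; all values are
# illustrative assumptions.
import numpy as np
from scipy.stats import gamma, t as t_dist

tr, n_scans = 2.0, 120
frame_times = np.arange(n_scans) * tr

# On/off stimulus function: alternating 20 s rest / 20 s task blocks.
stimulus = (frame_times % 40 >= 20).astype(float)

# HRF modelled here as a gamma density (an assumed canonical shape).
hrf_times = np.arange(0, 32, tr)
hrf = gamma.pdf(hrf_times, a=6)

# Expected response: stimulus convolved with the HRF, plus a constant term.
regressor = np.convolve(stimulus, hrf)[:n_scans]
X = np.column_stack([regressor, np.ones(n_scans)])

# Synthetic voxel time series: scaled response plus Gaussian noise.
rng = np.random.default_rng(1)
y = 2.0 * regressor + rng.normal(scale=1.0, size=n_scans)

# Least-squares parameter estimates and the t value for the task regressor.
beta, res_ss, _, _ = np.linalg.lstsq(X, y, rcond=None)
dof = n_scans - X.shape[1]
sigma2 = res_ss[0] / dof
contrast = np.array([1.0, 0.0])
se = np.sqrt(sigma2 * contrast @ np.linalg.inv(X.T @ X) @ contrast)
t_value = (contrast @ beta) / se
p_value = t_dist.sf(t_value, dof)

# Bonferroni correction: divide the significance level by the number of
# voxels tested (here a hypothetical 50,000 brain voxels).
alpha_corrected = 0.05 / 50_000
print(t_value, p_value, p_value < alpha_corrected)
```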

The above analysis concerns the low-resolution fMRI series acquired during performance of the task. An fMRI experiment typically also includes a single high-quality structural series, useful for better anatomical localization of the task-related regions of increased signal. This isovolumetric morphological series in turn needs to be pre-processed [34], segmented [20] and coregistered with the fMRI series, which are finally superimposed on the volumetric acquisition. In this way, activation areas can be viewed in the context of a good-quality brain image (Fig. 19.1).
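As a hedged sketch of this final overlay step, the snippet below uses the nilearn plotting interface; the file names are placeholders for the reader's own coregistered statistical and structural images, and the threshold value is an illustrative assumption.

```python
# Sketch of overlaying a thresholded statistical map on a structural image.
# File names are hypothetical placeholders.
from nilearn import plotting

display = plotting.plot_stat_map(
    "zstat_map.nii.gz",          # statistical map from the fMRI analysis
    bg_img="structural.nii.gz",  # high-resolution anatomical background
    threshold=3.1,               # show only supra-threshold voxels
    title="Task activation on structural image",
)
display.savefig("activation_overlay.png")
```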
