Overview of statistical analysis of neuroimaging
Standard statistical procedures are now largely established for the analysis of PET cerebral blood flow image data. These procedures are being extended to single-subject analysis of three-dimensional PET data and to functional MRI (fMRI) data. In what follows, we briefly discuss the current status of the statistical analysis of image data and issues specific to fMRI.
Initially, analysis of PET data consisted of voxel-by-voxel paired t-tests comparing cerebral blood flow images acquired under task performance and matched control conditions. Later, general linear models were developed to provide a complete and unified framework accommodating a wide range of statistical models, including parametric and factorial approaches. Voxel-by-voxel statistical processing yields a statistical parametric map (hence the term ‘statistical parametric mapping’). Statistical analysis of PET images generally requires (a) preprocessing to reduce unwanted variance components, (b) computation of the statistics, and (c) generation of statistical inferences.
 

Realignment

In brain activation studies, voxel-wise changes in cerebral blood flow are analyzed statistically. These studies can be seriously confounded by noise introduced by head motion of the subject during scanning. To control such confounding noise, the subject’s head must be securely restrained. In practice, however, subjects cannot remain motionless over an extended scanning session, and displacements of up to several millimeters generally occur. Computational post-processing techniques are therefore used to establish spatial registration. In many cases, PET image data are re-sampled according to six rigid-body alignment parameters (three translations and three rotations) estimated by minimizing a cost function for the rigid-body transformation [1-3]. In PET, the radioisotope signal is emitted from the cerebral parenchyma; in fMRI, strong signals also arise from the cerebrospinal fluid. This renders fMRI images more susceptible to spatial distortion than PET images. In addition, field inhomogeneity related to the position of the head within the scanner’s gradient field makes spatial correction a more complicated problem in fMRI than in PET [4]. When head movements are correlated with the task, the task-induced change in cerebral blood flow can be obscured by the artifactual effects of those movements.
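As an illustration, the following is a minimal sketch of six-parameter rigid-body realignment by least squares, written in Python with NumPy and SciPy. The volumes, the squared-difference cost, and the function names (rotation_matrix, resample, realign) are illustrative assumptions rather than the method of any particular package; production software uses more elaborate cost functions, interpolation, and optimization schemes.

```python
# Sketch of rigid-body realignment: estimate three rotations and three
# translations that minimise the squared difference to a reference volume.
# Assumes `reference` and `moving` are NumPy arrays on the same grid.
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize


def rotation_matrix(rx, ry, rz):
    """Compose rotations (radians) about the x, y and z axes."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx


def resample(volume, params):
    """Resample a volume; the parameters map output voxel coordinates back
    into the moving image (rotation about the volume centre, then translation)."""
    rx, ry, rz, tx, ty, tz = params
    R = rotation_matrix(rx, ry, rz)
    centre = (np.array(volume.shape) - 1) / 2.0
    offset = centre + np.array([tx, ty, tz]) - R @ centre
    return affine_transform(volume, R, offset=offset, order=1)


def realign(reference, moving):
    """Estimate the six parameters minimising the mean squared difference."""
    def cost(params):
        return np.mean((resample(moving, params) - reference) ** 2)

    result = minimize(cost, x0=np.zeros(6), method="Powell")
    return result.x, resample(moving, result.x)
```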
 

Anatomical Normalization

Anatomical normalization is the process of linearly or non-linearly transforming single-subject image data onto a standard template (usually Talairach’s atlas [5]), for which a wide variety of techniques have been proposed [1, 6-8]. These normalization techniques were originally developed to estimate the anatomical topography of low-resolution PET images [1]. Anatomical normalization is now frequently employed because of its advantages, including (1) an increase in the statistical signal-to-noise ratio obtained by projecting multiple-subject data into a single, spatially normalized space, and (2) the ability to convert images from multiple experiments into a common coordinate system. Statistical processing of fMRI datasets does not strictly require normalization, because a single subject’s time series already provides many degrees of freedom; however, anatomical normalization is an effective tool for drawing general conclusions from individual findings. Compared with PET and anatomical MRI, fMRI is subject to more severe local distortions, resulting in poorer normalization. In addition, susceptibility artifacts cause signal dropout at air-tissue boundaries in the inferior part of the brain, and such signal loss imposes limitations on the normalization procedure.
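The linear part of normalization can be sketched as fitting a 12-parameter affine transform that maps a subject volume onto a template by least squares. The sketch below assumes hypothetical `subject` and `template` NumPy volumes already on the same grid; nonlinear warping, real Talairach or other templates, and multi-resolution optimization are beyond this illustration.

```python
# Sketch of linear (12-parameter affine) anatomical normalisation by
# least-squares matching of a subject volume to a template volume.
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize


def normalise_affine(subject, template):
    """Fit a 3x3 matrix (rotation, zoom, shear) plus translation that maps
    the subject volume onto the template, starting from the identity."""
    def cost(params):
        matrix = params[:9].reshape(3, 3)   # rotation, zoom and shear
        offset = params[9:]                  # translation
        warped = affine_transform(subject, matrix, offset=offset, order=1)
        return np.mean((warped - template) ** 2)

    x0 = np.concatenate([np.eye(3).ravel(), np.zeros(3)])
    result = minimize(cost, x0, method="Powell")
    matrix, offset = result.x[:9].reshape(3, 3), result.x[9:]
    return affine_transform(subject, matrix, offset=offset, order=1), result.x
```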
 

Spatial Smoothing

Spatial smoothing replaces each voxel value with a weighted average of its neighborhood, in which the voxel at the center of the smoothing kernel is weighted much more strongly than voxels toward its edges. In PET imaging, the primary objective of spatial smoothing is to reduce noise at the expense of spatial resolution. In fMRI, the technique is also used to satisfy the implicit assumption of a good lattice representation required by the Gaussian field theory approach (see below), which plays an important part in statistical testing (i.e., correction for multiple comparisons). The lattice approximation is reasonable only when the spatial resolution (smoothness) of the data is large relative to the voxel size; it is generally accepted that smoothing with a Gaussian kernel of about two voxels full width at half maximum (FWHM) is required for the assumption to hold [9].
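A minimal sketch of such smoothing is given below, assuming a 3-D NumPy volume and an isotropic kernel specified in voxels; it uses the standard conversion FWHM = sigma * sqrt(8 ln 2). The function name and default width are illustrative.

```python
# Sketch of spatial smoothing with a Gaussian kernel specified by its
# full width at half maximum (FWHM) in voxel units.
import numpy as np
from scipy.ndimage import gaussian_filter


def smooth_fwhm(volume, fwhm_voxels=2.0):
    """Smooth a 3-D volume with an isotropic Gaussian kernel of the given FWHM."""
    sigma = fwhm_voxels / np.sqrt(8.0 * np.log(2.0))  # FWHM = sigma * sqrt(8 ln 2)
    return gaussian_filter(volume, sigma=sigma)
```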
 

Computation of Statistics

In order to detect a statistically significant change in regional cerebral blood flow, a statistic (e.g., a t-score or normalized z-score) must be calculated for each voxel and time series using a statistical model of choice. The general linear model conveniently allows the effects of experimental conditions and the blood flow correlated with covariates (e.g., task performance, reaction time) to be evaluated within the single framework of regression analysis [10]. In addition, the model can remove signal changes unrelated to the task by treating them as covariates of no interest. General linear models can be applied to both PET and fMRI, although the latter requires that temporal autocorrelation be taken into account. This difference arises from the gap in data sampling intervals between MRI and PET. The time constant of the T2*-weighted signal change induced by neuronal activation has been estimated at approximately 5 seconds [11]. Because PET data are generally acquired at roughly 10-minute inter-scan intervals, successive scans can be regarded as independent of each other. MRI data, however, are sampled at intervals of several seconds; successive acquisitions therefore cannot be treated as completely independent measures of the underlying changes, but must be regarded as correlated with one another, which is why temporal autocorrelation must be modeled. The bias that temporal autocorrelation introduces into the effective degrees of freedom can be corrected for if the temporal smoothness is known [12, 13].
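As a concrete illustration, the sketch below fits a general linear model at every voxel and returns a t-score for a contrast of the parameter estimates. It assumes hypothetical inputs `data` (time x voxels) and `design` (time x regressors, including the task regressor and any covariates of no interest); temporal autocorrelation is ignored here, whereas in practice the effective degrees of freedom must be adjusted as noted above [12, 13].

```python
# Sketch of a voxel-wise general linear model with a t-contrast.
import numpy as np
from scipy import stats


def glm_t_map(data, design, contrast):
    """Return t-scores and uncorrected p-values per voxel for one contrast."""
    beta = np.linalg.pinv(design) @ data                 # least-squares estimates
    residuals = data - design @ beta
    dof = data.shape[0] - np.linalg.matrix_rank(design)  # residual degrees of freedom
    sigma2 = np.sum(residuals ** 2, axis=0) / dof        # error variance per voxel
    c = np.asarray(contrast, dtype=float)
    var_contrast = c @ np.linalg.pinv(design.T @ design) @ c
    t = (c @ beta) / np.sqrt(sigma2 * var_contrast)
    p = 2 * stats.t.sf(np.abs(t), dof)                   # two-tailed, uncorrected
    return t, p
```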
 

Statistical Inference

The null hypothesis used to derive the t-score for a voxel is specific to that voxel. It follows that a huge number of null hypotheses must be tested if a change is to be assessed statistically over the entire brain, which raises the problem of multiple comparisons. The main concern is determining the probability that voxels rise above a given threshold anywhere in the observed field; this can be estimated by applying Gaussian field theory. More specifically, the estimation is based on the theorem that, above a sufficiently high threshold, local clusters of a spatially autocorrelated Gaussian random field occur randomly according to a Poisson process [9, 14, 15]. Several approaches have been developed to control the false-positive rate arising from multiple comparisons; they treat the statistic image over the entire brain as a Gaussian random field and take into account the similarity between spatially proximate voxels (often termed ‘spatial autocorrelation’ or ‘smoothness’) [10, 15].
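To make this concrete, the sketch below finds the z threshold at which the expected Euler characteristic of a smooth 3-D Gaussian field equals the desired family-wise error rate; for high thresholds this expectation approximates the probability of any suprathreshold voxel. The inputs (`search_volume_mm3`, `fwhm_mm`) and the simple resel count are illustrative assumptions under a stationary, purely 3-D approximation, not a full implementation of random field theory.

```python
# Sketch of a corrected threshold from Gaussian random field theory,
# using the expected Euler characteristic of a 3-D Gaussian field.
import numpy as np
from scipy.optimize import brentq


def expected_ec_3d(t, resels):
    """Expected Euler characteristic above threshold t (3-D Gaussian field)."""
    return (resels * (4 * np.log(2)) ** 1.5 / (2 * np.pi) ** 2
            * (t ** 2 - 1) * np.exp(-t ** 2 / 2))


def rft_threshold(search_volume_mm3, fwhm_mm, alpha=0.05):
    """Z threshold at which the corrected false-positive probability is alpha."""
    resels = search_volume_mm3 / fwhm_mm ** 3           # resolution elements
    return brentq(lambda t: expected_ec_3d(t, resels) - alpha, 2.0, 10.0)


# Example: a ~1.2-litre search volume smoothed to 12 mm FWHM gives a
# corrected threshold of roughly z = 4.5.
print(rft_threshold(search_volume_mm3=1.2e6, fwhm_mm=12.0, alpha=0.05))
```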
 

References

  1. Friston KJ, Ashburner J, Frith CD, Heather JD, Frackowiak RSJ: Spatial registration and normalization of images. Hum Brain Mapp 1995; 2: 165-189.
  2. Minoshima S, Berger KL, Lee KS, Mintun MA: An automated method for rotational correction and centering of three-dimensional functional brain images. J Nucl Med 1992; 33: 1579-1585.
  3. Woods RP, Cherry SR, Mazziotta JC: Rapid automated algorithm for aligning and reslicing PET images. J Comput Assist Tomogr 1992; 16: 620-633.
  4. Friston KJ, Williams S, Howard R, Frackowiak RS, Turner R: Movement-related effects in fMRI time-series. Magn Reson Med 1996; 35: 346-355.
  5. Talairach J, Tournoux P: Co-planar stereotaxic atlas of the human brain. New York: Thieme, 1988.
  6. Fox PT, Perlmutter JS, Raichle ME: A stereotactic method of anatomical localization for positron emission tomography. J Comput Assist Tomogr 1985; 9: 141-153.
  7. Minoshima S, Koeppe RA, Mintun MA, Berger KL, Taylor SF, Frey KA, Kuhl DE: Automated detection of the intercommissural line for stereotactic localization of functional brain images. J Nucl Med 1993; 34: 322-329.
  8. Minoshima S, Koeppe RA, Frey KA, Kuhl DE: Anatomic standardization: linear scaling and nonlinear warping of functional brain images. J Nucl Med 1994; 35: 1528-1537.
  9. Friston KJ, Holmes A, Poline J-B, Price CJ, Frith CD: Detecting activations in PET and fMRI: levels of inference and power. Neuroimage 1996; 4: 223-235.
  10. Friston KJ, Holmes AP, Worsley KJ, Poline JB, Frith CD, Frackowiak RSJ: Statistical parametric maps in functional imaging: A general linear approach. Hum Brain Mapp 1995; 2: 189-210.
  11. Bandettini PA, Wong EC, Hinks RS, Tikofsky RS, Hyde JS: Time course EPI of human brain function during task activation. Magn Reson Med 1992; 25: 390-397.
  12. Worsley KJ, Friston KJ: Analysis of fMRI time-series revisited-again. Neuroimage 1995; 2: 173-181.
  13. Seber GAF: Linear regression analysis. New York: Wiley, 1977.
  14. Adler RJ: The geometry of random fields. New York: John Wiley & Sons, 1981; p. 133.
  15. Friston KJ, Worsley KJ, Frackowiak RSJ, Mazziotta JC, Evans AC: Assessing the significance of focal activations using their spatial extent. Hum Brain Mapp 1994; 1: 210-220.