Tuesday, June 12, 2012

Smoothing in AFNI

What Smoothing Is

Smoothing, the process of introducing spatial correlation into your data by averaging across nearby data points (in fMRI datasets, nearby voxels), is an essential step in almost any preprocessing stream. Traditionally, smoothing applies a Gaussian filter that averages signal over nearby voxels in order to summate any signal that is present, while attenuating and cancelling out noise. This step also has the advantage of spreading signal over brain areas that vary anatomically across subjects, increasing the signal-to-noise ratio over larger cortical areas at the cost of some spatial specificity. Lastly, smoothing mitigates the fiendish multiple comparisons problem, since voxels are not truly independent in the first place (neither are timepoints, but more on that at a later time).

Before (left) and after (right) smoothing. Note that there is much more spatial detail in the image before smoothing is applied.

Usually this smoothing step is taken care of by your fMRI analysis package's built-in smoothing program, but something I learned back at the AFNI bootcamp is that images coming off the scanner are already somewhat smooth, i.e., there is already some inherent spatial correlation in the data. This makes sense, given that functional imaging datasets are acquired at relatively low resolution, and that voxels of a given intensity will most likely be immediately surrounded by other voxels with similar intensities. In the unsmoothed image above (left), brighter voxels tend to cluster together, as do darker voxels. This spatial correlation is increased by any additional step requiring spatial interpolation, including coregistration (aligning all volumes in a run to a reference volume), normalization (warping an image to a template space), and smoothing itself.

Smoothing Programs in AFNI: 3dmerge and 3dBlurToFWHM

In the good old days, the staple program for blurring in AFNI was 3dmerge with the -blur option. Since 2006 there has been a newer program, 3dBlurToFWHM, which estimates the smoothness of the dataset and then smooths only up to the specified blur amounts in the x, y, and z directions. (Another thing I didn't recognize, in my callow youth, was that there can indeed be different amounts of smoothness along the different axes of your dataset, which in turn should be accounted for by your smoothing algorithm.) 3dmerge, by contrast, adds the specified smoothness on top of whatever smoothness is already present in the data, so the actual smoothness of the result is greater than what you specify on the command line. For example, if 3dmerge with a 6mm Full-Width at Half-Maximum (FWHM) kernel is applied to a dataset that already has about 3mm of smoothness in the x, y, and z directions, the resulting smoothness will be about 8-9mm. (As in many simulations carried out in the lab, the combination is not a purely linear addition, but it is relatively close.)
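As a hedged sketch of the legacy approach (the 3dmerge call is shown as a comment since it requires AFNI, and the dataset and prefix names are hypothetical), along with the textbook rule for how ideally Gaussian kernels combine:

```shell
# Legacy additive blurring (requires AFNI; dataset names are hypothetical):
#   3dmerge -1blur_fwhm 6.0 -doall -prefix run1_blur6 run1+tlrc
#
# For ideally Gaussian kernels, FWHMs combine in quadrature, so a 6mm
# kernel applied to data with ~3mm intrinsic smoothness would give:
awk 'BEGIN { printf "%.1f\n", sqrt(3.0^2 + 6.0^2) }'   # prints 6.7
```

Empirical estimates on real data, like the 8-9mm figure above, often come out higher than this ideal-Gaussian value, since fMRI noise is not perfectly Gaussian; this is exactly why estimating the smoothness after the fact matters.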

3dBlurToFWHM, on the other hand, will begin with estimating the smoothness of your dataset, and then iteratively blur (i.e., smooth) the data until it reaches the desired level of smoothness in the x, y, and z directions. These can be specified separately, but often it is more useful to use the same value for all three. For example, the following command will blur a dataset until it reaches approximately 6mm of smoothness in each direction:

3dBlurToFWHM -FWHM 6 -automask -prefix outputDataset -input inputDataset

Applying this to a dataset that has been coregistered and normalized will produce output that looks like this:

Note that the first set of numbers highlighted in red are the estimates produced by 3dBlurToFWHM, and the last set of numbers in red are estimates from a separate program, 3dFWHMx. The reason for the discrepancy is that 3dBlurToFWHM uses a slightly faster, less accurate estimator while blurring; however, the two sets of numbers are functionally equivalent. In any case, the resulting blur estimates are very close to the specified FWHM kernel, typically within about 10%.

What Mask to Use?

Often a mask generated after the coregistration step (i.e., after the 3dvolreg command), such as full_mask+tlrc or mask_epi_extents+tlrc, is sufficient. The -automask option can also be used, and usually works just as well. One consideration to take into account is that masking out non-brain voxels limits your field of view; for this reason, some experienced users advise not masking until after performing your statistical tests, since artifacts or consistent signal outside of the brain can point to hardware problems or issues with your experimental design.
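If you want to build a mask yourself rather than rely on -automask, a rough sketch looks like the following (requires AFNI; all dataset and prefix names are hypothetical, and -dilate pads the mask outward by one voxel):

```shell
# Build an EPI-based brain mask (names are hypothetical)
3dAutomask -dilate 1 -prefix epi_automask run1_volreg+tlrc

# Then blur only within that mask
3dBlurToFWHM -FWHM 6 -mask epi_automask+tlrc -prefix run1_blur6 -input run1_volreg+tlrc
```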

What to Use for the -blurmaster Option?

According to the AFNI help for 3dBlurToFWHM, the -blurmaster option can be used to control the smoothing process, which can lead to slightly better smoothing results. Since the FWHM is supposed to represent the underlying spatial structure of the noise, and not the spatial structure of the brain's anatomy, a dataset of residuals is often a good choice here; for example, the output of the -errts option in 3dDeconvolve. However, this requires running 3dDeconvolve first and then running the blurring step all over again with the errts dataset as the blurmaster. This is usually more trouble than it's worth, as I haven't found significant differences between using a blurmaster dataset and not using one.
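If you do want to try a blurmaster, the workflow would look roughly like this (a hedged sketch requiring AFNI; the 3dDeconvolve call is abbreviated, and all dataset names are hypothetical):

```shell
# 1) Run the regression once, saving the residuals (other options omitted)
3dDeconvolve -input run1_volreg+tlrc -polort 3 ... -errts errts_run1

# 2) Blur again, letting the residuals guide the smoothness estimation
3dBlurToFWHM -FWHM 6 -blurmaster errts_run1+tlrc -prefix run1_blur6 -input run1_volreg+tlrc
```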

Estimating Smoothness with 3dFWHMx

After blurring your data, 3dFWHMx can be used to estimate the smoothness of a given dataset. Useful options are -geom, to estimate the geometric mean of the smoothness, and -detrend, which accounts for outlier voxels. For example,

3dFWHMx -geom -detrend -input inputDataset

What about this 3dBlurInMask Thing?

3dBlurInMask does much the same thing as 3dBlurToFWHM, except that the blur is applied only within a mask (and you can supply multiple masks if you desire). This can be useful if you plan to blur only within, say, the gray matter of a dataset. Note, however, that it can also bias signal toward certain locations once each subject's grey matter mask has been warped to template space, since there is far greater variability in smoothing across individual masks than in applying one smoothing kernel within a single averaged mask that incorporates the brain voxels of all subjects.
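As a hedged sketch (requires AFNI; mask and dataset names are hypothetical), blurring only within a gray matter mask would look something like:

```shell
# Blur to 6mm FWHM, but only within the gray matter mask
3dBlurInMask -input run1_volreg+tlrc -FWHM 6 -mask gm_mask+tlrc -prefix run1_blurGM

# With a multi-valued mask (e.g., 1=GM, 2=WM), -Mmask blurs within each
# value separately, so signal does not bleed across tissue boundaries
3dBlurInMask -input run1_volreg+tlrc -FWHM 6 -Mmask tissue_mask+tlrc -prefix run1_blurTissue
```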

How to Estimate Smoothness in SPM

SPM comes with its own smoothness estimation program, spm_est_smoothness. It requires a residuals dataset and a mask specifying which voxels to analyze. For example,

[fwhm, res] = spm_est_smoothness('ResMS.hdr', 'mask.hdr')

will return the estimated FWHM in the x, y, and z directions, as well as a structure containing information about the resels of the dataset.

Why it Matters

I tuned into this problem a couple of months ago when one of my colleagues got nailed by a reviewer for potentially not estimating the smoothness of his dataset appropriately. Evidently, he had used the smoothing kernel specified in his preprocessing stream when calculating cluster correction thresholds, when in fact the actual smoothness of the data was higher than that. It turned out to be a non-significant difference, and his results held in either case. However, to be on the safe side, it always helps to estimate the smoothness of your data with an appropriate tool and to report it. If your results do change significantly as a result of your smoothness estimation approach, then your result may not have been that robust to begin with, and may require some more tweaking.

(By tweaking, of course, I mean screwing around with your data until you get the result that you want.)


  1. How can the blur estimates be smaller than the FWHM kernel?
    Maybe the estimation procedure is not so perfect?

    1. Hi,

      Using 3dBlurToFWHM is different from traditional blurring processes (such as 3dmerge), in that it will estimate the smoothness already existing in the raw dataset and then iteratively blur it until it reaches the specified threshold. To be conservative, it appears to keep the last iterative blur before it crossed the threshold.

      If you were to apply blur estimates to a dataset that had been smoothed with a more traditional algorithm (i.e., adding the specified smoothing kernel on top of the existing smoothness), the blur estimate would assuredly be greater than the FWHM kernel.


  2. Is it true that "3dBlurInMask does much the same thing as 3dBlurToFWHM"? My understanding is that 3dBlurInMask does not do the estimation of pre-existent smoothness in the data.

    Also, any idea what the automask is, exactly? Is it an estimated mask of all voxels that fall within the brain (including gray matter, white matter, and ventricles)?

    1. Hi Daniel,

      From what I can tell, yes, the difference between 3dBlurInMask and 3dBlurToFWHM is that 3dBlurInMask does not estimate the smoothness before applying a smoothing kernel; it just adds the specified smoothness on top of what is already there. I haven't done any tests, so I could be wrong, but the help text is clear about the difference.

      Also, yes, the automask is an estimated mask of all the voxels that fall within the brain. If you want a more precise mask (e.g., excluding ventricles), you can mess around with the 3dAutomask and 3dcalc tools, and then use that mask for the -mask option. See the mask_group section of a typical script generated by uber_subject.py for an example.
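For example, a rough sketch (requires AFNI; all dataset names are hypothetical) of combining 3dAutomask and 3dcalc to drop the ventricles:

```shell
# Brain mask from the EPI, then remove a (hypothetical) ventricle mask
3dAutomask -prefix brain_mask epi_run1+tlrc
3dcalc -a brain_mask+tlrc -b vent_mask+tlrc -expr 'a*(1-b)' -prefix brain_noVent
```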

      Hope this helps!


  3. How can I smooth with a different FWHM for each direction? say 3mm in x, 4mm in y etc..
    Those programs only accept a single number for xyz. There is another tool called 3danisosmooth, but it doesn't seem to allow this either... :(

    1. Hey AminoAcids,

Unfortunately, I don't know of a program that will allow you to do that; there must be some kind of functionality like that under the hood for a program like 3dBlurToFWHM to work, but I haven't found it yet.


  4. Hi, can I use the fwhm output of the spm_est_smoothness function when running a Monte Carlo for computing a cluster extent?

    1. Hi there,

Yes, you can. However, I would recommend using the smoothness estimates from the group level, rather than averaging over individuals, since you want to know the false positive rate of your clusters at the group level. In my experience, the smoothness estimates of 3dFWHMx on the ResMS.hdr file and SPM's own smoothness estimates (stored in SPM.xVol.FWHM, in mm) give similar results.

Also, be aware that older versions of 3dFWHMx underestimate the smoothness and thus lead to inflated false positive rates. This is because the spatial autocorrelation of fMRI noise is not Gaussian-shaped, and is better modeled as a mixture of a Gaussian and an exponential distribution. Newer versions of 3dFWHMx (since the beginning of 2016) include a -acf option which estimates the relative amounts of the Gaussian and exponential shapes that best describe the smoothness of the data. These parameters can then be passed to 3dClustSim to calculate more accurate cluster thresholds. See the documentation of the newest versions of 3dFWHMx and 3dClustSim for more details.
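The workflow this describes would look roughly like the following (a hedged sketch requiring AFNI; the dataset names are hypothetical, and the three numbers passed to -acf stand in for the a, b, c parameters printed by 3dFWHMx):

```shell
# Estimate the mixed Gaussian/exponential ACF from the residuals
3dFWHMx -acf NULL -mask mask_group+tlrc -input errts+tlrc
# The last line of output contains the ACF parameters a, b, c (and an FWHM);
# plug them into 3dClustSim to get cluster-extent thresholds (values here
# are placeholders, not real estimates):
3dClustSim -acf 0.5 4.0 10.0 -mask mask_group+tlrc
```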



  5. Thank you very much.

    How do you explain, though, that uber_subject still uses 3dmerge, if 3dBlurToFWHM is better?

    Thanks again,

    Emmanuelle Renauld