Tuesday, March 27, 2012

AFNI Bootcamp: Day 2


Today was a walkthrough of the AFNI interface, and a tutorial on how to view timeseries data and model fits. Daniel started off with a showcase of AFNI’s ability to graph the timeseries data at each stage of preprocessing, and how it changes as a result of each step. For example, after scaling the raw MR signal to a percentage, the values at each TR in the timeseries graph begin to cluster around 100. This number is arbitrary, but it allows one to make inferences about percent signal change, as opposed to interpreting raw parameter estimates. And since the scaling is done separately for each voxel (as opposed to grand mean scaling in SPM, which divides every voxel’s value by the mean signal intensity across the entire brain), it becomes more reasonable to talk about percent signal change at each individual voxel.
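
For reference, here is a minimal sketch of how that scaling step is typically done (the dataset names are just placeholders; the proc scripts generate something very similar):

    # compute the voxel-wise mean of the preprocessed run
    3dTstat -prefix mean_r01 pb03.subj.r01.blur+tlrc

    # rescale each voxel's timeseries to a mean of 100, capped at 200
    3dcalc -a pb03.subj.r01.blur+tlrc -b mean_r01+tlrc \
           -expr 'min(200, a/b*100)*step(a)*step(b)'   \
           -prefix pb04.subj.r01.scale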

Another cool feature is the ability to overlay the model fits produced by 3dDeconvolve on top of the raw timeseries. This is especially useful when browsing around the brain to see how well different voxels track the model (although this may be easier to see with a block design than with an event-related design). You can extract an ideal timeseries from the X matrix output by 3dDeconvolve using 1dcat [X Matrix column] > [output 1D file], and then overlay one or more ideal timeseries by clicking on Graph -> 1dTrans -> Dataset#.
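
As a concrete sketch (the X matrix file name and the column index below are made up for illustration), the column-selector syntax looks like this:

    # pull the regressor of interest (here, column 6) out of the X matrix
    1dcat X.xmat.1D'[6]' > ideal_task.1D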

One issue that came up while Ziad was presenting was that the dmBLOCK basis function, which convolves a boxcar of each event’s duration with the hemodynamic response, does not take individual scaling into account. That is, if one event lasts six seconds and another lasts ten seconds, they will be scaled by the same amount, although in principle they should differ, since saturation has not been reached yet. I asked if they would fix this, and they said that they would, soon. Fingers crossed!
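
For reference, this is roughly how a duration-modulated regressor gets set up; the input dataset and timing file names here are hypothetical:

    # stim_dur.1D: one row per run, onset:duration pairs, e.g.
    #   12.5:6 47:10 88:6
    3dDeconvolve -input pb04.subj.r01.scale+tlrc \
        -polort 3 -num_stimts 1                  \
        -stim_times_AM1 1 stim_dur.1D 'dmBLOCK'  \
        -stim_label 1 task                       \
        -bucket stats.subj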

Outside of the lectures, Bob introduced me to AlphaSim’s successor, 3dClustSim. For those who haven’t used it, AlphaSim calculates how many contiguous voxels you need at a specified uncorrected threshold in order to pass a corrected cluster threshold. That is, AlphaSim runs several thousand simulations of white noise, and calculates how large a cluster of suprathreshold voxels would appear by chance at different probability levels. 3dClustSim does the same thing, except that it is much faster, and can calculate several different corrected thresholds simultaneously. The uber scripts call on 3dClustSim to make these calculations for the user, and then write this information into the header of the statistics datasets. You can see the corrected cluster thresholds for each cluster under the “Rpt” button of the statistics screen.
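
A minimal standalone call, assuming you already have a group mask and smoothness estimates in hand (the mask name and FWHM values below are placeholders), might look like this:

    # simulate noise-only data within the mask and report the cluster sizes
    # needed to survive correction at each uncorrected threshold
    3dClustSim -mask mask_group+tlrc -fwhmxyz 6.2 6.4 5.9 \
               -pthr 0.01 0.005 0.001 -athr 0.05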

On a related note, 3dClustSim takes into account smoothing that was done by the scanner before any preprocessing. I had no idea this happened, but apparently the scanners are configured to introduce a low level of smoothing (e.g., 2mm) into each image as it is output. Because of this, the uber scripts estimate the average amount of smoothness across an entire subject in the x, y, and z directions, which are not always the same. Therefore, if you used a smoothing kernel of 4mm, your estimated smoothness may be closer to 6mm, and it is this estimated full width at half maximum that should be used when calculating cluster correction levels in AlphaSim or 3dClustSim. Another tidbit I learned is that Gaussian Random Field theory (SPM’s method of calculating cluster correction) is “difficult” at smoothing kernels of less than 10mm. I have no idea why, but Bob told me so, so I treat it as gospel. Also, by “difficult”, I mean that it has a hard time finding a true solution for the correct cluster correction level.
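
The smoothness estimate itself comes from running 3dFWHMx on the data (usually the residuals of the single-subject model); a minimal sketch, with hypothetical dataset and mask names:

    # estimate the smoothness actually present in the residuals,
    # reported separately for the x-, y-, and z-directions
    3dFWHMx -detrend -mask full_mask.subj+tlrc errts.subj+tlrc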

I also found out that, in order to smooth within a mask such as grey matter, AFNI has a tool named 3dBlurInMask. This needs to be called at the smoothing step, and replaces whatever you are otherwise using for smoothing (e.g., 3dmerge or 3dBlurToFWHM). This sounds great in theory, since most of the time we are smoothing across white matter and a bunch of other crap from outside the brain which we don’t care about. At least, I don’t care about it. The only drawback is that it suffers from the same problem as conventional smoothing, i.e. there is no assurance of good overlap between subjects, and the resulting activation may not be exactly where it was at the individual level. Still, I think it is worth trying out.
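
A minimal sketch of what that step might look like (the grey matter mask and dataset names are made up):

    # blur only within a grey matter mask, leaving voxels outside it untouched
    3dBlurInMask -input pb02.subj.r01.volreg+tlrc -FWHM 4 \
                 -mask gm_mask+tlrc -prefix pb03.subj.r01.blur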

The last person I talked to was Gang Chen, the statistician. I asked him whether AFNI was going to implement a Bayesian inference application for parameter estimation anytime soon. He told me that such an approach is unfeasible at the voxel level, as calculating HDIs (highest density intervals) is extremely computationally intensive (just think of how many chains, samples, thinning steps, and so on are required, and then multiply that by tens of thousands of individual tests). Although I had heard that FSL uses a Bayesian approach, this isn’t really Bayesian; it is essentially the same as what 3dMEMA does, which is to weight high-variability parameter estimates less than high-precision parameter estimates. Apparently a true-blue Bayesian approach can be done at the second level, but it can take up to several days. Still, it is something I would like to investigate more, comparing results from AFNI with FSL’s Bayesian method to see whether there is any meaningful difference between the two.

4 comments:

  1. Hi Andrew,
    I would like to run 3dClustSim on VBM results. However, I don't really know where to find the -nxyz parameter. Do you know where I can get that? Maybe in the SPM.mat? For -fwhmxyz I used an 8 mm smoothing kernel; loading a contrast and selecting whole brain, I think it's something like 13.7 14.3 13.2. Does that seem correct?

    Replies
    1. Hey DGM,

      I haven't done ClustSim on VBM results, but you should be able to get the nxyz details from the SPM.xVol.DIM field. You should also be able to see the dimensions by opening up the volume in the Image Viewer through the SPM GUI.

      And yes, those smoothness estimates seem correct. Remember that there is already some smoothness in the data before you apply any additional smoothing to it, and that it is going to be slightly different in the x-, y-, and z-directions. If you want to restrict the amount of smoothness to a specific level, then look into a tool such as AFNI's 3dBlurInMask.
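
      As an alternative sketch, if the SPM output volumes are handy (the file name here is just an example), AFNI's 3dinfo can report the grid dimensions directly:

      # print nx, ny, nz, and the number of sub-bricks for the volume
      3dinfo -n4 spmT_0001.hdr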


      Best,

      -Andy

  2. Hi Andrew,

    Thank you for your blog, it is really helpful! I ran all of my fMRI analyses (preprocessing, first-level, and second-level) in SPM; however, my reviewers suggested that I use 3dClustSim on these results. I finally get how 3dClustSim works. I took the residual smoothness needed by 3dClustSim directly from SPM, with SPM.xVol.FWHM. However, someone recommended that I use 3dFWHMx rather than SPM to estimate the residual smoothness. What do you think? Also, how does 3dFWHMx work? What options do I need to use, apart from -acf?

    A Huge thanks

    Josiane Bourque

    Replies
    1. Hi Josiane,

      I would use 3dFWHMx on the ResMS.hdr file that is output after SPM's group-level analysis. For example,

      3dFWHMx -acf -mask mask.hdr ResMS.hdr

      The -acf option will give you three parameters needed for an accurate cluster correction with 3dClustSim. For example, the output of the command above might be something like the following:

      0.578615 6.37267 14.402 16.1453

      The first three numbers are the ACF parameters, and the last number is the overall smoothness (FWHM) estimated from the ACF fit. You can then use the first three numbers in 3dClustSim:

      3dClustSim -mask mask.hdr -acf 0.579 6.373 14.40 -athr 0.05 -pthr 0.01

      Note that this command assumes a voxel-wise uncorrected, or "primary", threshold of 0.01, and looks for cluster sizes that would occur less than 5% of the time by chance. I would recommend using a primary threshold of 0.001 or stricter, to be on the safe side.


      Hope this helps!

      -Andy
