Fathers and teachers, I ponder: What is hell? I maintain that it is the suffering of being unable to love. Oh, and it also might be the fact that you can't get an accurate smoothness estimate on second-level results. I mean really, what's up with that?
-Father Zossima
Last week I received an email from a fellow neuroimager asking about smoothness estimates for second-level results. As I discussed in a previous post, this is an important question when calculating cluster correction thresholds: the smoothing kernel applied to FMRI data is not the same as the resulting smoothness of those data, which is what should go into cluster correction estimates, and failing to account for this will lead to shame and ridicule. (The exception to this would be using AFNI's 3dBlurInMask, but I assume that most people use either AFNI's 3dmerge or SPM's spm_smooth function.)
To highlight this distinction, imagine a cake - the cake itself is the smoothness of the images that come out of the scanner. Now, pipe some frosting onto the cake. That frosting is the extra smoothing applied on top of those images. The baker piping the frosting is the analysis package you are using, and you can tell him to add either more or less frosting. Sometimes he will malfunction and give you an error message, and you don't understand why he is not working, but you cannot fire him, as you have no alternative. In any case, when someone asks you how many calories are in the cake, you do not lie and tell them only the calories that are in the frosting; you must account for the underlying cake as well. And then there are times when you will dip your finger into the frosting, just to get a little taste, but your pudgy fingers leave a noticeable divot; and you attempt to fill in the gap with frosting from the edges, but it only ends up looking worse.
In sum, if you are using SPM to estimate smoothness for second-level contrasts or ANOVAs, do not simply run spm_est_smoothness on the ResMS.hdr image output by that analysis. I have tried, and I have failed to get any meaningful results, and I have also been unable to find an answer on the message boards or listservs. Therefore, instead of focusing on the second-level analysis, I recommend averaging the first-level smoothness estimates across subjects and using those averages for your cluster correction. This can be done by looping over each subject, applying spm_est_smoothness to that subject's ResMS.hdr file, storing the results in a matrix, and then averaging the results in the x-, y-, and z-directions. The following .m code allows you to do this.
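Here is a minimal sketch of that loop. The subject IDs and directory layout below are placeholders for your own, and the exact arguments spm_est_smoothness expects can differ between SPM versions, so check the help text for your install before running it.

% Sketch: average first-level smoothness estimates across subjects.
% Assumes SPM is on the MATLAB path and that each subject's first-level
% directory contains ResMS.hdr and mask.hdr. Subject IDs and paths are
% placeholders - replace them with your own.
subjects = {'subj01', 'subj02', 'subj03'};   % placeholder subject IDs
studyDir = '/path/to/study';                 % placeholder study directory

fwhm_all = zeros(numel(subjects), 3);        % one [x y z] FWHM estimate per subject

for s = 1:numel(subjects)
    % First-level directory for this subject (layout is hypothetical)
    level1Dir = fullfile(studyDir, subjects{s}, 'level1');
    resMS = fullfile(level1Dir, 'ResMS.hdr');
    mask  = fullfile(level1Dir, 'mask.hdr');
    % Estimate smoothness from this subject's residual variance image,
    % restricted to the analysis mask; returns FWHM in voxels as [x y z]
    fwhm_all(s,:) = spm_est_smoothness(resMS, mask);
end

% Average the estimates across subjects in the x-, y-, and z-directions
fwhm_avg = mean(fwhm_all, 1);
fprintf('Average FWHM (voxels): x = %.2f, y = %.2f, z = %.2f\n', fwhm_avg);

The resulting fwhm_avg is what goes into your cluster correction estimate, in place of the kernel size you told the baker to pipe on.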
I confess that I know little else about smoothing; this was the only reasonable approach that I could find, and I have little intuition about why spm_est_smoothness gives such wonky results on second-level residuals datasets. It may be that these are to remain mysteries, too profound for our common clay.
Thanks to Jillian Hardee, who never smoothed a dataset she didn't like.
Is this Jillian Hardee as in me? I should vainly Google myself more often, shouldn't I? I feel like a blog star. :)
Thumbs up for your use of the word "wonky".
Of course it's you! And yes, you should Google yourself more often, especially if it leads you to this site.