Now that we've laid down some of the theory behind resting-state analyses, and have seen that it is nothing more than a glorified functional connectivity analysis, which in turn is nothing more than a glorified bivariate correlation, which in turn is something that I just made up, the time has now come to create the correlation maps and z-maps which we have so craved. I believe I have talked about correlation maps and their subsequent transmogrification into z-maps, but in the interest of redundancy* and also in the interest of showcasing a few ties that I picked up at Goodwill, I've created two more videos to show each of the steps in turn.
First, use 3dmaskave to extract the timecourse information from your ROI placed in the vmPFC:
3dmaskave -quiet -mask vmPFC+tlrc errts.{$subj}+tlrc > timeCourse.txt
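As a quick, optional sanity check (not part of the original pipeline), you can plot the extracted timecourse with AFNI's 1dplot to make sure it looks like plausible BOLD signal and not a column of zeros:

1dplot timeCourse.txt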
This information is then used by 3dfim+ to generate a correlation map:
3dfim+ -input errts.{$subj}+tlrc -polort 0 -ideal_file timeCourse.txt -out Correlation -bucket vmPFC_Corr
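If you want to double-check that the output really contains correlation values, 3dBrickStat (again, an optional check that isn't part of the original walkthrough) will report the range of the dataset, which should fall between -1 and 1:

3dBrickStat -min -max vmPFC_Corr+tlrc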
Once those correlation maps are generated, use 3dcalc to convert them into z-maps:
3dcalc -a vmPFC_Corr+tlrc -expr 'log((1+a)/(1-a))/2' -prefix Corr_subj{$subj}_Z
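For what it's worth, that expression is just the Fisher r-to-z transform, i.e. the inverse hyperbolic tangent. If your copy of 3dcalc includes the atanh function in its expression parser (recent AFNI builds should), the following ought to give identical results:

3dcalc -a vmPFC_Corr+tlrc -expr 'atanh(a)' -prefix Corr_subj{$subj}_Z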
N.B. In each of the above examples, {$subj} is a placeholder for the subject ID you are currently processing; with a few tweaks, you should be able to put this all into a script that automates these processes for each subject.
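A loop over subjects might look something like the sketch below. This assumes a tcsh script, that your subject IDs are listed in the foreach statement, and that each subject's preprocessed data live in a ${subj}.results directory (the default naming from afni_proc.py); adjust the names to match your own setup.

#!/bin/tcsh
# Sketch of a per-subject loop; the subject IDs and directory layout are assumptions.
foreach subj (subj01 subj02 subj03)
    cd ${subj}.results
    # Extract the average timecourse from the vmPFC seed
    3dmaskave -quiet -mask vmPFC+tlrc errts.${subj}+tlrc > timeCourse.txt
    # Correlate that timecourse with every voxel's timecourse
    3dfim+ -input errts.${subj}+tlrc -polort 0 -ideal_file timeCourse.txt \
        -out Correlation -bucket vmPFC_Corr
    # Convert the correlation map to a z-map
    3dcalc -a vmPFC_Corr+tlrc -expr 'log((1+a)/(1-a))/2' -prefix Corr_subj${subj}_Z
    cd ..
end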
N.N.B. (I think that's how you do it): The original script that I uploaded had a couple of bugs: one of the placeholders should have been a generic $subj variable, and the -giant_move option needed to be added to the align_epi_anat.py part of the script, since the anatomical and functional images start out quite far apart. If you haven't run it yet, downloading the new script will take care of both issues. I also made another, less visible change: I raised the motion limit from 0.2mm to 0.3mm. Too many subjects were getting thrown out; a more rigorous analysis would keep the threshold at a more conservative 0.2mm, but I've relaxed it for now, for pedagogical purposes.
N.N.N.B. Find out what "N.B." means.
*Sponsored by the United States Department of Redundancy Department
All of your postings are very useful, so thank you for them! Will you soon be posting about the same resting-state analysis using FSL?
Thanks, I'm glad they're useful! And yes, I am planning on doing a similar series of tutorials for FSL; I can't say when, exactly, but probably within the next month.
-Andy
Thanks! I look forward to checking back soon!
Hi Andy,
I have managed to get through everything - thank you so much for taking the time to make and post these tutorials. I have a question. When I open up my "Corr_subj{$subj}_Z" image in AFNI, the region where the seed was placed should correlate highly (all the way, you might say) with itself. But when I open the image there is no correlation around my seed, which has me worried I did something wrong. Sorry, I am not very good with scripting; is it that the script is "muting" these regions so the correlation doesn't show? Or is something actually going wrong?
Thanks!
Meghan
Hi Meghan,
How big is your seed region? If it is very large (say, a sphere with a radius greater than 10mm), the timecourses being averaged together may be so different that the average timecourse doesn't correlate well with any of them.
You can check this by creating a seed region that is a small sphere, or even just a single voxel; that should show stronger correlations radiating around that seed, along with any other voxels in the brain that happen to correlate with it. If that still doesn't look right, let me know.
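For example (the coordinates here are just placeholders, not a recommendation), a small spherical seed can be built with 3dUndump by feeding it a single x y z coordinate and a radius in mm; double-check that the coordinate order and signs match your template's convention:

echo "0 45 -10" | 3dUndump -prefix vmPFC_small -master errts.${subj}+tlrc -srad 3 -xyz -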
Best,
-Andy
Hi Andy,
Thank you for many useful posts. Why do we need to run 3dfim+ with -polort 2? Since errts.${subj}+tlrc already contains detrended signals, shouldn't we avoid detrending again?
Hey there,
That's a good point; I was following the steps on Gang Chen's page, but you're right in saying that the data is detrended twice. I'll write to Gang and see what he has to say. Also see this AFNI message board thread about different ways to detrend and calculate the same correlation coefficient: https://afni.nimh.nih.gov/afni/community/board/read.php?1,42274,42276#msg-42276
Best,
-Andy
Hi Andy,
I was wondering if I could ask you a question about 3dfim+. Don't you need to specify -fim_thr 0? 3dfim+ automatically excludes low-intensity voxels, so correlations for those voxels are not calculated and, as a result, the output image can look cracked.
Hey there,
You're absolutely right; I hadn't used that option before, but I can see how you may want to use it if you want to know all of the correlations at each voxel. Thanks for pointing that out!
-Andy
Thanks Andy,
So, I think we can obtain correlation values for the whole brain as follows:
3dfim+ -input errts.${subj}+tlrc -mask mask_group+tlrc -fim_thr 0 -polort 0 -ideal_file timeCourse.txt -out Correlation -bucket vmPFC_Corr
The -fim_thr option works in a tricky way: even if we specify a mask, 3dfim+ automatically generates another mask to exclude low-intensity voxels.
I believe this will give me the same results?
3dTcorr1D -Fisher sub001_3dtimedata+tlrc. sub001_timeCourse.txt
Correct me if I am wrong. Thanks.