Friday, March 7, 2014

Resting State Analysis, Parts V and VI: Creating Correlation Maps and Z-Maps

Note: Refer to Example #9 of afni_proc.py for AFNI's most recent version of resting-state analysis.


Now that we've laid down some of the theory behind resting-state analyses, and have seen that it is nothing more than a glorified functional connectivity analysis, which in turn is nothing more than a glorified bivariate correlation, which in turn is something that I just made up, the time has now come to create the correlation maps and z-maps which we have so craved. I believe I have talked about correlation maps and their subsequent transmogrification into z-maps, but in the interest of redundancy* and also in the interest of showcasing a few ties that I picked up at Goodwill, I've created two more videos to show each of the steps in turn.

First, use 3dmaskave to extract the timecourse information from your ROI placed in the vmPFC:

3dmaskave -quiet -mask vmPFC+tlrc errts.${subj}+tlrc > timeCourse.txt
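
The resulting timeCourse.txt is just a single column of numbers, one value per TR; as a quick sanity check that the extraction worked, you can plot it with 1dplot:

1dplot timeCourse.txt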

This information is then used by 3dfim+ to generate a correlation map:

3dfim+ -input errts.${subj}+tlrc -polort 0 -ideal_file timeCourse.txt -out Correlation -bucket vmPFC_Corr
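
One thing to be aware of: by default, 3dfim+ skips low-intensity voxels, so the resulting correlation map can have holes in it. If you want a correlation value at every voxel, you can add -fim_thr 0 and, optionally, restrict the calculation to a mask; the example below assumes the whole-brain mask_group+tlrc dataset that afni_proc.py produces:

3dfim+ -input errts.${subj}+tlrc -mask mask_group+tlrc -fim_thr 0 -polort 0 -ideal_file timeCourse.txt -out Correlation -bucket vmPFC_Corr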



Once those correlation maps are generated, use 3dcalc to convert them into z-maps:

3dcalc -a vmPFC_Corr+tlrc -expr 'log((1+a)/(1-a))/2' -prefix Corr_subj${subj}_Z
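
That 3dcalc expression is just Fisher's r-to-z transformation, z = (1/2) * ln((1+r)/(1-r)), which is the same thing as the inverse hyperbolic tangent of r; so, assuming your build of 3dcalc includes the atanh function, the following should give identical results:

3dcalc -a vmPFC_Corr+tlrc -expr 'atanh(a)' -prefix Corr_subj${subj}_Z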





N.B. In each of the above examples, ${subj} is a placeholder for the subject ID you are currently processing; with a few tweaks, you should be able to put this all into a script that automates these processes for each subject.
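
Something like the following bare-bones tcsh loop would do it; the subject IDs and directory layout here are made up for illustration, so adjust them to match how your own data are organized:

#!/bin/tcsh
# Loop over each subject's results directory, which is assumed to be
# named after the subject ID and to contain the errts dataset and the
# vmPFC mask created earlier.
foreach subj (subj01 subj02 subj03)
    cd $subj
    3dmaskave -quiet -mask vmPFC+tlrc errts.${subj}+tlrc > timeCourse.txt
    3dfim+ -input errts.${subj}+tlrc -polort 0 -ideal_file timeCourse.txt -out Correlation -bucket vmPFC_Corr
    3dcalc -a vmPFC_Corr+tlrc -expr 'log((1+a)/(1-a))/2' -prefix Corr_subj${subj}_Z
    cd ..
end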

N.N.B. (I think that's how you do it): The original script that I uploaded had a couple of bugs: one of the placeholders should have been changed to a generic $subj variable, and the -giant_move option has been added to the align_epi_anat.py part of the script, since the anatomical and functional images actually start out quite far away from each other. If you haven't done so already, downloading the new script should take care of those issues. Also, another hidden change I made was to increase the motion limit from 0.2mm to 0.3mm; too many subjects were getting thrown out, and even though a more rigorous analysis would leave the motion threshold at the more conservative 0.2mm, I've raised it for now, for pedagogical purposes.

N.N.N.B. Find out what "N.B." means.


*Sponsored by the United States Department of Redundancy Department

11 comments:

  1. all of your postings are very useful, so thank you for them! Will you soon be posting about the same resting state analysis using FSL?

    1. Thanks, I'm glad they're useful! And yes, I am planning on doing a similar series of tutorials for FSL; I can't say when, exactly, but probably within the next month.

      -Andy

  2. Thanks! I look forward to checking back soon!

  3. Hi Andy,
    I have managed to get through everything - thank you so much for taking the time to make and post these tutorials. I have a question. When I open up my "Corr_subj${subj}_Z" image in afni, the region where the seed was placed should correlate highly (all the way, you might say) with itself. But when I open the image there is no correlation around my seed, which has me worried that I did something wrong. Sorry, I am not very good with scripting; is it that the script is "muting" this region so the correlation doesn't show? Or is something, in fact, going wrong?
    Thanks!
    Meghan

    1. Hi Meghan,

      How big is your seed region? If it is very large (say, a sphere with a radius greater than 10mm), the timecourses that are being averaged together may be so different that the average timecourse doesn't correlate well with any of them.

      You can check this by creating a seed region that is a small sphere, or even just a single voxel; that should show stronger correlations radiating around that seed, along with any other voxels in the brain that happen to correlate with it. If that still doesn't look right, let me know.

      Best,

      -Andy

  4. Hi Andy,

    Thank you for the many useful posts. Why do we need to run 3dfim+ with -polort 2? Since errts.${subj}+tlrc already contains detrended signals, shouldn't we avoid detrending them a second time?

    1. Hey there,

      That's a good point; I was following the steps on Gang Chen's page, but you're right in saying that the data is detrended twice. I'll write to Gang and see what he has to say. Also see this AFNI message board thread about different ways to detrend and calculate the same correlation coefficient: https://afni.nimh.nih.gov/afni/community/board/read.php?1,42274,42276#msg-42276

      Best,

      -Andy

  5. Hi Andy,

    I was wondering if I could ask you a question about 3dfim+. Don't you need to specify -fim_thr 0? 3dfim+ automatically excludes low-intensity voxels, so correlations for those voxels are not calculated and, as a result, the output image can look like it has cracks in it.

    1. Hey there,

      You're absolutely right; I hadn't used that option before, but I can see how you may want to use it if you want to know all of the correlations at each voxel. Thanks for pointing that out!

      -Andy

    2. Thanks Andy,

      So, I think, we can obtain correlation values for the whole brain as follows:

      3dfim+ -input errts.${subj}+tlrc -mask mask_group+tlrc -fim_thr 0 -polort 0 -ideal_file timeCourse.txt -out Correlation -bucket vmPFC_Corr

      The -fim_thr option works in a tricky way: even if we specify a mask, 3dfim+ automatically generates another mask to exclude low-intensity voxels.

  6. I believe this will give me the same results?

    3dTcorr1D -Fisher sub001_3dtimedata+tlrc. sub001_timeCourse.txt

    Correct me if I am wrong. Thanks.
