Saturday, July 28, 2012

FSL Tutorials 4-5: Group-Level Analysis

It has been well said that analyzing an fMRI dataset is like using a roll of toilet paper: the closer you get to the end, the faster it goes. Now that you know how to analyze a single run, applying this concept to the rest of the dataset is straightforward; simply apply the same steps to each run, and then use the "Higher-Level Analysis" option within FEAT to select your output directories. You might want to label them for ease of reference, with the run number appended to each directory (e.g., output_run01, output_run02, etc.).

Also uploaded is a walkthrough for how to locate and look at your results. The main directory of interest is the stats folder, which contains z-maps for each contrast; simply open up fslview and underlay an anatomical image (or a standard template, such as the MNI 152 brain, if it is a higher-level analysis that has been normalized), and then overlay a z-map to visualize your results. The sliders at the top of fslview set the lower and upper display bounds for the z-scores, so that, for example, you only see z-scores of 3.0 or greater.
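As a rough command-line equivalent (the paths below are illustrative; the MNI template ships with FSL under $FSLDIR), something like the following builds an fslview call that overlays a z-map on the MNI 152 brain and displays only z-scores between 3.0 and 8.0:

```shell
#!/bin/sh
# Sketch of an fslview call; launching the viewer requires FSL, so the
# command is only printed here. Run the printed line on a machine with
# FSL installed.
STD="${FSLDIR}/data/standard/MNI152_T1_2mm_brain.nii.gz"   # underlay
ZMAP="run01.feat/stats/zstat1.nii.gz"                      # overlay (hypothetical path)

CMD="fslview $STD $ZMAP -l Red-Yellow -b 3.0,8.0"
echo "$CMD"
```

Here -b 3.0,8.0 plays the role of the sliders, hiding voxels below z = 3.0, and -l picks a color lookup table for the overlay.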

After that, the same logic applies to collapsing parameter estimates across subjects, except that in this case, instead of feeding single-run FEAT directories into your analysis, you use the GFEAT directories output from collapsing across runs for a single subject. With shell scripting to automate your FEAT analyses, as we will discuss in the next tutorial, you can carry out any analysis quickly and uniformly; not only is scripting an excellent way to reduce the amount of drudge work, but it also takes human error out of the equation once you hit the go button.
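To give a flavor of that scripting before the next tutorial: one common pattern is to save a first-level design from the FEAT GUI as a template, stamp out one .fsf file per run with sed, and run feat on each. The placeholder RUNNUM and the paths below are my own invention for the sketch, and a toy template is created inline just so the loop is runnable:

```shell
#!/bin/sh
# Toy template standing in for a design saved from the FEAT GUI, with
# RUNNUM wherever the run number appears (in practice, edit your real
# design.fsf and replace the run number with RUNNUM).
cat > design_template.fsf <<'EOF'
set fmri(outputdir) "/data/sub01/output_runRUNNUM"
set feat_files(1) "/data/sub01/func_runRUNNUM"
EOF

# Stamp out one design per run; every run gets the identical analysis.
for run in 01 02 03; do
    sed "s/RUNNUM/${run}/g" design_template.fsf > design_run${run}.fsf
    # feat design_run${run}.fsf   # uncomment on a machine with FSL installed
done
```

Each generated .fsf then drives one FEAT analysis, so all runs are processed with exactly the same settings.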

Make sure to stay tuned for how to use this amazing feature, therewith achieving the coveted title of Nerd Baller and Creator of the Universe.





31 comments:

  1. Where is the example data that goes along with these screen casts?

    1. Hey Nathan,

      I apologize for not spotting this comment sooner, but for these screen casts I use data at the following link: http://mypage.iu.edu/~ajahn/docs/FSL_ExampleData.zip

      Best,

      -Andy

    2. Hi Nathan,

      I apologize for not spotting this comment sooner! In any case, the example data for these screen casts is at pages.iu.edu/~ajahn, under the FSL Example Data link.

      -Andy

  2. Which system do you use? Please let me know the OS, and whether you have tutorials for it. Thanks!

    1. I use OS X 10.6.8; probably getting a little out of date, but it works fine for both FSL and AFNI.

      -Andy

  3. Hello and thank you for all these tutorials! I have a weird group-level issue that is 100% user fault. For kicking-myself-now reasons, I ran two of my 21 participants' subject-level analyses in a different order from the others, and set up the contrasts in a different order, so that a given condition is cope 3 for most of the subjects, but cope 1 for these two. A typical higher-level analysis allows me to check any cope number for all the lower-level files input, but not to specify different copes for different inputs within one analysis. Did I completely screw myself over by deviating from my naming convention, or can FEAT handle this somehow?

    Thank you again for this blog!

    1. Hey there,

      One possible workaround could be to simply rename the copes in the directories so that they all line up with one another. Of course, you would have to be careful to keep track of which was which, but that's what immediately comes to mind. Besides that, I'm not sure if FEAT can do what you're asking, but maybe they have a patch for it.

      Best,
      -Andy

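A minimal sketch of that renaming, run here on empty stand-in files (the .feat path is hypothetical; on real data, make a backup first, and note that the matching varcope images need the same swap as the copes):

```shell
#!/bin/sh
# Stand-in for one nonconforming first-level directory (toy files).
DIR="sub02_run01.feat/stats"
mkdir -p "$DIR"
touch "$DIR/cope1.nii.gz" "$DIR/cope3.nii.gz" \
      "$DIR/varcope1.nii.gz" "$DIR/varcope3.nii.gz"

# Swap cope1 <-> cope3 (and varcope1 <-> varcope3) via a temporary name.
for stem in cope varcope; do
    mv "$DIR/${stem}1.nii.gz" "$DIR/${stem}_tmp.nii.gz"
    mv "$DIR/${stem}3.nii.gz" "$DIR/${stem}1.nii.gz"
    mv "$DIR/${stem}_tmp.nii.gz" "$DIR/${stem}3.nii.gz"
done
```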
  4. Hi Andy!

    Thanks for your clear and useful tutorials.

    I'm having a hard time figuring out how to perform single-subject, multi-run analysis on my experiment data.

    I have four experimental conditions in my experiment (A, B, C, and D). Subjects undergo four runs, each consisting of 15 trials of one condition (A in the first run, B in the second, etc.), and two oddball trials.
    In the first level analysis, I analyzed each run with two EVs - one for the oddball and the second for the relevant condition (A, B, C and D). I now try to compare the conditions (A-B, B-C, etc.) but can't find a way to do so. Seems like group level analysis is only relevant to cases in which all runs contain the same EVs. What am I missing?

    Thanks,
    Matan

    1. Hi Matan,

      After your first-level analysis, is it outputting copes for each condition? If so, you may be able to just use fslmaths to take the difference between the conditions you're interested in, then submit that contrast to a higher-level analysis.

      -Andy

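For what it's worth, a sketch of that fslmaths subtraction (the cope paths are hypothetical, and since fslmaths needs FSL on the path, the command is only printed here):

```shell
#!/bin/sh
# Hypothetical first-level copes for conditions A and B (one per run).
COPE_A="run01.feat/stats/cope2.nii.gz"
COPE_B="run02.feat/stats/cope2.nii.gz"

# Voxelwise A - B; the result could then go into a higher-level analysis.
CMD="fslmaths $COPE_A -sub $COPE_B A_minus_B.nii.gz"
echo "$CMD"
```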
  5. This comment has been removed by the author.

  6. Thank you for your incredibly useful and straightforward tutorials! They're very helpful.

    Quick comment. This will depend on your purpose, but for repeated scans within the same individual (i.e., multiple runs of a task), it might be worth considering the Fixed Effects model. I don't know whether you get to that later, but it does seem potentially more powerful. Across subjects, of course, you'd want to use mixed effects.

    1. Hey Matthew,

      Thanks for the feedback! It is true that if you are analyzing one subject, then a fixed effects model is appropriate; you are covering the entire condition space that you are interested in, so there is no need to generalize to other conditions. However, if you recruit additional subjects, then these should be treated as a random factor with a differently modeled error term, since you are attempting to generalize their effects to all subjects.

      -Andy

  7. Hi Andrew,

    I had a question regarding group analysis. I collected three runs of data for my subjects, but on analyzing the data, I found that for one subject one run was incorrectly collected. My question is whether I can include this subject in the group-level analysis using only two of his runs, while for the others I take three each. Would that be incorrect?

    1. Hi there,

      Performing a group-level analysis with different numbers of runs is acceptable; you are still getting an average estimate of the parameters for that particular subject, with perhaps less accuracy, but the numbers are still valid.


      Best,

      -Andy

    2. Thank you so much for replying. That helps a lot. I have another question. In SPM you realign, coregister, and then normalize. From my limited understanding, in FSL, FLIRT is like coregistration, where we do functional-to-anatomical registration, and FNIRT should be used for aligning the functional to standard space (using the anatomical). Is this the correct way to understand it? Also, does this mean that motion correction should be done separately (or is it done through both of these)?

      Thanks,

    3. Hi there,

      I don't use FSL that much, but I believe that the typical FSL pipeline does the same series of steps: Motion correction (via MCFLIRT), followed by using FLIRT to register the functional images to the anatomical, and then warp the anatomical to a standard space and apply those warps to the functional images. FNIRT is the same idea, just carried out slightly differently, allowing for nonlinear warps and transforms.

      Best,

      -Andy

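For orientation, the sequence described above might look like this on the command line (filenames are placeholders and the exact options vary by study; the tools require FSL, so the commands are printed rather than executed):

```shell
#!/bin/sh
# Illustrative first-level spatial pipeline: motion correction, then
# func -> anat (FLIRT), anat -> standard (FLIRT + FNIRT), then apply
# the combined transforms to the functional data.
PIPELINE='
mcflirt -in func -out func_mc
flirt -in func_mc -ref anat_brain -omat func2anat.mat
flirt -in anat_brain -ref "$FSLDIR/data/standard/MNI152_T1_2mm_brain" -omat anat2std.mat
fnirt --in=anat --aff=anat2std.mat --cout=anat2std_warp
applywarp --in=func_mc --ref="$FSLDIR/data/standard/MNI152_T1_2mm_brain" --warp=anat2std_warp --premat=func2anat.mat --out=func_std
'
printf '%s' "$PIPELINE"
```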
  8. Hi Andy,

    Thanks for the wonderful tutorials. They've been of great help throughout the years.

    Being a very novice student of the fMRI school, I have what is probably a very simple group-level analysis problem that has still managed to stump me. I have a patient and a control group with 4 subjects each, all undergoing 6 runs. What I'm interested in doing is contrasting late vs. early runs (last 2 runs vs. first 2 runs), and then of course looking at group differences. I understand that this is pretty much a repeated-measures ANOVA, but I'm having trouble entering this kind of model into FEAT. Do I concatenate the early runs and the late runs and then subtract using fslmaths, once for the late vs. early contrast and one more time for the patient vs. control contrast? Or is there a way to enter this simple RM ANOVA model into FEAT?

    Thanks!
    Ahmet

    1. Hey Ahmet,

      Are you referring to the parameter estimates for the early runs and the late runs? In that case, you can do what you proposed, averaging the PE maps for early runs and late runs and then subtracting them from each other (the SPM developers refer to this as a summary statistic approach). If you assume that the variance between the groups is roughly equal, then you could do a contrast between the groups and follow up with ROI stats on those maps.


      Best,

      -Andy

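A sketch of that summary-statistic step for one subject (run numbers and cope paths are hypothetical; the commands are printed rather than run, since fslmaths requires FSL):

```shell
#!/bin/sh
# Average the early-run and late-run copes, then take the difference.
CMDS='fslmaths run01.feat/stats/cope1.nii.gz -add run02.feat/stats/cope1.nii.gz -div 2 early_mean
fslmaths run05.feat/stats/cope1.nii.gz -add run06.feat/stats/cope1.nii.gz -div 2 late_mean
fslmaths late_mean -sub early_mean late_minus_early'
printf '%s\n' "$CMDS"
```

The per-subject late_minus_early maps could then be carried up to a group comparison.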
    2. Hi Andy,

      Yes, that's what I'm referring to. Thanks so much for your help! I got it to work that way and the report log seems to make sense, so all seems to be well.

      Speaking of ROIs, I've been reading around about Featquery (your blog, a tutorial by Jeanette Mumford, various forums), and it seems like plugging in first level feat directories into Featquery is a bad bad idea. Is this inherently an fsl issue, or is it generally a bad idea to look at ROI estimates at an individual subject, individual run level?

      Thanks again!
      Ahmet

    3. I forget if I did it at the individual run level (I should check on that), but you should run featquery at the subject level, which is an average over all the runs within that subject. FSL has an odd convention where 1st level refers to individual runs and 2nd level refers to the subject, whereas the other major packages (AFNI, SPM) refer to subject results as 1st level and group results as 2nd level.

      -Andy

  9. Hi Andy,

    Your tutorials have been of huge help as this area is very new to me!

    I have a question. I have run a dual regression (with an unpaired t-test design) to test for differences between two groups, using group ICs derived from the combined-group ICA of my sample (drug exposed and non-drug exposed). I get corrected 1-P values larger than 0.95, but when I look at the results in fslview it seems that the voxels that differ significantly between groups are not lying within the group networks (for example, if I overlay corrected IC 001 over IC-1, the significantly different voxels are not within the "highlighted" group network, but in another region of the brain that does not seem to be part of the functional network). What does this mean? Should I try using just the non-exposed group's ICs and feed those to the dual regression?
    Hope I am making sense.

    Many thanks,
    Norma

  10. Hi Andy,

    I really cannot thank you enough for these wonderful tutorials on fMRI! Everyone was telling me that it would take ages to figure out fMRI, and your videos have really made a difference and got me started! I could not have done it without you!

    Since you are an expert in the field I just wanted to ask you something... Basically I am using right now the Human Connectome Project task fMRI dataset and I wanted to try the method described here: http://www.nature.com/articles/srep20170#f2 but for now I felt like FSL was easier for me and I have no idea if something like that would be possible to run in FSL. I do not even understand how they ran this in SPM as well so I have nowhere to start to try it out.

    Ideally I am trying to form variability maps of the hand motor area for those HCP healthy subjects and even though I am not an expert in fMRI I thought this was a robust way to do that. 1) What do you think about that? 2) Do you think this is possible using FSL and could you guide me through if your time allows it? I would be happy to try that in SPM too or anything needed but I know nothing about SPM so far since it seemed more complicated for me to start with that so perhaps it would be harder to communicate the proper way of running the analysis to me. Any guidance would be appreciated. Thanks a lot!

    In any case thank you so much for all your help and for taking all the time to teach us how to perform neuroimage analysis but especially for making everything look so easy!

    Thanks!

    Best regards,
    Magda Tsintou.

  11. Having collapsed across runs (12 runs, one subject), and with a simple block design, I have 4 contrasts (A, B, A-B, B-A). I am seeing a lot more activation for the subtraction contrasts (e.g., A-B) relative to the non-subtracted contrasts (e.g., A). Conceptually I am not sure how this can be. Surely the amount of activation for contrast A alone has to be more than the amount of activation for contrast A minus contrast B? The difference is sizable too. I am getting this result for other subjects as well; am I doing something wrong? Or is there a basic explanation here?

    Thanks for the great resources Andy, very very helpful. I have learned a lot from your website and Youtube channel.

    1. Hey there,

      It could be that B is negative, which is why A-B is larger than A. For example,

      A = 0.5
      B = -0.2
      A - B = (0.5 - (-0.2)) = 0.7

      Check this by selecting a voxel, writing down the value for map A and the value for map B, subtracting B from A, and then comparing the result with the contrast map A-B.


      Best,

      -Andy

    2. Hi Andy,

      To check, I chose the strongest activating voxel in the subtraction contrast, and the intensity values for A,B,A-B did not add up at all.

      I overlaid each thresh_zstat image over my example_func from the reg folder and checked voxel (42,49,1).

      example_func intensity: 692
      A intensity: 0
      B intensity: 0
      A-B intensity: 5.74

      I have checked my ev's etc, and all seem fine. I can't think why this would occur given the relatively simple analysis. Any ideas?

      Thanks

    3. Hey David,

      You're looking at the z-maps, which can be different from the contrast maps, depending on the variance for that contrast. For example, if there was lower variability in estimating one of the regressors, that regressor will have a higher z-score.

      Check out the cope images (or pe images, however you generated them for each individual regressor), and see if it holds up.

      -Andy

    4. That made more sense.

      Looking at group activation results (gfeat), I am unsure which files to be looking at. Do you have any resources on this? Should I be looking at the rendered thresh_zstats?

      I have x,y,z coordinates for largest intensity voxel, but I can't seem to use '-' in the cursor tools part of the gui. Any ideas why this is?

      Cheers,
      David

  12. Hi Andy,

    Thank you for your tutorials !

    Because I am still new to this area, I face a problem with fMRI thresholding that none of my friends can answer. How can I use FSL's FLAME 1 to run a t-test on results from SPM5, where preprocessing and the first-level analysis have already been done? I have looked all over the web, but still can't find a good method for doing so. Thank you if you are willing to answer my question.

    Thanks,
    Jim

    1. Hi Jim,

      I haven't done something like that before, but you could try this:

      1. Open up FEAT and select "Higher-level analysis" from the dropdown menu at the top of the screen;

      2. In the Data tab, select "Inputs are 3D cope images from FEAT directories"

      3. Select your individual contrast images from SPM;

      4. In the Stats tab, select "Mixed effects: FLAME 1"

      -Andy

  13. Hi Andy. Do you have tutorial on how to do FNIRT?

    1. No, I don't. If you're interested in doing FNIRT then the easiest way, as I understand it, is to simply check the "nonlinear" box in the Registration tab during FEAT.

      -Andy
