Monday, June 23, 2014

Multiple Regression in SPM

A couple of videos have been posted about multiple regression in SPM, both at the first level and the second level. Technically, almost all of the GLMs researchers set up use multiple linear regression, in which a linear combination of weighted regressors is fit to the timecourse of each voxel. In SPM parlance, however, regressors are distinct from conditions: conditions are convolved with some sort of basis function (usually a hemodynamic response function), whereas regressors are not convolved with anything. For example, someone may enter motion parameters as regressors: they are not convolved with any hemodynamic shape, but they may still account for some of the variance observed in the signal (ideally, the variance that is primarily due to motion). Other examples include entering the timecourse extracted from an ROI for a functional connectivity analysis, or inserting the interaction term from a PPI analysis as a regressor.
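To make the distinction concrete, here is a minimal sketch in Python/NumPy (purely illustrative; the scan count, TR, and gamma-shaped HRF are assumptions, not SPM's actual implementation) of a design matrix containing one convolved condition and one unconvolved motion regressor:

```python
import numpy as np

n_scans, tr = 100, 2.0                       # assumed scan count and TR

# Boxcar for one condition: "on" for 10 scans starting every 20 scans
condition = np.zeros(n_scans)
for onset in range(10, n_scans, 20):
    condition[onset:onset + 10] = 1

# Crude stand-in for a hemodynamic response function (a gamma-like bump
# peaking a few seconds after onset); SPM uses a canonical double-gamma HRF.
t = np.arange(0, 30, tr)
hrf = (t ** 8.6) * np.exp(-t / 0.547)
hrf /= hrf.sum()

# Conditions get convolved with the HRF...
convolved = np.convolve(condition, hrf)[:n_scans]

# ...whereas regressors (e.g., a motion parameter) enter the model as-is.
motion = np.random.default_rng(0).standard_normal(n_scans)

# Design matrix: [convolved condition, raw motion regressor, constant]
X = np.column_stack([convolved, motion, np.ones(n_scans)])

# Fitting the GLM at one voxel is just least squares: beta = pinv(X) @ y
y = (2.0 * convolved + 0.5 * motion
     + 0.1 * np.random.default_rng(1).standard_normal(n_scans))
beta = np.linalg.pinv(X) @ y
print(beta)  # estimates for the condition, the motion regressor, the constant
```

With this simulated voxel, the fitted betas recover the amplitudes used to build the signal, which is all a "multiple regression" in SPM amounts to at each voxel.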

At the second level, SPM also gives the option of doing multiple regression; this is simply the entering of covariates, which can be tested for correlations with the signal change in contrasts across subjects. In most cases, however, you can select any other design and enter covariates regardless of which design you choose; I believe this to be the better option, since it is more flexible.

In any case, most of the remaining details can be found in the following videos. Speak, hands, for me.





77 comments:

  1. Hello,

    I was wondering, when you entered the scans for each subject, how did you obtain them? If I have functional connectivity z-maps for all subjects for 2 ROIs, how can I find the contrast? I am stuck or missing something.

    Replies
    1. Hey there,

      If you have two separate functional connectivity z-maps, one for each ROI, then you should be able to take the contrast between the z-maps for each ROI for each subject, and enter these differences into a second-level one-sample t-test.

      Best,

      -Andy

  2. Hello, if I have done a multiple regression analysis using SPM, and the results showed that my contrast is negatively correlated with the covariates I entered, and there are some regions lighting up in orange in the brain images, can you please explain what this means in terms of functional connectivity? Does this mean those regions have reduced functional connectivity? Thank you

    Replies
    1. That depends; when you say you ran a multiple regression analysis, are your covariates auxiliary information - things like age, state variables like anxiety, and other continuous variables? If so, then any results from a negative contrast simply mean that the activity in the clusters you see is negatively correlated with your covariates. You can't really infer much about functional connectivity from those results.

      If, however, you are doing a contrast on a timecourse, and you are looking at results from a negatively weighted contrast, then the interpretation becomes a little trickier; all you can really say is that any significant clusters have a negative timeseries correlation with your seed region. Any inferences about, say, one region inhibiting another need to be backed up by other results.

      Keep in mind that the term "functional connectivity" is somewhat misleading; there is little you can say about connectivity just from looking at correlations of timecourses alone.


      Best,

      -Andy

  3. Thanks a lot for your reply. My covariates are psychosis scores, such as anxiety and visual scores. The result showed that the thalamic connectivity contrast in ketamine drug users was significantly negatively correlated with the psychotic scores. I just don't quite understand what the brain regions highlighted in bright orange stand for. Is it increased/decreased activation or connectivity of those brain regions? Thank you

    Replies
    1. Hi there,

      Just to clarify, what exactly is the thalamic connectivity contrast? Is it a seed region placed in the thalamus that takes the average timecourse for that region, and then correlates with the rest of the brain?

      In that case, the most you could say about any significant clusters is that they show a negative correlation between the psychosis scores and those voxels that have a positive timeseries correlation with your seed region. It would be difficult to make any other inferences, and it doesn't speak to increased or decreased connectivity.

      In any case, I would need more information about the experiment, and particularly about what you are trying to find with this analysis, before I can give any further advice.

      -Andy

  4. Well, the study is about chronic ketamine users and controls. Data were preprocessed for all subjects using SPM and a DARTEL template. Then I used the REST toolbox's "Voxel wise" button to calculate the linear correlation between a reference time course (from an ROI) and the time course of each voxel within a predefined mask, which generated the FC map. The individual z-maps were created for seed regions in the left and right thalamus, and were used for a second-level group analysis in SPM, a multiple regression test.

    The multiple regression analysis showed that thalamic connectivity was negatively correlated with psychosis scores in ketamine users.

    My hypothesis is the following: we hypothesize that chronic ketamine users will demonstrate reduced functional connectivity between the thalamus and the anterior cingulate cortex, and that the magnitude of this disruption will be significantly correlated with psychotic symptoms.

    Does the result support the hypothesis?


    And thanks for the feedback!

    Replies
    1. Hi,

      Apologies for getting back to you so late! Yes, I would say that those results support your hypothesis, but I would be careful when making any interpretations about the connectivity. What you stated in your last paragraph was correct, but this does not necessarily speak to any causal effects of psychosis on connectivity (although it is a good first step).

      The clusters that are significant for your multiple regression analysis are just regions that show a significant correlation between the psychosis scores and the r-values output by the functional connectivity analysis. This correlation could be due to a number of things; it would take a more stringent experimental design to attribute any changes in functional connectivity as caused by psychosis, such as psychophysiological interactions or DCM.

      Best,

      -Andy

  5. Furthermore, the multiple regression test showed that the relationship between thalamic connectivity and the scores was significantly (negatively) correlated, and there were regions highlighted (activated) in the brain, such as the limbic lobe and middle temporal gyrus. I am just confused: do these activated regions mean there is a disruption in (reduced) connectivity, or do the results not speak to connectivity at all?

  6. Hello Andy,

    Forgive me for asking a question again - I am setting up a multiple regression in SPM and just can't seem to get my head around how to set the model up correctly.

    I am interested in how the mean intensity score of the scans correlates with one covariate A, but I want to set up my design in such a way to take into consideration/ regress out covariate B and covariate C.
    I understand that I would enter a weight of one for covariate A but what about covariate B and C - if I leave them as zero will they just be ignored? Should I enter in a negative weighting?

    Many thanks,

    Fiona

    Replies
    1. Hi Fiona,

      If you just enter a weight of 1 for covariate A, you will test just for the effect of A, controlling for everything else. Entering weights of 1 for A and -1 for B, for example, leaves you with a contrast between the two; this would test for any significant difference in the amount of variance explained by the covariates. Generally that is discouraged, unless you have a compelling reason for taking the difference between them, and can explain that difference as the result of changing one dimension.
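As a quick numeric sketch of how those weights act on the parameter estimates (the beta values below are made up for illustration):

```python
import numpy as np

# Hypothetical second-level parameter estimates at one voxel:
# [covariate A, covariate B, covariate C, constant]
beta = np.array([1.2, 0.4, -0.3, 5.0])

# Weight of 1 for A, 0 elsewhere: tests the effect of A alone;
# B and C are controlled for simply by being in the model.
c_effect_of_A = np.array([1, 0, 0, 0])

# Weights of 1 and -1: tests the DIFFERENCE between A and B.
c_A_minus_B = np.array([1, -1, 0, 0])

print(c_effect_of_A @ beta)   # effect of A alone
print(c_A_minus_B @ beta)     # difference A minus B (here ~0.8)
```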


      Hope this helps!

      -Andy

  7. Hi Andrew,

    I wonder if you know how to find r-values from multiple regression analyses in SPM? I have gotten as far as being able to plot the adjusted response at my chosen voxel against my explanatory variable (scores on a questionnaire) using the SPM GUI.

    Also, if there isn't a way to do this in SPM, would you happen to know if there is a way to extract the x and y values from the plot to run the analysis in SPSS, for example? I know how to extract the y values from the workspace, but don't know about the x values.

    Thank you,
    From an SPM novice!

    Replies
    1. Hello,

      Unfortunately, I don't know of a way to extract r-values from multiple regression in SPM; that statistic doesn't appear to be stored anywhere, and I'm unsure of how to get at it.

      When you talk about x-values, are you referring to a first-level or second-level analysis? If it is a second-level analysis, then those values should be equivalent to the scans that went into your design, which is usually one per subject.

      -Andy

    2. Hi,

      Thank you for your reply, I managed to find the relevant values in the matlab workspace and used them in a correlational analysis in SPSS to get r-values that way. :)

      Best wishes,

    3. Hello there,

      I have the same question as you. I did manage to correlate certain contrasts with questionnaire scores, and I get some significant results. In some publications I see scatterplots showing the questionnaire scores against some other values somehow extracted from SPM. I wonder where to get those values, and whether they are eigenvariates from the SPM GUI (my nearest guess). Since you figured out the problem, would you be so kind as to share where and how you got those values in order to get the r-values?

      kind regards

  8. Hello Andy,
    Thank you for your tutorials; they are very helpful.
    I have a question regarding the con0001 files used here for each subject. I didn't understand how you obtained them from the first-level analysis.
    On the other hand, if you include motion as a multiple regressor during the first-level analysis, are the beta images resulting from this analysis motion-cleaned?

    Thank you

    David

    Replies
    1. Hey David,

      The con files refer to contrasts that were built at the first level; here, they can be a surrogate for any contrasts that you created.

      As for your second question, yes, if you include motion regressors as "raw" regressors in your first-level analysis, they will tend to soak up any variance associated with motion.

      -Andy

  9. Hi Andy,

    I'm doing a regression analysis on my data using SPM. I have 2 regressors in my analysis. SPM generates T-score maps after the analysis for each regressor at each voxel for the first-level analysis. Is there any procedure to convert these T-score maps to Z-score maps, or any other way to directly generate Z-score maps for a regression analysis using SPM? My goal is to generate Z-score maps.

    Thank you,
    From an SPM beginner

    Replies
    1. What's up player,

      I'll be creating a tutorial about just that issue in a couple of days, so stay tuned; I just have to get all the code together, and make sure it works before posting it.

      -Andy

    2. Thank you for the reply. I found out that we can use spm_t2z to convert t-scores to z-scores. However, for a T score greater than 6, the program extrapolates to generate the Z score. This might underestimate the Z score. Do you know of any script that could estimate the Z score accurately?
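For what it's worth, one numerically stable alternative is to map t to z through the tail probability, using the survival function rather than 1 - CDF; a hedged sketch in Python with SciPy (this is not how spm_t2z works internally):

```python
from scipy import stats

def t_to_z(t_val, df):
    """Convert a t-statistic to a z-score with the same tail probability.

    Using the survival function (sf) and its inverse (isf) avoids the
    precision loss you would get from computing 1 - cdf at large t values.
    """
    p_tail = stats.t.sf(abs(t_val), df)   # upper-tail probability of |t|
    z = stats.norm.isf(p_tail)            # z with the same tail probability
    return z if t_val >= 0 else -z

print(t_to_z(6.48, 18))   # a t well above 6 no longer needs extrapolation
print(t_to_z(-2.1, 18))   # sign is preserved
```

Because the t distribution has heavier tails than the normal, the resulting z will be somewhat smaller than the t it came from.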

  10. Hi Andy,
    Congratulations on the new job! I have been watching your videos for several years now and look forward to future tutorials. We are in the midst of a discussion in which we are trying to determine when it's best to use the multiple regression module versus a one-sample t-test design (as you point out in your video). We have been using the one-sample t-test, but we may not be using it correctly; specifically, how to code the group (constant) variable, the predictor variable, and the covariates (questionnaire data, etc.). For example, our predictor variable is age, and we have a few variables we would like to control for (depression, anxiety). Using the one-sample t-test for simple regression, we think we would code the contrast 1 1 0 0 to examine where increasing age is associated with brain activation after controlling for depression and anxiety, and -1 1 0 0 to examine where increasing age is associated with deactivation after controlling for depression and anxiety. Is that right?
    Thanks for all you do!

    Replies
    1. Hey, thank you! It's been a busy couple of weeks, but I am enjoying my new position here very much.

      As for your question: Do the columns you're talking about represent age, group, depression, and anxiety, from left to right? I wouldn't include group as a covariate; if you're examining effects that depend on which group someone is assigned to, a flexible factorial design might be more appropriate.

      However, if you are just testing for age and how it correlates with brain activity, then a contrast like 1 0 0 0 and -1 0 0 0 would be appropriate, and the interpretations you spelled out above would be valid.

      -Andy

  11. Hello Andy,
    Thank you for your tutorials.

    I'm a beginner with SPM. I learned that an uncorrected p-value of 0.001 is typically used when analyzing brain images with SPM.
    I wonder why an uncorrected p-value of 0.05 was used in the 2nd-level covariates tutorial. Which is the correct p-value: FWE p = 0.05, uncorrected p = 0.05, or uncorrected p = 0.001?



  12. Hi

    Your videos are really useful.

    I am undertaking a study looking at movement of a patient group who suffer from shoulder dislocation.

    They undertake two movements in the scanner, forward flexion and abduction. I have a group of controls undertaking the same movements. Outside the scanner they complete questionnaires which produce a shoulder instability score: a high score is normal, and a low score shows instability.

    I have the following setup (patient forward flexion, control forward flexion, patient abduction, control abduction, patient rest, control rest, instability score). How do I ensure that the score variability is taken into account when looking at the difference in movement between the two groups (i.e., 1 0 -1 0 0 0 1), or is the covariate taken into account in any event?

    Thanks
    Anthony

    Replies
    1. Hey Anthony,

      If you input a set of covariates along with your 2nd-level model, then those should be accounted for.

      Also, I would double-check your contrast vector and scale the positive values so that they sum to 1 (e.g., 0.5 0 -1 0 0 0 0.5), to avoid any scaling issues when looking at the results later.


      Best,

      -Andy

  13. hi Andrew,

    Thanks a lot for your videos. They are truly useful.
    I have a question with respect to multiple regression analysis in SPM. Do you usually test the assumption of non-existence of outliers in your analyses? If so, how do you check whether a particular subject is in fact an outlier? Do you use the residual images?

    Thanks a lot
    Pedro

    Replies
    1. Hey Pedro,

      Usually I test that when extracting parameter estimates or whatever else I'm doing for a t-test at the second level. If someone looks like they're more than, say, two or three standard deviations away from everyone else, then they usually warrant a second look.


      -Andy

  14. Hi Andrew, we have three predictors and we are trying to predict brain function during a face-processing task. We have Diagnostic Group and Negative Emotionality, and we would like to test them as well as their interaction, Diagnostic Group * Negative Emotionality. Could you guide us as to what weights or contrast values to enter in the contrast menu so that they are tested? In a full factorial we normally enter values that add to 0, and distribute the sign and size of the weights depending on which hypothesis we are trying to test following up a significant omnibus F test; but in the regression contrasts menu this is trickier to do. Any advice?
    Karina Quevedo UofM

    Replies
    1. Hi Karina,

      Is Diagnostic Group a categorical variable, or a continuous variable? Also, are there different levels for Diagnostic Group and Negative Emotionality?

      -Andy

  15. Hi, Andy!

    Thank you very much for all you do.

    In my analysis, I have two groups (men and women), and I have already run the 2nd level in general, for all the subjects. However, now I would like to differentiate between the men and the women, to know the areas meaningfully activated in each group, and whether there is a difference between them.

    I am a little bit confused,

    How could I do it?

    THANK YOU VERY MUCH IN ADVANCE.

    Luis.

    Replies
    1. Hey Luis,

    1. If you want to know the difference between them, you would have to run an unpaired two-sample t-test, assigning men to group 1 and women to group 2. For individual group effects, try a one-sample t-test for group 1, then another for group 2. Give that a spin.


      Best,

      -Andy

  16. Dear Andrew,

    I have a question concerning the use of regressors in SPM. My goal is to analyze a process which may describe brain activity in some regions. I enter the values of this process as regressors in the batch editor (with one value for each scan). As the regressors are not convolved with the HRF, I shift the regressor by about 4.5 seconds. Now, I also want to allow for small subject- and voxel-specific shifts in the delay. When applying the HRF, this is done via time and dispersion derivatives; as my process regressor is not convolved with the HRF, I cannot use this possibility. Do you know of any method I could use to incorporate this kind of variability across subjects and voxels into my process regressor design?

    Many thanks in advance!

    Benjamin

    Replies
    1. Hi Benjamin,

      I am unaware of any methods to capture time and dispersion effects in a regressor that is not convolved with the HRF; you might try looking within the SPM source code to see if you can apply it to your regressor.

      I have one comment about your analysis: You mention that you are shifting the regressor 4.5 seconds; I'm assuming that this is because the process you are analyzing elicits a similar shape as the HRF, and that you are trying to capture the peak of the signal of the process. If that is the case, consider convolving the entire time series of your regressor with the HRF, which you can get from an individual subject's SPM.xBF.bf field. This will be a more accurate model of how your process is represented by the BOLD signal.
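As a rough sketch of that convolution step (Python, with a generic double-gamma approximation to the HRF rather than the one stored in SPM.xBF.bf; the TR, scan count, and gamma parameters are assumptions):

```python
import numpy as np
from scipy.stats import gamma

tr, n_scans = 2.0, 200
t = np.arange(0, 32, tr)

# Generic double-gamma HRF: a peak around 5 s minus a smaller undershoot.
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)
hrf /= hrf.max()

# The raw process regressor, one value per scan (random stand-in here).
process = np.random.default_rng(0).standard_normal(n_scans)

# Convolve with the HRF and truncate to the scan length,
# instead of applying a fixed 4.5-second shift.
process_convolved = np.convolve(process, hrf)[:n_scans]
```

The convolved series then goes into the design matrix in place of the shifted regressor.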


      Best,

      -Andy

  17. Hi Andy,

    Your blog is excellent and very useful. I appreciate all the videos you've made!

    How do I set up an interaction between two continuous variables at the second level?

    I have a group of 100 participants, I've got con images for each (seed-based functional correlation maps) and I have two continuous variables.
    I want to do a regression analysis and examine the effect of variable X while controlling for Y, the effect of variable Y while controlling for X, and the interaction between the two.

    Cheers,
    Linden

    Replies
    1. Hi Linden,

      Both X and Y are already in your model, so a test for each individual variable will already assume that the other variable is controlled for.

      As for the interaction, I don't know of a way to do that through the SPM menu. Consider this approach: Calculate parameter estimates for the continuous variables for each subject, and then extract those estimates from an ROI. You can then use a package like R to test for an interaction between those estimates.


      Best,

      -Andy

  18. Hi. The blog is fantastic and the videos really helpful!

    1. I have a question regarding contrasts. I have just one group at baseline, and I want to do a regression analysis and predict GM volume from two different continuous variables (X & Y), first independently and then in combination, while controlling for 3 covariates, entered in SPM as follows: brains X Y cov1 cov2 cov3.
    To examine the effect of variable X while controlling for Y and the other three covariates, the contrast should be 0 1 0 0 0 0.
    To examine the effect of variable Y while controlling for X and the other three covariates, the contrast should be 0 0 1 0 0 0.
    And now, to examine the effect of the combination of both variables X and Y while controlling for the 3 covariates, does this contrast make sense: 0 1 1 0 0 0?
    2. Another question, regarding the interaction: what if I would like to test the interaction between sex and regressor X in relation to the brain? Can I do it with a full factorial, including a factor (i.e., sex) with two cells (level 1, brains of girls; level 2, brains of boys), and then the regressors and covariates? For regressor X, in which I am interested in testing the interaction with sex, I would mark "interaction: with factor 1", and then create the contrast 0 1 0 (1 = boys) or 1 0 0 (1 = girls).
    3. Last question, about ANCOVAs using a full factorial. Similar to the previous example, but controlling for 3 covariates and with one regressor X: to test differences between boys and girls, should I mark "ANCOVA: yes", and then use the contrasts 1 -1 0 0 0 0 and -1 1 0 0 0 0?
    I am a bit confused because I am used to doing analyses in SPSS, so for the interaction I would like to test something like an interaction term between sex and regressor X.
    And for ANCOVAs, I would like to run an ANCOVA to test differences in GM volume between 2 groups (risk and non-risk, created using regressor X) while controlling for covariates.

    THANK YOU VERY MUCH IN ADVANCE.

    Ire

  19. Hello Andy,

    First time caller, long time listener...
    I would like to contrast Z-maps of seed-based resting state connectivity between two groups. I would also like to determine whether a covariate of interest [covar A] is positively correlated with any of the regions that emerged in the contrast - while controlling for covariate B.

    I 'think' this requires a two-sample t-test with covariates option in an SPM 2nd level analysis and the design matrix would be something like [1 -1 1 0] [group1, group2, covarA, covarB]. Is this correct? I was also contemplating a similar approach as the previous caller (Ire) using a full factorial so looking forward to your response.

    Many thanks.

    CereBro

    Replies
    1. Hi CereBro (awesome name by the way),

      You're on the right track, but I would make one change to your model: split covarA into two regressors, one for group1, and one for group2. Let's call these new regressors covarA1 and covarA2. Only include the covariates for group1 in the regressor covarA1, and the covariates for group2 in covarA2. Then you can do a contrast of [0 0 1 -1 0] to test for differences in covarA between the groups.

      Setting up your model this way will control for differences in activity between the groups and for covarB; this allows you to directly compare how covarA differs between the groups, which I think is the question you're trying to answer. The contrast you listed, on the other hand, will test for a correlation between the covariate and the parameter estimates at each voxel, estimate a parameter for that correlation, and then subtract the difference in Z-scores between the groups - which isn't the same thing. You can test this by doing both models and seeing the difference.
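A sketch of what that split design matrix could look like (Python/NumPy; the subject counts and random covariate values are made up for illustration):

```python
import numpy as np

n1, n2 = 12, 12                          # assumed subjects per group
rng = np.random.default_rng(0)
covA = rng.standard_normal(n1 + n2)      # covariate of interest
covB = rng.standard_normal(n1 + n2)      # nuisance covariate

# Group indicator columns
g1 = np.r_[np.ones(n1), np.zeros(n2)]
g2 = np.r_[np.zeros(n1), np.ones(n2)]

# Split covA into one regressor per group: zero outside that group
covA1 = covA * g1
covA2 = covA * g2

# Columns: [group1, group2, covA1, covA2, covB]
X = np.column_stack([g1, g2, covA1, covA2, covB])

# Contrast testing whether covA's slope differs between the groups
contrast = np.array([0, 0, 1, -1, 0])
```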


      Hope this helps!

      -Andy

    2. Thanks for the suggestion, Andy. I tried both ways and indeed the interaction effect worked when splitting the covariate of interest by group.

      Cheers!

  20. Hi Andy,

    Thank you so much for the nice blog. Reading previous comments, I was wondering how to interpret two regressors predicting volumetric brain changes while controlling for two more covariates in a one-group sample, i.e., the contrast 0 1 1 0 0.

    Thanks.
    Jo .

    Replies
    1. Hi Jo,

    1. That contrast will give you the sum of regressors 2 and 3. For example, looking at a given voxel, if the average for regressor 2 was 1.5 and the average of regressor 3 was 2.5, then that contrast would give you a value of 4.0 at that voxel. By including the covariates in your model (but setting them to zero in the contrast), you are controlling for any variance explained by the covariates.

      The way you set up your contrast depends on what question you're trying to answer. As I mentioned, your current contrast will give you the summed activity of regressors 2 and 3; if you wanted the average of those regressors, you would use the contrast vector [0 0.5 0.5 0 0].
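The arithmetic above as a quick sketch, using the example values from the text (the other betas are made up):

```python
import numpy as np

# Parameter estimates at one voxel: [regressor1, reg2, reg3, cov1, cov2]
beta = np.array([0.7, 1.5, 2.5, 0.2, -0.1])

sum_contrast = np.array([0, 1, 1, 0, 0])
avg_contrast = np.array([0, 0.5, 0.5, 0, 0])

print(sum_contrast @ beta)   # 4.0 -> sum of regressors 2 and 3
print(avg_contrast @ beta)   # 2.0 -> their average
```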

      Hope this helps!

      -Andy

  21. Thank you very much Andrew; yes, this was really helpful. Just wondering: if those contrasts give me the sum or the average of regressors 2 & 3, should these regressors be expressed in the same unit?
    Are these contrasts usually used?

    Thanks.

    Jo.

    Replies
    1. Hi Jo,

      Even though they are different regressors, I would label them with the same unit - "percent signal change" is usually accurate enough. And yes, I've seen those contrasts used, but I would look at both regressors individually to see what is driving the effect.


      Best,

      -Andy

  22. Hi. I want to specify the design matrix for a memory task where a subject goes through a series of encode epochs and probe epochs. Can you please help me specify these multiple conditions in SPM8?

    Replies
    1. I haven't made a separate tutorial just for multiple conditions, but see the code at the end of this post for creating multiple conditions (there are a lot of dependencies, but it might help): http://andysbrainblog.blogspot.com/2014/06/running-spm-first-level-analyses-from.html

      Also see the help for the "Multiple conditions" option in the SPM GUI. You'll need some Matlab background to create the structures and cell arrays, but once you can do it, it saves a lot of time compared to typing in everything one condition at a time. Maybe one of these days I'll make a tutorial on it.

      -Andy

  23. Hello Andy,

    Thank you very much for your video.

    I am now using SPM to do a multiple regression analysis. After the analysis using a covariate, the result map showed orange regions which include my region of interest. However, when I extract the peak T value from this region for each subject and correlate these values with my covariate, I can't find any significant correlation. This really confuses me. Could you please help me with this question?
    Thank you very much.

    Replies
    1. And also I don't know how to handle the negative t values.

    2. Hey Shirley,

      Are you extracting the beta values for each subject that went into the GLM? Try extracting them with the command spm_get_data, described here: http://andysbrainblog.blogspot.com/2014/07/quick-and-efficient-roi-analysis-using.html

      Then when you correlate those values, make sure they match up with the covariates you specified in the model.

      Also, note that extracting around the peak t-value is a biased analysis. This is fine if you just want to see what is going on in the peak region, but I wouldn't report it as a result.

      -Andy

      P.S. Negative t-values show where the correlation is negative, i.e. where the contrast values that went into your model correlate negatively with your covariates.

    3. Thank you for your reply.
      One more question: what do the values in a contrast map mean?
      T values? And what about two different tasks?
      Thank you very much.

      Best regards
      Shirley

    4. Hi Shirley,

      Imagine at a single voxel that you subtract one parameter estimate from another parameter estimate. For example, if at this voxel parameter A is estimated to be 1.5, and parameter B is estimated to be 1.0, then the contrast at that voxel will be (1.5-1.0) = 0.5.

      Now do that for every voxel in the brain, and you have a contrast map. That's what SPM does when you assign weights to regressors, such as +1 and -1, which then subtracts one from the other.

      Note that you can have a contrast image that isn't a contrast between different regressors. For example, you can calculate a simple effect by assigning a weight of +1 to a regressor, which will create a con_* image, but will only reflect the parameter estimate for that regressor.


      Best,

      -Andy

  24. Hi Andy,

    Thanks for all of your helpful blog posts and videos! Apologies if you've answered this somewhere on your site and I just missed it in my search but what exactly is different about the 2 locations you can enter covariates in a multiple regression analysis in SPM?

    I'm doing a second level analysis and want to see if a blood measure correlates with activation during a particular event. When I run the analysis with the blood level in the 1st covariate spot (underneath "scans"), I get a completely different result than when I run the analysis with the blood level as a covariate in the second spot (following the "Design" main heading). I'm not sure about the difference between the two, conceptually, and why it changes the result.

    Hope this makes sense. Thanks for your help!
    Val

    Replies
    1. Hi Val,

      By the different spots, do you mean the Covariates menu, vs. the Multiple Covariates menu? Multiple Covariates requires a matrix of covariates, which may be different from the approach you're doing.

      -Andy

  26. Hi Andrew,

    Thank you for the video. When I inspect the second-level statistics of a particular covariate (after using the appropriate contrast), SPM returns a T statistic and its p-value, but I am not sure how these relate to the correlation between my predictor and the 1st-level contrasts (something related was asked before) or to the variance explained by this covariate. Would you know how to interpret the T value, and is it a meaningful statistic to report?

    Thanks and kind regards
    Matthias

    Replies
    1. Hi Matthias,

      My understanding is that SPM converts the correlation into a t-statistic. You can test this yourself by selecting a single voxel, extracting the data from that voxel, running the correlation, and converting the correlation to a t-statistic to see how it matches up with what SPM reports.
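The conversion itself is the standard one, t = r * sqrt(df) / sqrt(1 - r^2) with df = n - 2; a quick sketch (the example values are illustrative):

```python
import math

def r_to_t(r, df):
    """t-statistic corresponding to a Pearson correlation r, with df = n - 2."""
    return r * math.sqrt(df) / math.sqrt(1 - r ** 2)

def t_to_r(t, df):
    """Inverse: the correlation implied by a t-statistic."""
    return t / math.sqrt(t ** 2 + df)

# For example, with 20 subjects (df = 18):
print(round(r_to_t(0.836, 18), 2))   # ~6.46
print(round(t_to_r(6.48, 18), 3))    # ~0.837
```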

      Best,

      -Andy

    2. Hi Andrew,

      Thanks for your quick response. You were exactly right. As for my steps: in the SPM graphics window (inspecting the 2nd-level results) I extracted the raw data for my favourite voxel (as a quick alternative to spm_get_data) and correlated (corrcoef) the new variable y in the workspace with the empirical covariate I had originally entered into the second-level design. With some online tool (googled) I converted the correlation coefficient to a t value, which matched the peak-level T value from SPM to the second decimal place. Nice.
      I would have assumed the p-value (peak-level uncorrected) would also match, but it was somewhat off (10e-4 vs. 10e-6); that's negligible for my case, I guess.

      As a little side note: with my 20 subjects, the peak-level T value of 6.48 for this covariate reached FWE-corrected significance (p < .05) only after small volume correction using the entire frontal lobe, not whole-brain, although t(18) = 6.48 converts to r = 0.836. I must say any correlation of r > .8 is suspiciously high, given that both my variables are empirical. So I assume that whole-brain significance could only be reached by a shaky combination of the true effect and considerable capitalization on chance, at least with my low number of 20 subjects.

      Kind regards and thanks again
      Matthias

      Delete
    3. Hi Matthias,

      That's good to hear that the t-values match up; that was always my hunch, but I never tested it.

      As for your other note, it's not that surprising to find a very high test statistic in your peak voxel. Anything extracted out of a peak voxel that passes a significance threshold, by definition, will be very high. This is known as circular analysis, or double-dipping. It can be interesting to see what is going on in your peak voxel, but I wouldn't report it as a result, since it was a biased selection.

      If you're only looking at one voxel, then it is entirely possible that a t-value of 6.48 wouldn't pass correction. Keep in mind that FWE corrects for every possible test in the brain, which for a mass univariate test is the same as the number of voxels in your second-level mask. Have you tried looking at the cluster corrected results as well? My guess is that your cluster would easily pass whole-brain significance, as there are fewer assumptions about voxels being entirely independent from one another.

      Best,

      -Andy

      Delete
    4. Hi,

      I understand the circularity issue, and at least for my data this means that a significant voxel (peak-level) could probably never arise from the true effect (where r ~ .3 is more likely than r > .8), because of the strict correction. So reporting peak-level statistics is a bit weird, a logical fallacy even: the result only becomes significant if the effect size is implausibly large (like r > .8).

      As for cluster-corrected results, it's true that one of my three clusters is fairly extended and therefore highly significant under whole-brain FWE, but I am always reluctant to report this statistic, because I would interpret it as: "There is a cluster of voxels, none of which is individually significant, but it forms a blob, so it's something, I guess. A blob of almost nothing. But so much almost-nothing in one spot was unlikely to be random, so I report it." :)
      Also, I am looking at effects that are quite narrowly localized, so from my point of view (and hypothesis) the actual size of the cluster is not very informative about what's going on. And for cluster correction, the cluster volume dictates significance.

      Thank you for all your suggestions and videos. I will keep following your blog :)

      Kind regards
      Matthias

      Delete
  27. Hi Andy,
    Thank you for the helpful videos! You have become my go-to resource for FSL and SPM help. I'm setting up a 1st-level model for a seed-based functional connectivity analysis. I have it set up as a multiple regression with both the time course data and the motion parameters entered into the model. My intent is for the time course data to be the predictor and the motion parameters to be covariates of no interest. If my time course data (from my seed region) is entered first, will the output beta 1 file be corrected for the motion parameters? In other words, can I just use the beta1 output moving on to Level 2? Thank you so much!
    Anne

    ReplyDelete
    Replies
    1. Hi Anne,

      Yes, that should be it. If you want to double-check, go into the directory where your 1st-level SPM.mat files are stored, and type:

      SPM.Vbeta.descrip

      This will list each of your betas and the regressor each one corresponds to.

      Best,

      -Andy

      Delete
    2. Thanks Andy! Do you happen to know if it matters what order the regressors are entered into the model?

      Delete
    3. I don't believe the order of the regressors matters, unless you're looking at basis functions or parametric modulators attached to a regressor at the same onset (see line 283 of spm_fMRI_design).

      If you want to verify this yourself, look at the resulting parametric maps after reordering your regressors.

      -Andy

      Delete
  28. Hi Andy,

    Thank you for this video! If I want to take age as a covariate, I will get to see linear effects of age only. Is it possible to look at polynomial (or even best fit) effects of age? I think adding age² should cover quadratic effects. But I don't know how to find any more complex polynomial effects. Do you have any idea whether this is possible?

    Kind regards,
    Jen

    ReplyDelete
    Replies
    1. Hi Jen,

      For more complex polynomial effects, can you add more polynomials? E.g., age^3, age^4, and so on. I'm not positive about this, but that seems the most straightforward solution to me.
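
      A minimal sketch of building those covariate columns (a hypothetical helper, not SPM code). Mean-centering age before raising it to powers keeps the linear and higher-order terms from being strongly correlated:

```python
import numpy as np

def polynomial_covariates(age, degree=3):
    """Return mean-centered polynomial covariates: one column per power
    of (age - mean(age)), from 1 up to `degree`."""
    age = np.asarray(age, dtype=float)
    centered = age - age.mean()
    return np.column_stack([centered ** d for d in range(1, degree + 1)])
```

      Each column would then be entered as a separate covariate in the second-level design and tested with its own contrast.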

      -Andy

      Delete
  29. Hi Andy,
    My task is a 1×4 within-subject design, with the visual stimulus (a memory task) varying across four conditions. I already have 4 contrast images per subject from the 1st-level analysis. I also measured each subject's average accuracy for each condition, as well as each subject's age.
    I want to find the brain regions correlated with the subjects' accuracy; how can I set up the 2nd-level analysis?

    Thanks
    Ping

    ReplyDelete
    Replies
    1. Hi Ping,

      In the "Specify 2nd-Level" GUI, select Covariate, add new, and then enter the mean accuracy for each subject as a vector. (Make sure that the accuracy values match up with their corresponding fMRI images.) When you set up a contrast, weight that accuracy column as "1", and apply a threshold. The resulting map will show the correlation between that regressor and brain activity.
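
      Conceptually, weighting that column "1" tests the slope of the contrast values against accuracy across subjects, which for a single covariate is equivalent to testing the correlation. A toy sketch with simulated numbers (illustrative Python, not SPM code):

```python
import numpy as np

rng = np.random.default_rng(1)
accuracy = rng.uniform(0.5, 1.0, size=12)                # one accuracy value per subject
voxel = 2.0 * accuracy + rng.normal(scale=0.1, size=12)  # simulated contrast estimates at one voxel

# Second-level design: intercept plus the mean-centered accuracy covariate
X = np.column_stack([np.ones(12), accuracy - accuracy.mean()])
beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
# beta[1] is the slope that the contrast [0 1] tests
```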


      Best,

      -Andy

      Delete
  30. Hello Andy,
    This blog is very useful.
    Here's a question.
    How do I know if the model containing both regressors (e.g., age and anxiety) is a good one?
    Instead of knowing the correlation between the main condition and each regressor (age (1 0) OR anxiety (0 1)), I want to know whether the two together contribute substantially to the signal changes. Can I do that?

    ReplyDelete
  31. Hello Andy,

    I am now running a multiple regression analysis, but I found that three of my variables are slightly correlated with each other (p = 0.089, p = 0.075). Do you think I need to orthogonalize them? If so, how? Could I use spm_orth in a script?
    Thank you very much.
    Thank you very much.

    Best regards
    Shirley

    ReplyDelete
    Replies
    1. Hi Shirley,

      My understanding is that correlations above 0.40 or so can make your parameter estimates more variable and less powerful; correlations greater than 0.80 start to point to collinearity, although in some cases the individual parameters can still be estimated.

      I think you're fine with what you have. I would be careful about orthogonalizing; usually it doesn't do what people think it does. See this paper: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0126255
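
      If you want to check this yourself, pairwise correlations and variance inflation factors (VIFs) are easy to compute outside SPM. A rough sketch using the standard formula VIF_j = 1 / (1 - R²_j), where R²_j comes from regressing covariate j on the others:

```python
import numpy as np

def covariate_diagnostics(X):
    """Return the pairwise correlation matrix and the variance inflation
    factors (VIFs) for a set of covariates (one column each)."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    corr = np.corrcoef(X, rowvar=False)
    vifs = np.empty(k)
    for j in range(k):
        y = X[:, j]
        # Regress column j on the remaining columns plus an intercept
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - resid.var() / y.var()
        vifs[j] = 1.0 / (1.0 - r2)
    return corr, vifs
```

      As a common rule of thumb, VIFs well above 5-10 suggest the parameter estimates will be unstable.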


      Best,

      -Andy

      Delete
    2. Hi Andrew!
      So we're looking to run a second level one-sample t test with covariates...
      We have previously found a correlation between ROI_1 activation for a cognitive reappraisal task and a 'change in symptom' score covariate in SPM.
      We wanted to see if another covariate (extracted ROI_2 activation to an emotion processing task) was moderating this result, and so we ran the following test:

      A one sample t-test, where we input our scans for our cognitive reappraisal task, then input covariate 1 as the 'change in symptom' scores, covariate 2 as the extracted activation values, and then a third covariate (which would be mean centred cov1 * mean centred cov2) to determine if there was an interaction between the two covariates.
      When running through SPM, we examined the interaction term [0 0 0 -1], which revealed a significant result, with less activation in our ROI_1. We then ran the contrast [0 1 0 0], which turned up a significant result (increased activation in ROI_1 correlated with 'change in symptom' scores), and [0 0 -1 0], which revealed an (almost) significant result (decreased activation in ROI_1 correlated with activation to emotional faces in ROI_2). Is this appropriate to determine that cov2 is moderating the effect of cov1? If not, what is a more appropriate method?
      Many thanks for your help, in advance!
      Cheers,
      May

      Delete
    3. Hello Andy,
      thank you for your always useful advice.

      Still about collinearity issues: When you mentioned that correlations < 0.4 are ok-ish, and > 0.8 point to collinearity issues, do you have a reference in mind for these values?

      Thank you very much,
      Giuseppe

      Delete
    4. Hi Giuseppe, those are based on conversations I had with the AFNI developers; I don't know if there's a reference for it. All I know is that AFNI will start to throw errors when the correlation is above a certain threshold, which is around 0.8. I would also say that medium-sized correlations can be problematic, which can make your parameter estimates more variable. In any case, it's difficult to avoid any correlation at all, and you'll have to make tradeoffs with your design to both reduce correlations but make the experiment tolerable for the subject.

      -Andy

      Delete
  32. Hi Andrew!
    Just realised I posted my question in response to another person's question, so just copying and pasting it as an actual comment!

    So we're looking to run a second level one-sample t test with covariates...
    We have previously found a correlation between ROI_1 activation for a cognitive reappraisal task and a 'change in symptom' score covariate in SPM.
    We wanted to see if another covariate (extracted ROI_2 activation to an emotion processing task) was moderating this result, and so we ran the following test:

    A one sample t-test, where we input our scans for our cognitive reappraisal task, then input covariate 1 as the 'change in symptom' scores, covariate 2 as the extracted activation values, and then a third covariate (which would be mean centred cov1 * mean centred cov2) to determine if there was an interaction between the two covariates.
    When running through SPM, we examined the interaction term [0 0 0 -1], which revealed a significant result, with less activation in our ROI_1. We then ran the contrast [0 1 0 0], which turned up a significant result (increased activation in ROI_1 correlated with 'change in symptom' scores), and [0 0 -1 0], which revealed an (almost) significant result (decreased activation in ROI_1 correlated with activation to emotional faces in ROI_2). Is this appropriate to determine that cov2 is moderating the effect of cov1? If not, what is a more appropriate method?
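
    For reference, the mean-centered interaction term described above is just the elementwise product of the two centered covariates; a rough sketch (illustrative Python, with a hypothetical helper name):

```python
import numpy as np

def interaction_column(cov1, cov2):
    """Mean-center each covariate, then multiply elementwise to form
    the interaction regressor."""
    c1 = np.asarray(cov1, dtype=float)
    c2 = np.asarray(cov2, dtype=float)
    return (c1 - c1.mean()) * (c2 - c2.mean())
```

    Mean-centering first keeps the interaction column from being highly correlated with the two main-effect columns.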
    Many thanks for your help, in advance!
    Cheers,
    May

    ReplyDelete
  33. Hello Andy,

    In my research, I applied multiple regression to find the relationship between my parameters and BOLD signal changes. The threshold was set at p < 0.05, cluster size = 810 mm³. No stringent correction method could be used, because the voxels would not survive. Do you think we could publish the results in a journal without a correction method? If not, what could I do?

    ReplyDelete
  34. Hi Andy,
    we did this analysis entering a weight of 1 for covariate A (heart rate variability) and 0 for a childhood trauma score. The thing is how to interpret these results, because this regression gives a negative correlation between the SFG and heart rate variability, and we don't know whether to say that childhood trauma is influencing these results.

    Thanks so much for your help, we will really appreciate your feedback!

    Agustina

    ReplyDelete