Friday, August 8, 2014

Parametric Modulation with SPM: Why Order Matters

Those who are the youngest children in the family are keenly aware of the disadvantages of being last in the pecking order; the quality of toys, clothes, and nutrition is often best for the eldest children in the family, and gradually tapers off as you reach the last kids, who are usually left with what are called, in parent euphemisms, "hand-me-downs."

In reality, these are things that should have been discarded years ago for being broken, unsafe, laced with asbestos, or clearly worn out, such as pairs of underwear with holes so large that you're unsure of where exactly your legs go.

In SPM land, a similar thing happens to parametric modulators: later modulators get the dregs of the variance not claimed by the earlier ones. Think of variance as toys and cake, and parametric modulators as children; everybody wants the most variance, but the first modulator gets the lion's share of it, and the last modulators get whatever is left over.

This happens because, when the GLM is set up, SPM uses a command, spm_orth, to orthogonalize the modulators. The first modulator is left essentially untouched, but every modulator after it is orthogonalized with respect to the ones before it; this proceeds in a stepwise fashion, with the second modulator getting whatever variance was unassigned to the first, the third getting whatever was unassigned to the first two, and so on.

(You can see this for yourself with a quick simulation in Matlab. Type x1 = rand(10,1) and x2 = rand(10,1), then combine the two into a matrix with xMat = [x1 x2]. Type spm_orth(xMat), and notice that the first column is unchanged while the second is greatly altered. You can then flip it around by creating another matrix, xMatRev = [x2 x1], and running spm_orth again to see what effect the change has.)
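The same idea can be sketched outside of Matlab. Below is a minimal Python illustration of the serial orthogonalization that spm_orth performs; it is simplified (the real spm_orth also orthogonalizes against a constant term, which effectively de-means the columns), and the function names are my own:

```python
# A minimal sketch of serial orthogonalization, in the spirit of spm_orth.
# (Simplified: the real spm_orth also orthogonalizes against a constant
# term, which de-means the columns; function names here are my own.)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def serial_orth(cols):
    """Each column keeps only the variance left unexplained by the
    columns that precede it in the list."""
    out = []
    for col in cols:
        resid = list(col)
        for prev in out:
            beta = dot(resid, prev) / dot(prev, prev)  # regression weight
            resid = [r - beta * p for r, p in zip(resid, prev)]
        out.append(resid)
    return out

# Order matters: the first column is returned untouched, and reversing
# the input order changes what survives in the later columns.
x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [2.0, 3.0, 5.0, 7.0]
forward = serial_orth([x1, x2])   # x1 untouched, x2 residualized
backward = serial_orth([x2, x1])  # x2 untouched, x1 residualized
```

After the loop, each later column is orthogonal to everything before it, which is exactly why a modulator entered last can end up with almost no variance to call its own.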

Be aware of this, as it can dramatically alter your results depending on how your modulators are ordered. (If they were completely orthogonal by design, then spm_orth should have virtually no effect, but that scenario is rare.) This came to my attention when looking at a contrast of a parametric modulator, one of three assigned to a main regressor. In one GLM the modulator was last, and correspondingly explained almost no variance in the data; in another GLM the modulator was first, and loaded on much more of the brain. The following two figures show the difference, using exactly the same contrast and correction threshold, but simply reversing the order of the modulators:


Results window when parametric modulator was the last one in order.


Results window when parametric modulator was the first one in order.

This can be dealt with in a couple of ways:
  1. Comment out the spm_orth call in two scripts used in setting up the GLM: line 229 in spm_get_ons.m, and lines 285-287 in spm_fMRI_design.m. However, this will also affect anyone who actually does want the orthogonalization, and it increases the probability that you will screw something up. Also note that doing this means the modulators will no longer be de-meaned when entered into the GLM.
  2. Create your own modulators by de-meaning the modulator values, inserting each value at the TR where its event occurred, and then convolving this with an HRF (usually the one in SPM.xBF.bf). If needed, downsample the output to match the original TR.
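As a rough illustration of option 2, here is a Python sketch of those steps. All of the numbers, onsets, and the simplified gamma-shaped HRF are hypothetical; in practice you would use the basis function stored in SPM.xBF.bf:

```python
import math

# Hypothetical numbers throughout: 3 events, a 0.1 s microtime grid,
# TR = 2 s, 30 scans. In practice you would use the basis function
# stored in SPM.xBF.bf rather than the simplified gamma HRF below.
dt, tr, n_scans = 0.1, 2.0, 30
onsets = [4.0, 16.0, 30.0]   # event onsets in seconds (made up)
values = [2.0, 5.0, 8.0]     # parametric modulator values (made up)

# 1. De-mean the modulator values (spm_orth would otherwise do this)
mean_v = sum(values) / len(values)
demeaned = [v - mean_v for v in values]

# 2. Place each de-meaned value in a stick function on the fine grid
n_fine = int(n_scans * tr / dt)
stick = [0.0] * n_fine
for onset, v in zip(onsets, demeaned):
    stick[int(round(onset / dt))] = v

# 3. Convolve with a simplified gamma-shaped HRF (peak near 5 s)
hrf = [(t * dt) ** 5 * math.exp(-t * dt) for t in range(int(32 / dt))]
total = sum(hrf)
hrf = [h / total for h in hrf]
conv = [0.0] * n_fine
for t in range(n_fine):
    for k, h in enumerate(hrf):
        if t - k >= 0:
            conv[t] += stick[t - k] * h

# 4. Downsample back to one value per TR
regressor = [conv[int(i * tr / dt)] for i in range(n_scans)]
```

The resulting regressor can then be entered into the GLM as a user-specified regressor, bypassing SPM's own modulator machinery (and therefore spm_orth) entirely.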

It's worth being aware of all this, not only for your own contrasts, but also to evaluate how it's done in other papers. I recall reading one a long time ago - I can't remember the name - in which several different models were used, in particular models where the modulators were entered in different orders. Not surprisingly, the results were vastly different, although why this was the case wasn't clearly explained. Just something to keep in mind, as both a reader and a reviewer.

I should also point out that this orthogonalization does not occur in AFNI; the order of modulators has no influence on how they are estimated. A couple of simulations should help illustrate this:

Design matrices from AFNI with simulated data (one regressor, two parametric modulators). Left: PM1 entered first, PM2 second. Right: order reversed. The values are the same in both windows, just transposed to illustrate which one was first in the model.

More information on the topic can be found here, and detailed instructions on how to construct your own modulators can be found on Tor Wager's lab website.

19 comments:

  1. Hello,
    Can you set up a similar model with multiple parametric modulators in FSL? How would you do that?

    1. Hi there, I don't know if FSL supports that capability; as far as I know, you can only give it one parametric modulator.

      -Andy

    2. This comment has been removed by the author.

    3. Hey Andy, long-time fMRI (primarily FSL) person here. Been banging my head against the "parametric modulation in SPM vs FSL" question for the past few days - your blog has been helpful, as official documentation for either package is sorely lacking.

      A couple of things (some of which you already know): SPM does appear to mean-center weights for the PM regressor automatically, and you can specify multiple PMs for any "condition," but it does do serial orthogonalization, so beware of interpreting diffs between PM effects.

      On the other hand, FSL gives you a ton of flexibility - you could have a design matrix made entirely of parametric effects, if you wanted to (not that any reasonable person would recommend that, of course) - and you can specify as many PMs as you want, as well (to address the question above). And, awesomely, you can also totally customize the orthogonalization (e.g. PMa WRT the mean and PMb; PMb WRT the mean and PMa, etc.), with the default being that any shared variance across all model covariates gets dumped onto the error term.

      However, FSL definitively does NOT mean-center the event weights before constructing model timeseries, so you need to do that yourself. That last thing can really bite a person in the ass, if they are used to the way SPM sets up PM regressors (*cough*). Anyway, thanks for the helpful blog and for filling in what the manuals leave out! ~Laura

    4. Good summary, Laura. It's true that each package does slightly different things with their parametric modulators - but those differences can have huge effects on the results. I realized this when discussing a paper - I forget the name and the authors - that showed that the first PM was significant and the others weren't. It was the main point of their paper, but a misleading one, without controlling for how the other PMs were orthogonalized.

      I'll revisit this at some point with a direct comparison of the PMs between all three. When I do, I hope you don't mind if I paraphrase most of what you said.


      Best,

      -Andy

    5. Laura - see the Poldrack group paper I posted below - they directly compare SPM and FSL orthogonalization methods.

  2. Hello,
    Thanks for this explanation. I have another question:
    How should regressors be mean centered? I mean, do you take the mean from that subject? From that run? From the group?

    1. Hey there,

      Parametric modulators should be demeaned for each subject. Covariates for group-level analyses, on the other hand - such as age, IQ, or any other variable that has one value per subject - should be demeaned across the group. However, there are exceptions; see this page for more details: http://mumford.fmripower.org/mean_centering/
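A quick sketch of the two cases, with made-up numbers:

```python
# Hypothetical first-level modulator values (e.g. reaction times)
# for two subjects; all numbers are made up for illustration.
subj_a = [300.0, 350.0, 400.0]
subj_b = [500.0, 550.0, 600.0]

def demean(xs):
    """Subtract the mean so the values sum to zero."""
    m = sum(xs) / len(xs)
    return [x - m for x in xs]

# First level: de-mean within each subject separately
pm_a, pm_b = demean(subj_a), demean(subj_b)

# Group level: a covariate with one value per subject (e.g. age)
# is de-meaned across the whole group
ages = [21.0, 27.0, 24.0]
age_cov = demean(ages)
```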


      Best,

      -Andy

  3. Thank you again, Andrew! I ran into this not too long ago; this paper from the Poldrack group was very helpful as well: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0126255

    1. Thanks for the link, Omri! I had also come across that, and it's something everyone should know about. Using multiple modulators is complex, and researchers should understand how to interpret them - which depends on what software you use.

      -Andy

  4. I have a related question on parametric modulators

    Ideally I want to present PM2, but controlling for PM1, since the PM2 variable includes PM1. To do this we designed two PMs (leaving all other settings at their defaults, including orthogonalization), with PM1 as variable 1 and PM2 as variable 2. Now if I look at PM2, I should get effects purely due to PM2, correct?

    I am slightly confused, as I get very similar maps in both orders.

    Thanks

    1. Hey there,

      I'll need some more details: How does PM2 include PM1? What are the variables? Why would you expect them to share variance? It could be that there actually is very little variance shared between them, which is why the maps are similar in both cases.

      -Andy

  5. Hi, I've been trying to set up a batch job for spm_jobman incorporating parametric modulators.

    matlabbatch{i}.spm.stats.fmri_spec.sess(#).cond(#).pmod{#}

    pmod is a cell array holding pmod{1} etc. which are structs.

    Running the jobmanager it will tell me:
    "Can not initialise pmod value(s): job is not a struct."

    It's not easy to find where that check is performed in the SPM scripts, or what it really expects my pmod, or pmod{1} to look like.

    Any tips, or examples would be greatly appreciated.
    Thanks
    Ferdi

    1. Okay, figured it out, so for posterity:
      Make sure .pmod is a struct as well, even though some sites will tell you that it should be a cell array. So in short, .pmod should be a struct containing other structs.

    2. Hey Ferdi, I was just about to reply to this; glad to hear you figured it out!

      -Andy

  6. Hi Andy,
    two short questions about [Way 1], turning orthogonalisation off in SPM: if you demean the values, you do NOT have to convolve with the HRF? That only matters when you create the entire modulated regressor outside of SPM [Way 2]?

    Is there a way in SPM, as a check, to see whether the regressor has been convolved with the HRF, like FSL shows in its design?

    1. Hi Ferdi,

      Yes to both of your questions. Demeaning is a safety measure carried out by spm_orth, so if you turn off the orthogonalization, just make sure to demean the values you are parametrically modulating yourself.

      Also, I don't know of a simple method to check whether the regressor has been convolved with the HRF. I assume that any conditions you add to the design matrix (as opposed to "regressors," in SPM lingo) are convolved with the basis function you specify through the GUI. You could also run the model and then load up the SPM.mat file:

      load SPM
      SPM.xsDes.Basis_functions

      And see if it returns 'hrf', or whatever basis function you specified.

      Best,

      -Andy

  7. Hi Andy,
    Thanks so much for the super helpful blog and videos.
    I was wondering how I would go about comparing models with different pmods. I have 7 potential modulators and that obviously gives a huge range of potential combinations (and also order effects!). Is there a way to turn them on or off without needing to set up a separate first level analysis for each combination?

    Thanks,
    Maria

    1. Hi Maria,

      I don't know of a way to look at only one modulator or subset of modulators; you'll have to create a model that takes into account however many modulators you will be using.

      Best,

      -Andy
