Tuesday, October 2, 2012

FMRI Motion Correction: AFNI's 3dvolreg

I. Introduction

The fortress of FMRI is constantly besieged by enemies. Noisy data lead to difficulties in sifting the gold of signal from the flotsam of noise; ridiculous assumptions are made about blood flow patterns and how they relate to underlying neural activity; and signal is corrupted by motions of the head, whether due to agitation, the sudden and violent ejection of wind, or the attempt to free oneself from such a hideous, noisy, and unnatural environment.

This last besetting weakness is the root of much pain and suffering for neuroimagers. Consider that images are acquired on the order of seconds and strung together as a series of snapshots over a period of minutes. Consider also that we deal with puny, squirmy, weak-willed humans, unable to remain still as death for any duration. Finally, consider that head motion may occur at any time during the acquisition of our images - as though we were using a slow shutter speed to take a picture of a moving target.

Coregistration - the spatial alignment of images - attempts to correct these problems. (Note that the term coregistration encompasses both registration across modalities, such as T2-weighted images to a T1-weighted anatomical, and registration within a single modality. The latter is often referred to as motion correction.) For example, given a time series of T2-weighted images, coregistration will attempt to align all of those images to a reference image. This reference image can be any one of the individual functional images in the time series, although using the functional image acquired closest in time to the anatomical image can lead to better initial alignment. Once a reference image has been chosen, spatial deviations are calculated between the reference image and every other functional image in the time series, and each image is then shifted by the inverse of those deviations to bring it into alignment with the reference.

II. Rigid-body transformations

In what ways can images deviate from each other? Often we assume that images taken from the same subject can be realigned using rigid-body transformations: the size and shape of the registered images stay the same, and the images differ only in translations along the x-, y-, and z-axes and in three rotation angles (roll, pitch, and yaw). Each of these can be shown by a simple example. First, locate your head and prepare to move it. Ready?
  1. Fix your vacant stare upon an attractive person in front of you. This can be someone in either a classroom or a workplace setting. While you stare, keep your body still and only move your head to the left and right. This is moving along the x-axis.
  2. While the rest of your body remains immobile, again move your head - this time, directly forward and directly backward. This is moving along the y-axis.
  3. Keep staring. Now extend your neck directly upward, and compress it as you come downward. This is moving along the z-axis.
  4. Are you feeling that telluric connection with her yet? Perhaps these next few moves will get her to notice you. Nod your head vigorously back and forth in a "Yes" motion. This is called the pitch rotation, and will entice her to approach you.
  5. Now, send mixed signals by shaking your head "No". This is called the yaw rotation, and will both confuse her and heighten the sexual tension.
  6. Finally, do something completely different and roll your head to the side as though touching your ears to your shoulders. This is called the roll rotation, and will make her think you either have a rare movement disorder or are batshit insane. Now you are irresistible.
The correct execution of these moves can be found in the following video.
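The six head movements above are exactly the six parameters of a rigid-body transformation. As a rough sketch of the idea (illustrative Python, not AFNI's code; the function name and the axis conventions here are my own simplified choices, and AFNI's conventions may differ), the transform is three axis rotations followed by three translations:

```python
import math

def rigid_body_transform(point, dx, dy, dz, pitch, yaw, roll):
    """Apply a rigid-body transform to an (x, y, z) point: three
    rotations (in radians) followed by three translations.
    Distances between points are preserved -- only position and
    orientation change, which is what "rigid body" means."""
    x, y, z = point
    # Pitch: rotation about the x-axis (nodding "yes")
    cp, sp = math.cos(pitch), math.sin(pitch)
    y, z = cp * y - sp * z, sp * y + cp * z
    # Roll: rotation about the y-axis (ear toward shoulder)
    cr, sr = math.cos(roll), math.sin(roll)
    z, x = cr * z - sr * x, sr * z + cr * x
    # Yaw: rotation about the z-axis (shaking "no")
    cy, sy = math.cos(yaw), math.sin(yaw)
    x, y = cy * x - sy * y, sy * x + cy * y
    # Translations along the x, y, and z axes
    return (x + dx, y + dy, z + dz)
```

Motion correction amounts to estimating these six numbers for each volume and applying the inverse transform.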

III. 3dvolreg

3dvolreg, the AFNI command to perform motion correction, will estimate spatial deviations between the reference functional image and other functional images using each of the above movement parameters. The deviation for each image is calculated and output into a movement file which can then be used to censor (i.e., remove from the model) timepoints that contain too much motion.
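In practice, AFNI's 1d_tool.py is typically used to build a censor file from these motion estimates (see its -censor_motion option). As a conceptual sketch of the idea only (illustrative Python, with a threshold and data that are my own invented examples, not AFNI defaults), censoring flags timepoints whose motion estimates change too much from one volume to the next:

```python
def flag_high_motion(params, threshold=0.3):
    """Given one list of six motion estimates per volume (as in
    3dvolreg's -1Dfile output), return the indices of volumes whose
    Euclidean change from the previous volume exceeds `threshold`.
    These timepoints would be censored (removed from the model)."""
    flagged = []
    for i in range(1, len(params)):
        change = sum((a - b) ** 2
                     for a, b in zip(params[i], params[i - 1])) ** 0.5
        if change > threshold:
            flagged.append(i)
    return flagged

# Invented example: volume 2 jumps by 1 mm in one translation parameter
motion = [
    [0.00, 0.0, 0.0, 0.00, 0.0, 0.0],
    [0.01, 0.0, 0.0, 0.05, 0.0, 0.0],
    [0.01, 0.0, 0.0, 1.05, 0.0, 0.0],  # large jump -> flagged for censoring
    [0.01, 0.0, 0.0, 1.05, 0.0, 0.0],
]
```

Note that this naive sketch mixes rotation and translation units in one norm; it is meant only to convey the logic of thresholding volume-to-volume motion.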

A typical 3dvolreg command requires the following arguments:

  • -base (sub-brick): Use this sub-brick of the functional dataset as the reference volume.
  • -zpad (n): Pad each volume with n planes of zero-valued voxels prior to motion correction, then remove them afterward.
  • (Interpolation method): One of -linear, -cubic, -quintic, -heptic, or -Fourier; in general, higher-order interpolations are slower but produce better results.
  • -prefix (label): Label for the output dataset.
  • -1Dfile (label): Label for a text file containing the motion estimates for each volume.
  • -1Dmatrix_save (label): Label for a text file containing the matrix transformation from each volume to the reference volume. Can be used later with 3dAllineate to warp each functional volume to a standard space.
  • (input): Functional dataset to be motion-corrected.

Assume that we have already slice-time corrected a dataset, named r01.tshift+orig. Example command for motion correction:
3dvolreg -verbose -zpad 1 -base r01.tshift+orig'[164]' -heptic -prefix r01_MC -1Dfile r01_motion.1D -1Dmatrix_save mat.r01.1D r01.tshift+orig

After you have run motion correction, view the results in the AFNI GUI. (It is helpful to open two windows, one with the motion-corrected data and one with the uncorrected data.) Select the same voxel in each window and note that the values differ. Because the motion-corrected data have been slightly shifted away from the locations that were originally sampled, your chosen spatial interpolation method estimates the intensity at each new voxel location by sampling nearby voxels. Lower-order interpolation methods usually take a weighted average over the intensities of the immediately neighboring voxels, while higher-order interpolations use information from a wider range of nearby voxels. Assuming you have a relatively new machine running AFNI, 3dvolreg is wicked fast, so heptic or Fourier interpolation is recommended.
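To make the weighted-average idea concrete, here is a minimal one-dimensional sketch (illustrative Python, not AFNI's actual resampling code): the intensity at a shifted, off-grid position is estimated by weighting the two neighboring sampled values by their distance to that position. Higher-order methods do the same thing with more neighbors and more elaborate weights:

```python
def linear_interp(samples, x):
    """Estimate intensity at a (possibly fractional) position x from
    a 1-D list of sampled intensities, by weighting the two nearest
    sampled neighbors according to their distance from x."""
    lo = int(x)              # nearest sampled position at or below x
    if lo >= len(samples) - 1:
        return samples[-1]   # clamp at the edge of the sampled grid
    frac = x - lo            # how far x sits between the two neighbors
    return (1 - frac) * samples[lo] + frac * samples[lo + 1]
```

For example, a voxel shifted to position 0.5 between samples of intensity 0 and 10 gets the evenly weighted estimate 5.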

Last, AFNI's 1dplot can graph the movement parameters dumped into the .1D files. A special option passed to 1dplot, the -volreg option, will label each column in the .1D file with the appropriate movement label.

Example command:
1dplot -volreg -sepscl r01_motion.1D

IV. Potential Issues

Most realignment programs, including 3dvolreg, use an iterative process: small translations and rotations along the x-, y-, and z-axes are made until a minimum in the cost function is found. However, there is always the danger that this is a local minimum, not a global minimum. In other words, 3dvolreg may think it has done a good job in overlaying one image on top of the other, but a larger movement may have led to an even better fit. As always, look at your data both before and after registration to assess the goodness of fit.
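A toy illustration of the local-minimum problem (illustrative Python, not 3dvolreg's actual optimizer or cost function): a greedy descent over a cost curve with two dips stops in whichever basin it starts in.

```python
def descend(cost, start, step=1, max_iter=1000):
    """Greedy iterative search: move left or right in `step`
    increments as long as the cost keeps decreasing."""
    x = start
    for _ in range(max_iter):
        best = min([x - step, x, x + step], key=cost)
        if best == x:
            return x   # no neighbor is better: a minimum, but maybe only local
        x = best
    return x

# An invented cost with a shallow local minimum at x=2
# and the true global minimum at x=10
def cost(x):
    return min((x - 2) ** 2 + 5, (x - 10) ** 2)
```

Starting near the shallow dip, the search settles at x=2 and never discovers the better fit at x=10; starting closer to the global basin, it finds x=10. This is why visually checking the alignment matters.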

Also note that motions that occur on the scale of less than a TR (e.g., less than 2-3 seconds) cannot be corrected by 3dvolreg, as it assumes that any rigid-body motion occurs across volumes. There are more sophisticated techniques which try to address this, with varying levels of success. For now, accept that your motion correction will never be perfect.


  1. I don't quite get what 3dvolreg is actually comparing and what minimization it is using. But I must say for my data it does a much better job than SPM, which looks at the movement of the center of mass and it somehow always confuses BOLD activation with motion...

    1. I do not know all the details about the volume registration algorithm either; what I understand is that it tries to make the absolute difference between two volumes as low as possible, but I don't know how it accounts for differences in intensity that are unrelated to motion (i.e., the BOLD response you mentioned). In any case, if it seems to work better than SPM's algorithm, then I would definitely include it at that stage in your processing stream.


  2. Hello Andy,
    I am looking for an fMRI tool that allows motion correction and slice-timing correction to be done simultaneously. Is 3dvolreg able to do that? I know I can add the -tshift option to do slice-timing correction as well, but then isn't it just the same as performing 3dTshift followed by 3dvolreg? Thank you.

    1. Hi fiftarina,

      As far as I know, yes, the -tshift option in 3dvolreg will do the same thing as running 3dTshift first; I think they included it for ease of use, and to save some time. You could compare the two methods side by side, but you should get the same answer.


  3. Hello Andy,
    First, thank you for sharing your knowledge via the blog.

    I have a question. I apply 3-D geometric transformations to my volumetric data. I am aware of artifacts that result from interpolation, translations, and rotations. Changing the interpolation kernel, blurring, and resampling may be ways to avoid them.

    Could you explain a fast and effective way to remove artifacts for mutual information registration?

    Thank you in advance for your attention.

    1. Hey there,

      I'm unaware of any ways to remove artifacts from registration, although I have heard that keeping the initial EPI scans can improve registration. These are the first few scans that are usually discarded because their intensity is higher than the rest of the run. They can contain more spatial contrast, and thus give you a better match between your functional and anatomical scans.



  4. Dear Andrew
    Thank you for this tutorial. My subjects tend to move a lot (involuntarily) during data acquisition, and I was unable to get decent motion correction after applying 3dvolreg. Is there any other advanced technique that you can refer me to? Thanks.

  5. Can 3dvolreg be used for two anatomical images (one is the reference, the other is the image to be motion-corrected)?