Friday, November 30, 2012

Senior Cello Recital



Tomorrow, Saturday, December 1st, at 5:00pm, I will be accompanying a cellist for his senior recital at Recital Hall. We've put a lot of work into the program, and we think it'll be a great show! (Actually, it has to be, or we don't get paid.) In any case, the music is guaranteed to entertain, enliven, edify, etiolate, and shock the listener; and we hope that you enjoy it as much as we've enjoyed putting it together!

What: Senior Cello Recital, featuring the music of Bach, Debussy, Respighi, and Schumann
Where: Recital Hall, Jacobs School of Music (1201 E. 3rd Street)
Who: Ryan Fitzpatrick (cello), Andrew Jahn & Wendelen Kwek (piano)

A link to the Facebook invite can be found here; we're working on setting up a livestream, which will be posted as soon as it's available.

Thursday, November 29, 2012

Manual Talairach Normalization in AFNI

Back in olden times, before the invention of modern devices such as computers and slap bracelets, brain researchers relied on standard coordinate systems as a guide to brain anatomy. One of the most enduringly popular of these was the Talairach coordinate system, based on the brain of a deceased elderly Frenchwoman; the origin of this space was located at the anterior commissure, and the anterior and posterior commissures were then aligned along a horizontal plane (the AC-PC line). Other brains could then be similarly oriented, warped, squashed, stretched, and subject to varied forms of torture and abuse until they roughly matched the Frenchwoman's.

These days, we have computer algorithms to do that for us; and although all of the leading FMRI packages have tools to perform these transformations automatically, there are still ways to do it by hand with AFNI. The following tutorial video shows you how to do it in excruciating detail, including how to locate the AC/PC line with ease, how to find the mysterious "Define Markers" button, and why the Big Talairach Box should be checked - no matter what.

Experience the way they used to do it, whether out of nostalgia or masochism. The video is rather long (I try to keep them bite-sized, delicious, and under five minutes), but long procedures require long demonstrations; if nothing else, you may find the nascent stirrings of intimacy you begin to experience with your data a satisfying surrogate for the painful void of intimacy in your own life.



Tuesday, November 27, 2012

SPM: Setting the Origin and Normalization (Feat. Chad)

Of all the preprocessing steps in FMRI data, normalization is the most susceptible to errors, failure, mistakes, madness, and demonic possession. This step involves the application of warps (just another term for transformations) to your anatomical and functional datasets in order to match them to a standardized space; in other words, all of your images will be squarely placed within a bounding box that has the same dimensions for each image, and each image will be oriented similarly.

To visualize this, imagine that you have twenty individual shoes - possibly, those single shoes you find discarded along the highways of America - each corresponding to an individual anatomical image. You also have a shoe box, corresponding to the standardized space, or template. Now, some of the shoes are big, some are small, and some have bizarre contours which prevent their fitting comfortably in the box.

However, due to a perverted Procrustean desire, you want all of those shoes to fit inside the box exactly; each shoe should have the toe and heel just touching the front and back of the box, and the sides of the shoes should barely graze the cardboard. If a particular shoe does not fit these requirements, you make it fit; excess length is hacked off*, while smaller footwear is stretched to the boundaries; extra rubber on the soles is either filed down or padded, until the shoe fits inside the box perfectly; and the resulting shoes, while bearing little similarity to their original shape, will all be roughly the same size.

This, in a nutshell, is what happens during normalization. However, it can easily fail and lead to wonky-looking normalized brains, usually with abnormal skewing of a particular dimension. This can often be explained by a faulty starting location, which can then lead to getting trapped in what is called a local minimum.

To visualize this concept, imagine a boulder rolling down valleys. The lowest point that the boulder can fall into represents the best solution; the boulder - named Chad - is happiest when he is at the lowest point he can find. However, there are several dips and dells and dales and swales that Chad can roll into, and if he doesn't search around far enough, he may imagine himself to be in the lowest place in the valley - even if that is not necessarily the case. In the picture below, let's say that Chad starts between points A and B; if he looks at the two options, he chooses B, since it is lower, and Chad is therefore happier. However, Chad, in his shortsightedness, has failed to look beyond those two options and descry option C, which in truth is the lowest point of all the valleys.



This represents a faulty starting position; and although Chad could extend the range of his search, the range of his gaze, and behold all of the options underneath the pandemonium of the dying sun, this would take far longer. Think of this as corresponding to the search space; expanding this space requires more computing time, which is undesirable.

To mitigate this problem, we can give Chad a hand by placing him in a location where he is more likely to find the optimal solution. For example, let us place Chad closer to C - conceivably, even within C itself - and he will find it much easier to roll his rotund, rocky little body into the soft, warm, womb-like crater of option C, and thus obtain a boulder's beggar's bliss.

(For the mathematically inclined, the contours of the valley represent the cost function; the boulder's position represents the current estimate of the warping parameters, with its height giving the mismatch between the source image and the template image under those parameters; and each letter (A, B, and C) represents a minimum of the cost function, with C being the global minimum.)
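If you'd rather see this in code than in boulder form, here is a toy Matlab sketch (it has nothing to do with SPM's actual cost function or optimizer; it just shows that a local search settles into whichever minimum it starts closest to):

f = @(x) x.^4 - 4*x.^2 + x;   % toy cost function with a shallow dip near x ~ 1.3
                              % and the global minimum near x ~ -1.5
fminsearch(f,  2)             % starting on the right, the search lands around 1.3
fminsearch(f, -2)             % starting on the left, it finds the global minimum near -1.5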


As with Chad, so with your anatomical images. It is well for the neuroimager to know that the origin (i.e., coordinates 0,0,0) of both Talairach and MNI space is roughly located at the anterior commissure of the brain; therefore, it behooves you to set the origins of your anatomical images to the anterior commissure as well. The following tutorial will show you how to do this in SPM, where this technique is most important:
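If you ever want to script that last step instead of clicking through the Display window, the idea can be sketched in a few lines - note that the filename and the voxel coordinates of the anterior commissure below are placeholders, and you would still have to find the real coordinates by eye:

V = spm_vol('anat.nii');            % read the image header (placeholder filename)
acVox = [91; 127; 73];              % voxel indices of the AC, found by eye (made up here)
M = V.mat;
M(1:3,4) = -M(1:3,1:3) * acVox;     % make that voxel map to (0,0,0) in millimeters
spm_get_space(V.fname, M);          % write the new origin back to the header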




Once we have successfully warped our anatomical image to a template space, the reason for coregistration becomes apparent: Since our T2*-weighted functional images were already in roughly the same space as the anatomical image, we can apply the same warps used on the anatomical image to the functional images. This is where the "Other Images" option comes into play in the SPM interface.
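For the command-line inclined, the same idea with SPM8's classic (non-DARTEL) normalisation looks roughly like this - the filenames are placeholders, and the *_sn.mat file is the one written out when the anatomical was normalised:

funcs = spm_select('ExtList', pwd, '^ar01.nii', 1:165);  % coregistered functional volumes (placeholder)
spm_write_sn(funcs, 'anat_sn.mat');                      % apply the anatomical's warps; writes 'w'-prefixed images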



As always, check your registration. Then, check it again. Then, ask someone else to check it. (This is a great way to meet girls.) In particular, check to make sure that the internal structures (such as the ventricles) are properly aligned between the template image and your warped images; matching the internal structures of the template is much trickier than matching the outline, and therefore much more susceptible to failure - even if the outer boundaries of the brain look as though they match up.


*Actually, it's more accurate to say that it is compressed. However, once I started with the Procrustean thing, I just had to roll with it.
 

Monday, November 26, 2012

Stats Videos (Why do you divide samples by n-1?)

Because FMRI analysis requires a strong statistical background, I've added a couple videos going over the basics of statistical inference, and I use both R and Excel to show the output of certain procedures. In this demo, I go over why the sum of squared deviations in a sample is divided by n-1; a concept not covered in many statistical textbooks, but an important topic for understanding both statistical inference and where degrees of freedom come from. This isn't a rigorous proof, just a demonstration of why dividing by n-1 gives an unbiased estimate of the population variance.
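If you would rather convince yourself numerically than watch me fumble with Excel, here is a quick Matlab simulation of the same idea (the sample size, number of simulations, and true variance are arbitrary):

rng(1);
n    = 5;                                  % sample size
nSim = 100000;                             % number of simulated samples
x    = 2*randn(n, nSim);                   % true population variance = 4
ss   = sum(bsxfun(@minus, x, mean(x)).^2); % sum of squared deviations, one per sample
mean(ss/n)                                 % biased: comes out around 3.2
mean(ss/(n-1))                             % unbiased: comes out around 4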


Sunday, November 25, 2012

Cello Unchained: Public Recital


Tomorrow at Boxcar Books in Bloomington, there will be a public studio recital featuring cellists from the Jacobs School of Music. The pieces range from virtuosic showpieces (such as the Popper etudes) to lyrical songs without words, and I will be accompanying several of them. So if you're in the area, feel free to stop by!

Boxcar Books Recital  

When: Monday, November 26th, at 7:00pm
Where: Boxcar Books, 408 E 6th St (right next to Runcible Spoon)
Who: The entire cello studio of Emilio Colón
Link to Facebook invite



Saturday, November 24, 2012

Coregistration Demonstrations

Coregistration - the alignment of two images from separate modalities, such as a T1-weighted anatomical and a T2*-weighted functional image - is an important precursor to normalization. This is because 1) it brings both the anatomical and functional images into the same space and orientation, and 2) any warps applied to the anatomical image can then be accurately applied to the functional images as well. You can create a homemade demonstration of this yourself, using nothing more than a deck of playing cards, a lemon, and a belt.



However, before doing either coregistration or normalization, it is often useful to manually set the origin of the anatomical image (or whichever image you will be warping to a standardized space) so that it is as closely aligned with the template image as possible. Since the origins of both MNI and Talairach standardized spaces are located approximately at the anterior commissure, the origin of the anatomical image should be placed there as well; this provides a better starting point for the normalization process, and increases the likelihood of success. The following tutorial shows you how to do this, as well as what the anterior commissure looks like.



Once this is done, you are ready to proceed with the coregistration step. Usually the average EPI image - output from the realignment step - will be used as the source image (the image that is moved around), while the anatomical image will be used as the reference image (the image that stays fixed). Then, the resulting rigid-body transformation is applied to the rest of the functional images to bring everything into harmonious alignment.
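For the curious, the estimation step boils down to something like the following on the command line (the filenames are placeholders, and this is a rough sketch of what the GUI does rather than a drop-in replacement for it):

ref = spm_vol('anat.nii');        % reference image: stays put (placeholder filename)
src = spm_vol('meanar01.nii');    % source image: the mean EPI, which gets moved (placeholder)
x   = spm_coreg(ref, src);        % estimate the six rigid-body parameters
M   = spm_matrix(x);              % convert them into an affine matrix
MM  = spm_get_space(src.fname);   % current voxel-to-world mapping of the source
spm_get_space(src.fname, M \ MM); % fold the transform into the source's header
                                  % (repeat the last two lines for the other functionals)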


Wednesday, November 21, 2012

Let's Talk about Masks (Live Video)

I've been experimenting more with Camtasia, and I've uploaded a new video showing how masks are drawn on an actual human, rubber brain, which involves the use of RStudio, Excel, and colored pens. My hope is that this makes the learning experience more interactive; in addition, you get to see what my mug looks like.


Tuesday, November 20, 2012

FSL Tutorial: Featquery_gui

Now that we've created our masks, we can go ahead and extract data using FSL's featquery tool. You may want to run it from the command line when batching large numbers of subjects, but this tutorial will focus on Featquery_gui, a graphical interface for loading subjects and ROIs, and then performing data extraction from that ROI. The procedure is similar to Marsbar, and I hope that the video is clear on how to do this.

Also, I've attached a Black Dynamite video for your enjoyment. Nothing to do with ROIs, really, but we all need a break now and then.




A Note about FMRI Masks



Now that we have covered how to create masks using three separate software packages - FSL, SPM, and AFNI - I should probably take a step back and talk about what masks are all about. When I first read about masks, all I heard was a bunch of mumbo jumbo about zeros and ones, and unhelpful saran wrap metaphors. While this did remind me to purchase valuable kitchen supplies, it did little to explain what a mask was, exactly, and how it was used.

Simply put, a mask is a subset of voxels you wish to analyze. Let's say I'm only interested in the right hemisphere of the brain; to create a mask of the right hemisphere, imagine using a papercutter to split the brain in half, and only taking the right hemisphere for further analysis, while discarding the left hemisphere into the trash can. The generation of masks follows this same logic - only focus on a specific part of the brain, and discard the rest.

Fortunately, we have come a long way since using office supplies to create masks, and now we have computers to do it for us. In order to create a mask using any of the listed software packages, you will usually use a tool to insert 1's into the voxels that you wish to analyze, and 0's everywhere else. Then, suppose you want to do an ROI analysis only on those voxels that contain 1's - for example, extracting contrast estimates for a subject. The contrast estimate at each voxel is multiplied by the mask, and you are left with the contrast estimates in the 1's voxels (since each estimate is being multiplied by 1), and zeros everywhere else.

Furthermore, ROI extraction within a mask often averages the contrast (or parameter) estimates across all of the voxels inside the mask. It is also possible to extract estimates from single voxels or a single triplet of coordinates - just think of this as ROI analysis of a very small mask.
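In code, the whole idea fits in a few lines (the filenames below are placeholders; this is the concept, not any particular package's implementation):

con  = spm_read_vols(spm_vol('con_0001.nii'));  % contrast estimate at each voxel (placeholder)
mask = spm_read_vols(spm_vol('roi_mask.nii'));  % 1's inside the ROI, 0's outside (placeholder)
masked = con .* mask;                % estimates survive inside the ROI, zeros everywhere else
vals   = con(mask > 0);              % just the ROI voxels
roiMean = mean(vals(~isnan(vals)));  % the ROI's average contrast estimate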

I hope that this clarifies things a bit; I know that it took me a couple of years to wrap my head around the whole concept of masks and ROIs and severing hemispheres from each other. However, once you understand this, the whole process of ROI interrogation becomes much simpler and more intuitive, and analyses become easier to carry out. ROI analysis is the foundation for carrying out more complex analyses, such as double dissociations and connectivity analyses, and it is well to become familiar with this before tackling larger game.

Monday, November 19, 2012

Creating Masks In FSL

Due to a high number of requests (three), I have made some walkthroughs about how to create masks in FSL. There are a few different ways to do this:

  1. Anatomical ROI: These masks are generated from anatomical regions labeled by atlases. For example, you may decide to focus only on voxels within the V1 area of visual cortex. Using an atlas will create a mask of that region, based on the atlas-defined anatomical boundaries in a standardized space.
  2. Functional ROI (or contrast ROI): This is a mask created from a contrast thresholded at a specific statistic value. For example, you may wish to focus only on voxels that pass cluster correction for the contrast of left button presses minus right button presses.
  3. Painting ROIs: This is where the real fun starts; instead of being confined by the limitations of anatomical or contrast boundaries, let your imagination run wild and simply paint where you want to do an ROI analysis. Similar to what you did in first grade, but more high-tech and with less puking after eating your crayons. (Is it my fault that Razzmatazz Red sounds so delicious?)
Demonstrations of each approach can be found in the following videos:

 Anatomical ROIs

Functional ROIs

 ROIs created from FSLview. Pretend like you're Bob Ross.

Thursday, November 15, 2012

Parameter Extraction with MarsBar

Marsbar, a region of interest (ROI) toolbox that interfaces with SPM, is a Swiss Army knife of programs for ROI manipulation and data extraction. The most commonly used features of Marsbar are 1) The creation of ROIs from spheres or boxes centered on specified coordinates, and 2) The extraction of parameter or contrast estimates from ROIs. The following video tutorial focuses on the latter, in which parameter estimates for each subject are dumped out from a defined ROI.

For example, say you have two ROIs placed in distinct locations, and you wish to extract parameter estimates for the contrast A-B from each of those ROIs. Marsbar can do this easily, even flippantly, such a saucy and irreverent child it is. After your ROIs have been created, simply specify the SPM design you wish to extract parameter estimates from. In the case of second-level analyses, the SPM.mat file generated by the analysis will contain a number of scans equal to the number of subjects that went into that analysis; where Marsbar comes in is taking the parameter estimates for each subject and averaging them over the entire ROI, generating a list of averaged parameter values, one per subject.

Once this is done, save the results to a .mat file, load the file into memory, and check the output of SPM.marsY.Y (as in, "Why, Black Dynamite? Why?").
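If you would rather script the extraction than click through the GUI, Marsbar also has a batch interface; a minimal sketch (the file names are placeholders, and this is the general pattern rather than a drop-in script) looks something like this:

roi_file = 'myROI_roi.mat';    % an ROI previously saved from Marsbar (placeholder name)
spm_name = 'SPM.mat';          % the design to extract from (placeholder path)
D = mardo(spm_name);           % load the design into a Marsbar design object
R = maroi(roi_file);           % load the ROI
Y = get_marsy(R, D, 'mean');   % extract the data, averaged over the ROI
y = summary_data(Y);           % for a second-level design, one value per subject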


More deets can be found in the following tutorial; for a text-based walkthrough, complete with pictures, check out this link. I believe that both of these approaches are valid with both SPM5 and SPM8 distributions; if not, I apologize.

Unlike when Black Dynamite was denied the chance to apologize for the life he took so needlessly.




Wednesday, November 14, 2012

SPM Realign

I've covered motion correction in a previous post, and the concept is the same in SPM as it is in the other major software analysis packages. One difference, however, is that SPM realigns the first volume in each run to the first volume of the first run, and then registers each image in each run to the first volume of that run. This may not seem optimal if the anatomical scan is taken after the last functional run, and thus would be spatially closer to the very last image of the last functional scan; but it's the way SPM operates, and hey, I didn't make it - so deal with it, wuss.

spm_realign and spm_reslice are the command line functions for running motion correction, and both the command line and GUI approaches will output a graph of motion parameters in the x-, y-, and z-directions, as well as pitch, roll, and yaw estimates for each run. The motion of each volume relative to the first volume in that run is output into an rp_*.txt file, which can be used as nuisance regressors to soak up any variance associated with motion. (Does anybody else notice how often people use cleaning metaphors when discussing variance? As though it is some messy substance that needs to be mopped up or soaked up, as opposed to appreciated, cared for, and loved.)
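If you do want to use those parameters as nuisance regressors, a couple of lines will do it (the filename below is a placeholder; SPM's "Multiple regressors" field accepts either the rp_*.txt file directly or a .mat file containing a variable named R):

R = load('rp_ar01.txt');             % 6 columns: x, y, z translations (mm); pitch, roll, yaw (radians)
save('motion_regressors.mat', 'R');  % can then be entered under "Multiple regressors" in the model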

Although most of the defaults are fine, you may want to turn up the interpolation order a few notches if you have the computing power to do it, and don't mind waiting longer for the realignment to complete. The higher the interpolation order you use, the better results you get, but the benefits get smaller the higher you go, as though the return on your realignment begins to diminish. Someone should come up with a name for that phenomenon.

Anyway, here are some sample commands for running realignment from the command line:

P = spm_select('ExtList', pwd, '^ar01.nii', 1:165);  % select all 165 volumes of the run
spm_realign(P);   % estimate the motion parameters (writes the rp_*.txt file)
spm_reslice(P);   % write out the resliced, motion-corrected images (prefix 'r')

More details, along with my soothing, anodyne voice, can be found in the following screencasts.


SPM Realign from the GUI


SPM Estimate & Reslice from the command line

Monday, November 12, 2012

Introduction to SPM Marsbar

Marsbar is an extraction tool designed to output beta estimates or contrast estimates from a region of interest (ROI), a cluster of voxels defined either anatomically, or through an independent contrast. I covered this in an earlier post, but thought that this would lend itself better to a bright, vibrant, visual tutorial, rather than the musty arrow charts.

How to define ROIs from coordinates

How to define ROIs from other contrasts

Saturday, November 10, 2012

SPM Jobman


Now that we have created our own .mat files from the SPM GUI and seen how they can be written to disk, altered, and reloaded back into SPM, the hour is at hand for using the command spm_jobman. This is a command for those eager to disenthrall themselves from the tyranny of graphical interfaces by batching SPM processes from the command line.

I first met spm_jobman - also known as Tim - a few weeks ago at a conference, when I was at the nadir of my sorrows, despairing over whether I would ever be able to run SPM commands without the GUI. Suddenly, like a judge divinely sent in answer to the lamentations of the oppressed, spm_jobman appeared by my side, trig and smartly dressed, and said he would be more than happy to help out; and from my first impression of his bearing and demeanor, I believed I was in the presence of an able and reliable ally. Anyone who has ever met spm_jobman, I believe, has felt the same thing. However, as I learned too late, far from being a delight, he is a charmless psychopath; and he continues to infect my dreams with nameless horrors and the unrelenting screams of the abattoir.

spm_jobman has three main options to choose from: interactive, serial, and run. After choosing one of these options, for the second argument you enter your jobs structure, which is automatically populated after loading the .mat file from the command line. Interactive will load the traditional GUI with the options filled in from the jobs structure, which you can then modify and execute as you please; Serial will prompt the user to fill in each field, with the defaults set to the values in the jobs structure; and Run will execute the jobs structure without bringing up the GUI at all. For most purposes, if you decide to run spm_jobman at all, you will want to use the Run option, as this allows you to loop processes over subjects without pause, freeing you up for more useful tasks, such as Googling the history of the lint roller.
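As a sketch of what that looping might look like (the subject folders, the saved job name, and the number of subjects here are made up for illustration):

spm('defaults', 'fmri');
spm_jobman('initcfg');
subjects = {'sub01', 'sub02', 'sub03'};              % placeholder subject folders
for s = 1:numel(subjects)
    load(fullfile(subjects{s}, 'preproc_job.mat'));  % loads the saved batch structure
    spm_jobman('run', matlabbatch);                  % the variable is 'jobs' rather than 'matlabbatch' in older SPM versions
end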

Saving .mat files from the SPM interface is immensely helpful in understanding what exactly goes into the batch structures that SPM creates; and this will in turn reinforce your understanding of, and ability to manipulate, Matlab structures. The following tutorials show how the .mat file is generated from the SPM interface, which can then be used as a template for spm_jobman. I've been working with SPM for years now, but found out about this only recently; and I hope that it helps ease the burden of your SPM endeavors.



Wednesday, November 7, 2012

Slice Timing Correction in SPM

I have posted a couple new videos about slice-timing correction in SPM: One from the GUI, and the other from the command line. The command line usage is more interesting and informative, especially if you aim to batch your preprocessing without using the graphical user interface; and this will be the goal of this series of tutorials.

And just imagine - no more mindless pointing and clicking. No more sore wrists and carpal tunnel syndrome from those long nights of copying and pasting onset times, one after the other, subject after subject, until your mind becomes so warped that you accidentally end up copying and pasting a particularly spicy, shockingly personal, yet oddly poetic missive sent to an ex-girlfriend after quaffing one too many Smirnoff Ices, which then ends up estimating a general linear model of your pathetic and utter wretchedness. Obviously, this analysis will go into the supplementary materials.

To avoid this, slice timing correction can instead be run with the function spm_slice_timing, which requires the following arguments:

P - A list of files to slice time correct (can select these using spm_select)
sliceOrder - Slice acquisition order
refslice - Reference slice for time zero
timing - requires two arguments: 1) time between slices; and 2) time between last slice and next volume

sliceOrder can be assigned with a Matlab concatenation command. For example, if the slices were acquired in an interleaved order starting with slice 1, and there were 35 slices total, the slice order could be written like this:

sliceOrder = [1:2:35 2:2:35];

This returns a list of the odd-numbered slices from 1 to 35 in steps of 2, concatenated with a list of the even-numbered slices from 2 to 34 in steps of 2.

The timing variable is easy to fill in once you have both the TR and the TA variables. TR is your repetition time - for example, 2 seconds between volumes. TA is defined as TR-(TR/(num. of slices)), which in this case would be 2-(2/35) ~ 1.94. This is the time at which the last slice was acquired; and, since the first slice was acquired at time 0, the time between each slice can be calculated as TA/(nSlices-1), e.g. 1.94/34 ~ 0.057 (not significant, but trending towards it). Likewise, the value for the second field can be calculated as TR-TA, which also equals about 0.057. If the variables TR, TA, and nslices have already been assigned values, then the fields of the timing variable can be filled in:

timing(1) = TA/(nslices-1);  % time between slices (equivalently, TR/nslices)
timing(2) = TR-TA;           % time between the last slice and the next volume

With this in hand, spm_slice_timing can be called as follows:

spm_slice_timing(spm_select('ExtList', pwd, '^r01.nii', 1:165), [1:2:35 2:2:35], 1, timing)  % reference slice = 1, the first slice acquired

Both versions of slice timing correction can be found in the following tutorials:



The GUI version of slice timing correction. This is for little kids and grandmas.


Now we're talking; this is the real sh*t. Henceforth shall you be set on the path towards nerd glory, and your exploits shall be recorded in the blank verse epic, Childe Roland to the Nerd Tower Came.

Sunday, November 4, 2012

Brown Sugar Glazed Salmon

In addition to analyzing FMRI data, I also do other things, such as eating. Recently I came across a wonderful recipe for glazing salmon with a mixture of honey, butter, brown sugar, and Dijon mustard, which turned out to be quick, easy, and delicious. For sides, add some mashed potatoes and asparagus, and you have a meal substantial enough to share with guests, or to leave you leftovers for a couple of days. The finished product looks like this:



No, wait; wrong picture! It should look something more like this:


Pairs well with pinot grigio, or a forty-ounce of your favorite malt liquor. To enhance the experience, eat while reading a salmon-related paper regarding false positives in neuroimaging data.

The recipe; the paper.

Saturday, November 3, 2012

Checking Image Registration

Visually checking your image registration - in other words, the overlap between images aligned in a common space - is one of the staples of any FMRI analysis pipeline. Sadly, although everyone says that you should do it, not many people go to the trouble of visually inspecting the overlap; even though, in my opinion, checking your image registration is one of the sexiest things you can do. "Sexier than brushing your teeth at least once a week?" Not quite, but we gettin' there!


Example of faulty image registration. Each set of orthogonal views represents a single subject's mask after first-level analysis. The set of orthogonal views highlighted by the yellow rectangle has suffered - terribly - from an error in image registration during preprocessing, and should be inspected further before doing a higher-level analysis.

In order to increase your attractiveness, I've written up a brief script - possibly my masterpiece - which will allow you to easily check the registration of a specified image. For example, you may want to check the masks for a group of subjects to make sure that they overlap, as a single mask which is far different from the others will lead to a truncated group mask. While not necessarily unsexy, missteps like these will only lead to average attractiveness. In other words, there's not anything really wrong with you, and there might be plenty of people who would settle for you, but...meh.
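If you want a rough idea of what such a script might do, here is a sketch using SPM's viewer (this is not necessarily the script shown in the video, and the subject names are placeholders):

subjects = {'sub01', 'sub02', 'sub03', 'sub04'};    % placeholder subject folders
masks = cell(size(subjects));
for s = 1:numel(subjects)
    masks{s} = fullfile(subjects{s}, 'mask.nii');   % each subject's first-level mask
end
spm_check_registration(char(masks));                % one set of orthogonal views per subject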


More details, including a demonstration of the script in action, can be seen in the following video.