Wednesday, July 30, 2014

Introduction to Diffusion Tensor Imaging: From Hospital Horror Story to Neuroimaging

It is well known that one of the deepest human fears is to be a patient in a hospital late at night, all alone, while a psychotic nurse injects you with a paralyzing agent, opens up your skull with a bone saw, and begins peeling away layers of your cortex while you are still awake.*

As horrifying as this nightmare scenario may be, it also lends an important insight into an increasingly popular neuroimaging method, diffusion tensor imaging (DTI; pronounced "diddy"). To be more gruesomely specific, the psychotic nurse is able to peel away strips of your brain because it has preferred tear directions - just like string cheese. These strips follow the general direction of fascicles, or bundles of nerve fibers, that make up grey and white matter pathways; and of these pathways, it is white matter that tends to exhibit a curious phenomenon called anisotropy.

Imagine, for example, that I release a gas - such as, let's say, deadly neurotoxin - into a spherical compartment, such as a balloon. The gas, through a process called Brownian motion (not to be confused with the Dr. Will Brown frequently mentioned here) will begin to diffuse randomly in all directions; in other words, as though it is unconstrained.

However, release the same gas into a cylindrical or tube-shaped compartment, such as one of those cardboard tubes you used to fight with when you were a kid,** and the gas particles will tend to move along the direction of the tube. This is what is known as anisotropy, meaning that diffusion tends to occur along a particular direction, as opposed to isotropy, where diffusion occurs in all directions with equal probability.


Figure: Left two panels: ellipsoids showing different amounts of anisotropy, with lambda 1, lambda 2, and lambda 3 denoting the eigenvalues; each eigenvalue represents the amount of diffusion along a particular direction. Right panel: a sphere representing isotropy, where diffusion occurs with equal probability in all directions.

This diffusion can be measured with DTI scans, which in turn can be used to indirectly measure white matter integrity and structural connectivity between different areas of the brain. A common use of DTI is to compare different populations, such as young and old, and to observe where fractional anisotropy (FA) differs between groups, often with the assumption that lower FA indicates less efficient communication between cortical regions. There are other applications as well, but this is the one we will focus on for the remainder of the tutorials.
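
To make the link between the eigenvalues in the figure above and FA concrete, here is a minimal Matlab sketch of the standard fractional anisotropy formula; the eigenvalues below are made-up numbers for illustration, not values from any real scan:

% Eigenvalues of the diffusion tensor (hypothetical values, arbitrary units)
lambda = [1.7 0.3 0.2];        % lambda1 much larger than the others: strongly anisotropic
lambda_bar = mean(lambda);     % mean diffusivity (average of the three eigenvalues)

% Fractional anisotropy: 0 = perfectly isotropic (the sphere), values near 1 = diffusion
% concentrated along a single axis (the elongated ellipsoid)
FA = sqrt(3/2) * sqrt(sum((lambda - lambda_bar).^2) / sum(lambda.^2));

With the values above, FA comes out around 0.84; set all three eigenvalues equal and it drops to exactly 0, which is the isotropic balloon case from earlier.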

The data that we will be using can be found on the FSL course website, after scrolling down to Data Files and downloading File 2 (melodic and diffusion). I haven't been able to find any good online repositories for DTI data, so we'll be working with a relatively small sample of three subjects in one group, and three subjects in the other. Also note that while we will focus on FSL, there are many other tools that process DTI data, including some new commands in AFNI, and also a program called TORTOISE. As with most things I post about, these methods have already been covered in detail by others; and in particular I recommend a blog called blog.cogneurostats.com, which covers both the AFNI and TORTOISE approaches to DTI, along with other tutorials that I thought I had been the first to cover, but which have actually already been discussed in detail. I encourage you to check it out - but also to come back, eventually.



*Or maybe that's just me.
**And maybe you still do!

Sunday, July 20, 2014

Quick and Efficient ROI Analysis Using spm_get_data

For any young cognitive neuroscientist out for the main chance, ROI analyses are an indispensable part of the trade, along with having a cool, marketable name, such as Moon Unit or Dweezil Zappa. Therefore, you will find yourself doing a lot of such ROI analyses; and the quicker and more efficiently you can do them, with a minimum of error, the more wildly you will succeed, and the better positioned you will be to embark upon an incredible, interesting career for the next four or five decades of your life before you die.

Most ROI analyses in SPM are carried out through the Marsbar toolbox, and for most researchers, that is all they will ever need. However, for those who feel more comfortable with the command line, there is a simple command within the SPM library - spm_get_data - that will make all of your ROI fantasies come true. All the command needs is an array of paths leading to the images you wish to extract data from, along with a matrix of coordinates representing the ROI.
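
As a bare-bones sketch of the call itself - the variable names here are placeholders rather than anything SPM creates for you - it looks like this:

% P   : character array (or cell array of strings) of paths to the images to read
% XYZ : 3 x N matrix of voxel coordinates defining the ROI
roi_data  = spm_get_data(P, XYZ);   % images-by-voxels matrix of extracted values
roi_means = mean(roi_data, 2);      % average across voxels: one value per image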

First, the ROI coordinates can be gathered by loading up an arbitrary contrast and selecting an ROI created through, say, Marsbar or wfu_pickatlas. Next, set your corrected p-value threshold to 1; this guarantees that every voxel in that ROI is "active," and those voxels are recorded in a structure called xSPM that is automatically generated each time the Results GUI is opened. One of its fields, xSPM.XYZ, contains the coordinates of each voxel within that ROI. This can then be assigned to a variable, and the same procedure repeated for however many ROIs you have. The best part, however, is that you only need to do this once for each ROI; once you have assigned the coordinates to a variable, you can simply store it in a new .mat file with the "save" command (e.g., save('myROIs.mat', 'M1', 'ACC', 'V1')). These can then be restored to the Matlab workspace at any time by loading that .mat file.
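
In practice this boils down to a few lines at the Matlab prompt once the Results GUI has your ROI loaded; the ROI variable name below is just an example:

% With the Results GUI open and the threshold set to 1, xSPM sits in the base workspace
M1 = xSPM.XYZ;               % 3 x N matrix of voxel coordinates for this ROI
save('myROIs.mat', 'M1');    % repeat for other ROIs and add them to the save call

% In a later session, restore the saved ROI coordinates with:
load('myROIs.mat');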

Note: An alternative way to get these coordinates just from the command line would be the following:

Y = spm_read_vols(spm_vol(seedroi),1);   % read the ROI mask image into a 3D matrix
indx = find(Y>0);                        % linear indices of all voxels inside the mask
[x,y,z] = ind2sub(size(Y),indx);         % convert linear indices to x/y/z subscripts

XYZ = [x y z]';                          % 3 x N coordinate matrix for spm_get_data

where the variable "seedroi" takes on, in turn, the path to each of your ROI mask images if you loop over a cell array containing them.
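
A loop along those lines might look like the following sketch; the mask file names are hypothetical placeholders for your own ROI images:

roi_files = {'left_M1.nii', 'right_M1.nii'};    % hypothetical ROI mask images
roi_XYZ   = cell(size(roi_files));

for i = 1:numel(roi_files)
    seedroi    = roi_files{i};
    Y          = spm_read_vols(spm_vol(seedroi), 1);   % read the mask volume
    indx       = find(Y > 0);                          % voxels inside the mask
    [x, y, z]  = ind2sub(size(Y), indx);               % linear indices -> subscripts
    roi_XYZ{i} = [x y z]';                             % 3 x N coordinate matrix per ROI
end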


The next step is to create your array of images you wish to extract data from - which, conveniently, is stored within the SPM.mat file that is created any time you run a second-level analysis. For example, let's say that I carried out a couple of 2nd-level t-tests, one for left button presses, the other for right button presses. If I go into the folder for left button presses where I have already estimated a second-level analysis, all of the contrast images that went into that analysis are stored in SPM.xY.P, which becomes available in your workspace after simply navigating to the directory containing your SPM.mat file and typing "load SPM".

Lastly, spm_get_data is called to do the ROI analysis by extracting data from each voxel in the ROI, for each subject and each contrast, and these values are averaged across all of the voxels in that ROI using the "mean" function. Sticking with the current example, let's say we have a left M1 region and a right M1 region, the coordinates of which have been extracted using the procedure detailed above and saved into variables called left_M1 and right_M1, respectively. I then navigate to the left button presses second-level directory, load SPM, and type the following command:

right_M1_leftButtonPress = mean(spm_get_data(SPM.xY.P, right_M1),2)

which returns an array of one value per subject for that contrast averaged across the ROI. You can then easily navigate to another second-level directory - say, right button presses - and, after loading the SPM.mat file, do the same thing:

right_M1_rightButtonPress = mean(spm_get_data(SPM.xY.P, right_M1),2)

T-tests can then be carried out between the sets of parameter or contrast estimates with the ttest function:

[h, p, ci, stats] = ttest(right_M1_leftButtonPress, right_M1_rightButtonPress)

which returns the test decision (h), the p-value, the confidence interval, and a stats structure containing the t-value and degrees of freedom that you would then report in your results.
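
Putting the pieces together, the whole analysis might look something like the sketch below; the directory paths are placeholders standing in for wherever your second-level analyses actually live:

load('myROIs.mat');                          % restores right_M1 (and any other saved ROIs)

cd('/path/to/leftButtonPress/secondLevel');  % hypothetical second-level directory
load SPM;
right_M1_leftButtonPress = mean(spm_get_data(SPM.xY.P, right_M1), 2);

cd('/path/to/rightButtonPress/secondLevel'); % hypothetical second-level directory
load SPM;
right_M1_rightButtonPress = mean(spm_get_data(SPM.xY.P, right_M1), 2);

% Paired t-test across subjects, since the same subjects contribute to both contrasts
[h, p, ci, stats] = ttest(right_M1_leftButtonPress, right_M1_rightButtonPress);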

Thursday, July 17, 2014

Deep Thoughts

During my recent vacation I had the misfortune of witnessing two sublime musical events: One was seeing Natasha Paremski play Rachmaninoff's Third Piano Concerto (immediately dubbed "Rock's Turd" by quick-thinking music critics), and the other was attending a live performance of Mozart's Requiem in D Minor. Both were profound, noble, beautiful beyond compare; but hearing such masterworks also compelled me to evaluate my own life, an exercise I detest. Instead of being able to muse upon my own greatness and simply say to myself whatever words came into my mind - activities which I enjoy heartily - I was instead stunned into silence, and slowly, against my will, forced into the odious task of self-reflection. What have I done so far with my life, I thought, and what will I do with the usual biblical threescore and ten? Mozart and Mendelssohn were composing deathless music while still in their teens; Goethe began incubating the idea for Faust while only a boy; Liszt wrote the first version of his finger-breaking études when he was only fifteen years old. What was I doing around then? Obsessively working on my Starcraft win/loss ratio, and worrying whether my acne and alarming profusion of nasal hair were diagnostic of a thyroid disorder.

The way to happiness, as I am continually reminded, is to actively avoid any contact with anyone greater than you are; and if any acquaintance with a masterwork must be made, if only to be able to talk about it with an air of superficial sophistication, better to read a summary or criticism of it and find out what others have said about it, rather than experience the real thing. Or, if you dare, come into contact with it, but do not linger over it, or let it become part of your internal world, lest you realize how depressingly unfurnished and impoverished it really is. Read Madame Bovary quickly, for example, without pausing to observe how much craftsmanship and precision goes into every sentence, and how carefully weighed is each word; listen to the Jupiter Symphony only to become familiar with the musical themes, without thinking for a moment what kind of miracle this music is, and how no one will ever again write quintuple invertible counterpoint with such elegance and ease.

But should you ever happen upon someone who makes you realize that this person thinks better than you, feels deeper than you, and has a vision of life rising above the paltry, the partial, and the self-defensive - horrors! - merely remind yourself that you would have been the same way, if only you had had the advantages this person had while growing up, and if only you possessed the same temperament as they do - which, when you think about it, is really just a random occurrence. In this way you can begin to feel a bit better about having contributed so humiliatingly little; in this way, everyone can fancy themselves an embarrassed genius.


=========================

In any case, I've gotten a couple of comments and have made a couple of small changes to the SPM automated first-level script, such as checking whether certain files exist, and dealing with runs that are missing a regressor that is present in other runs. According to my empirical tests, it should be able to handle anything you throw at it. Maybe.