## Tuesday, May 26, 2015

### K-Means Analysis with FMRI Data

Clustering, or finding subgroups of data, is an important technique in biostatistics, sociology, neuroscience, and dowsing, allowing one to condense what would be a series of complex interaction terms into a straightforward visualization of which observations tend to cluster together. The following graph, taken from the online Introduction to Statistical Learning in R (ISLR), shows this in a two-dimensional space with a random scattering of observations:

Different colors denote different groups, and the number of groups can be decided by the researcher before performing the k-means clustering algorithm. To visualize how these groups are being formed, imagine an "X" being drawn in the center of mass of each cluster; also known as a centroid, this can be thought of as exerting a gravitational pull on nearby data points - those closer to that centroid will "belong" to that cluster, while other data points will be classified as belonging to the other clusters they are closer to.

This can be applied to fMRI data, where several different columns of data extracted from an ROI, each representing a different regressor, can be assigned to different categories. If, for example, we are looking for only two distinct clusters and we have several different regressors, then a voxel showing high values for half of the regressors but low values for the rest may be assigned to cluster 1, while a voxel showing the opposite pattern would be assigned to cluster 2. The labels themselves are arbitrary; their interpretation is up to the researcher.

To do this in Matlab, all you need is a matrix with data values from your regressors extracted from an ROI (or the whole brain, if you want to expand your search). This is then fed into the `kmeans` function, which takes as arguments the matrix and the number of clusters you wish to partition it into; for example, `kmeans(your_matrix, 3)`.
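To make the assign-and-update logic concrete, here is a minimal pure-Python sketch of Lloyd's algorithm, the procedure underlying `kmeans`. This is illustrative only - MATLAB's implementation adds smarter initialization and empty-cluster handling - but the two steps (label each row by its nearest centroid, then move each centroid to the mean of its members) are the same:

```python
import random

def kmeans(rows, k, iters=100, seed=0):
    """Minimal Lloyd's algorithm: repeatedly assign each row to its
    nearest centroid, then move each centroid to the mean of its rows."""
    rng = random.Random(seed)
    centroids = [list(r) for r in rng.sample(rows, k)]
    labels = [0] * len(rows)
    for _ in range(iters):
        # Assignment step: label each row with its nearest centroid
        # (squared Euclidean distance)
        labels = [
            min(range(k),
                key=lambda c: sum((x - m) ** 2
                                  for x, m in zip(row, centroids[c])))
            for row in rows
        ]
        # Update step: move each centroid to the mean of its members
        for c in range(k):
            members = [row for row, lab in zip(rows, labels) if lab == c]
            if members:
                centroids[c] = [sum(col) / len(members)
                                for col in zip(*members)]
    return labels, centroids
```

Each row (a voxel's regressor values) gets a cluster index, just as MATLAB's `kmeans` returns a vector of cluster assignments, one per row of the input matrix.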

This will return a vector of numbers classifying a particular row (i.e., a voxel) as belonging to one of the specified clusters. This vector can then be prefixed to a matrix of the x-, y-, and z-coordinates of your search space, and then written into an image for visualizing the results.
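That "prefix the cluster vector to the coordinates" step amounts to writing one `label x y z` row per voxel - the four-numbers-per-row format that createNIFTI.m below reads back in. A hypothetical sketch (the function name and example values are made up for illustration):

```python
def write_cluster_table(labels, coords, path):
    """Write one 'label x y z' line per voxel: labels is the vector
    returned by k-means, coords the matching voxel coordinates."""
    assert len(labels) == len(coords)
    with open(path, "w") as f:
        for label, (x, y, z) in zip(labels, coords):
            f.write(f"{label} {x} {y} {z}\n")

# Two example voxels, assigned to clusters 1 and 2
write_cluster_table([1, 2], [(10, 12, 14), (11, 12, 14)], "clusters.txt")
```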

There are a couple of scripts to help out with this: the first, createBlankNIFTI.m, erases a standardized-space image (I suggest a mask output by SPM at the second level), replacing every voxel with zeros; the second, createNIFTI.m, fills those voxels back in with your cluster numbers. You should see something like the following (here, I am visualizing it in the AFNI viewer, since it automatically colors in different numbers):


Sample k-means analysis with k=3 clusters.

The functions are pasted below, as well as a couple of explanatory videos.

```matlab
function createBlankNIFTI(imageFile)
% Note: Make sure that the image is a copy, and retain the original
X = spm_read_vols(spm_vol(imageFile));
X(:,:,:) = 0;
spm_write_vol(spm_vol(imageFile), X);
```


```matlab
function createNIFTI(imageFile, textFile)
hdr = spm_vol(imageFile);
img = spm_read_vols(hdr);
% Count the rows in the text file (one row per voxel: value x y z)
fid = fopen(textFile);
nrows = numel(cell2mat(textscan(fid, '%1c%*[^\n]')));
fclose(fid);
% Read each row and write its cluster value into the voxel at (x, y, z)
fid = fopen(textFile);
for i = 1:nrows
    Z = fscanf(fid, '%g', 4);
    img(Z(2), Z(3), Z(4)) = Z(1);
end
fclose(fid);
spm_write_vol(hdr, img);
```


Labels:
analysis,
fMRI,
k-means clustering

## Sunday, May 17, 2015

### Dissertation Defense Post-Mortem

A few weeks ago, I mentioned that I had my dissertation defense coming up; understandably, some of you are probably interested in how that went. I'll spare you the disgusting details, and come out and say that I passed, that I made revisions, submitted them about a week and a half ago, and participated in the graduation ceremony in full regalia, which I discarded afterward in the back of a U-Haul truck for immediate transportation to a delousing facility located somewhere on campus. Given that I was sweating like a skunk for nearly three hours (Indiana has quite a few graduates, it turns out), that's probably a wise choice.


For those who need proof that any of this happened, here's a photo:

I believe this conveys everything you need to know. Also, it costs considerably less than paying for the professional photos they took during graduation. Don't get me wrong; the ceremony itself was an incredible spectacle, complete with the ceremonial mace, tams and tassels and gowns of all fabrics and colors, and the president of the university wearing a gigantic medallion that makes even the most flamboyantly attired rapper look like a kindergartener. Even for all that, however, I don't believe it justifies photos at $50 a pop.

Currently I am in Los Angeles, after an extended stint on Vancouver Island visiting strange lands and people, touring the famous Butchart Gardens, and feeding already-overfed sea lions the size of airplane turbines. Then it's back to Minneapolis, Chicago, and finally Bloomington to pack up and leave for the East Coast.

Labels:
ceremony,
good times,
graduation,
nutella,
travel

## Saturday, May 9, 2015

### Leave One Subject Out Cross Validation - The Video

Due to the extraordinary popularity of the leave-one-subject-out (LOSO) post I wrote a couple of years ago, and seeing as how I've been using the method lately and want to remember the procedure, here is a short eight-minute video on how to do it in SPM. While the method itself is straightforward enough to follow - GLMs are estimated for each group of subjects excluding one subject, and then estimates are extracted from the resulting ROIs for just that subject - the major difficulty is batching it, especially if there are many subjects.

Unfortunately I haven't been able to figure this out satisfactorily; the only advice I can give is that once you have a script that can run your second-level analysis, loop over it while leaving out each subject in turn. This will leave you with as many second-level GLMs as there are subjects, and each of these can be used to load up contrasts and observe the resulting clusters from that analysis. Then you extract data from your ROIs for the subject that was left out of that GLM, build up a vector of datapoints across subjects, and do t-tests on it, put chocolate sauce on it, eat it, whatever you want. Seriously. Don't tell me I'm the only one who's thought of this.
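The shape of that loop can be sketched in a few lines of Python, where `fit_group_model` and `extract_subject` are hypothetical stand-ins for your second-level batch script and the ROI extraction step:

```python
def loso_estimates(subjects, fit_group_model, extract_subject):
    """For each subject: fit the group GLM on everyone else, define the
    ROI from that model, then record only the held-out subject's value."""
    estimates = []
    for held_out in subjects:
        rest = [s for s in subjects if s != held_out]
        roi = fit_group_model(rest)                  # N-1 subject GLM
        estimates.append(extract_subject(held_out, roi))
    return estimates
```

The result is one unbiased estimate per subject - the vector you later run your t-tests (or pour your chocolate sauce) on.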

Once you have your second-level GLM for each subject, I recommend using the following set of commands to get that subject's unbiased data (I feel slightly ridiculous just writing that: "unbiased data"; as though the data gives a rip about anything one way or the other, aside from maybe wanting to be left alone, and just hang out with its friends):

1. Load up your contrast, selecting your uncorrected p-value and cluster size;

2. Click on your ROI and highlight the corresponding coordinates in the Results window;

3. Find out what the path is to the contrasts for each subject for that second-level contrast by typing "SPM.xY.P"; that will be the template you will alter to get the single subject's data - for example, "/data/myStudy/subject_101/con_0001.img" - and then you can save this to a variable, such as "subject_101_contrast";

4. Average that subject's data across the unbiased ROI (there it is again! I can't get away from it) using something like "mean(spm_get_data(subject_101_contrast, xSPM.XYZ), 2)";

5. Save the resulting value to a vector, and update this for each additional subject.
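Steps 3-5 above can be mocked up as follows; the read-in of the contrast image is faked with a plain list of numbers, `roi_mean` plays the role of `mean(spm_get_data(...), 2)`, and the path template mirrors the `SPM.xY.P` example (function names here are invented for illustration):

```python
def subject_contrast_path(template, subject_id):
    """Step 3: swap one subject's ID into the contrast-path template."""
    return template.format(subject=subject_id)

def roi_mean(voxel_values):
    """Step 4: average the subject's contrast values across ROI voxels."""
    return sum(voxel_values) / len(voxel_values)

template = "/data/myStudy/{subject}/con_0001.img"
path = subject_contrast_path(template, "subject_101")

loso_vector = []                                  # step 5: one entry per subject
loso_vector.append(roi_mean([0.8, 1.2, 1.0]))     # made-up voxel values from path
```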

