Thursday, April 19, 2012
This post was orphaned a while back when I was testing links to papers, so I figured I should go back and explain what I was doing. It links to Friston's recent NeuroImage paper, which gives young reviewers advice on how to reject any paper, no matter how scientifically and statistically sound it is. His tone is deliberately ironic, although he takes a more serious turn at the end, explaining why these rejection criteria are ridiculous.
The link to Friston's paper can be found here.
Monday, April 16, 2012
SUMA Demo
I've posted a demo of AFNI's surface mapper program, SUMA, over here on my screencast account. Specifically, I talk about how to map volumetric results generated in any fMRI software package (e.g., AFNI, FSL, SPM, BrainVoyager) onto a template surface provided by SUMA.
In this demo, I take second-level results generated in SPM and map them onto a template MNI surface, the N27 brain. All you need is a results dataset, a template anatomical brain to use as an underlay (here I use the MNI_caez_N27 brain provided in the AFNI binaries directory under ~/abin), and a folder called suma_mni that contains the .spec files for mapping onto the N27 brain. The suma_mni folder is available for download here. Just download it into the same directory as your results, and you are good to go.
I've outlined the steps in a Word document, Volumetric_SUMA.docx, which is available at my website. Please send me feedback if any of the steps are unclear.
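For reference, the core mapping step in those instructions boils down to a 3dVol2Surf call along these lines (a sketch only; the spec file, surface states, and dataset names here are hypothetical, so substitute the actual names from the suma_mni folder and your own results):

    # Map volumetric stats onto the left-hemisphere surface by averaging
    # along segments between the white matter and pial surfaces
    # (all file and surface names below are hypothetical)
    3dVol2Surf -spec N27_lh.spec           \
               -surf_A smoothwm            \
               -surf_B pial                \
               -sv MNI_caez_N27+tlrc       \
               -grid_parent spmT_0001.nii  \
               -map_func ave               \
               -f_steps 10                 \
               -out_niml results_lh.niml.dset

    # View the mapped results in SUMA on the same template
    suma -spec N27_lh.spec -sv MNI_caez_N27+tlrc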
Although this is an incredibly easy way to make great-looking figures, I would not recommend running ROI statistics on results mapped onto a surface with these steps. The mapping is essentially a rough interpolation of which voxel corresponds to which node; if you want to do surface-based ROI analyses, do all of your preprocessing and statistics on the surface (I may write up a demo of how to do this soon).
SPM 2nd-level results mapped onto a template surface using AFNI / SUMA
Python SUITE: AFNI_Tools.gz
This is an update to my earlier post about the Python program convertDICOM2AFNI.py; it is now a folder containing a few programs that interact with each other. The rationale was to give the original DICOM conversion script a wrapper that can change directories (since I am currently unable to do this from the Python interpreter invoked by the Unix shell). All you need to do is download AFNI_Tools.gz into your experimental directory, unpack it with "tar -zxvf AFNI_Tools.gz", and run it with "tcsh convertRaw2AFNI.sh". The script will prompt you for a list of subject IDs and the name of the experimental directory, and will then run convertDICOM2AFNI.py inside the raw fMRI data folder for each subject. That script will, in turn, prompt you for study-specific information, such as the number of slices, the number of TRs, and so on. Currently there is no way to account for runs with different numbers of TRs; I may add this function in the future.
A short screencast demo of the suite is available at www.screencast.com/users/Andrew.Jahn/folders/Jahn_Tools (you may need to go full-screen to see what I'm typing on the command line). One thing I did not mention in the demo is the subfolder "Paths" within the AFNI_Tools folder. It contains two text files: groupDir.txt, which holds the path to the experimental directory, and dataDir.txt, which holds the path to the output directory you want for each subject. Both should be modified when you download the suite. I've included some documentation that I hope is enough to get people started.
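Conceptually, the wrapper uses those two files to figure out where to go, along these lines (a sketch of the idea, not the literal contents of convertRaw2AFNI.sh; the subject list variable is hypothetical):

    # Read the experiment and output paths from the Paths folder
    # (sketch only -- the real script may differ)
    set groupDir = `cat Paths/groupDir.txt`
    set dataDir  = `cat Paths/dataDir.txt`

    # Move into the experimental directory and run the conversion
    # script inside each subject's raw data folder
    cd $groupDir
    foreach subj ( $subjectList )
        cd $subj/raw
        python convertDICOM2AFNI.py
        cd $groupDir
    end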
In addition, after converting the raw scanner files to both AFNI and NIfTI format, the program will drive AFNI directly from the command line and let you interactively scroll through each functional dataset. For example, the first run of functional data will be displayed and a video of its timecourse will start playing; this is a good way to visually check for scanner artifacts or excessive head motion and decide what to do about them. You can then hit Enter in the terminal window that launched AFNI, and it will skip to the next functional run, and so on. The idea is to make it easy to do a first-pass analysis of your raw data and see whether anything is completely out of line.
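If you want to do this kind of drive-by inspection on your own, outside of the suite, AFNI can be driven from the shell with plugout_drive; here is a minimal sketch (the dataset name is hypothetical):

    # Start AFNI so that it accepts plugout commands
    afni -yesplugouts &

    # Switch the underlay to the first functional run and open an
    # axial image viewer with video mode running ('v' keypress),
    # which scrolls through the timecourse automatically
    plugout_drive -com 'SWITCH_UNDERLAY run1+orig'           \
                  -com 'OPEN_WINDOW A.axialimage keypress=v' \
                  -quit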
As always, if you have actually used the program, please send me feedback and let me know what you think of it.
=====================
P.S. Photos from the AFNI bootcamp are now up. I now have photographic evidence that I was at the National Institutes of Health, and not somewhere else, like Aruba.
AFNI Bootcamp / Proof that I was not in Aruba
Saturday, April 14, 2012
Ye Good Olde Days
I've uploaded my PowerPoint presentation about what I learned at the AFNI bootcamp; for the slides titled "AFNI Demo", "SUMA Demo", and so on, you will have to use your imagination.
The point of the presentation is that staying close to your data - analyzing it, looking at it, and making decisions about what to do with it - is what we are trained to do as cognitive neuroscientists (really, in any scientific discipline). The reason I find AFNI superior is that it allows the user to do this in a relatively easy way. The only roadblocks are getting acquainted with Unix and shell programming, and taking the time to get a feel for what looks normal and what looks potentially troublesome.
Back in the good old days (ca. 2007-2008) we would simply make our scripts from scratch, looking through fMRI textbooks and making judgments about what processing step should go where, and then looking up the relevant commands and options to make that step work. Something would inevitably break, and if you were like me you would spend days or weeks trying to fix it. To make matters worse, if you asked for help from an outside source (such as the message boards), nobody had any idea what you were doing.
The recent scripts with the "uber" prefix - such as uber_subject.py, uber_ttest.py, and so on - have mitigated this problem considerably, generating streamlined scripts that are more or less uniform across users, and therefore easier to compare and troubleshoot. Of course, you still need to go into the generated script and make some modifications here and there, but everything is pretty much in place. The generated script will still suggest that you check each intermediate step, but that becomes easier to ignore once you have a higher-level interface taking care of all the minor details for you. Like everything else, there are tradeoffs.
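Under the hood, uber_subject.py is a graphical front end that writes out an afni_proc.py command, which in turn generates the full processing script. The command it produces looks something like this (a sketch; the subject ID, datasets, and timing files are all hypothetical):

    # The kind of afni_proc.py command that uber_subject.py generates
    # (all file names below are hypothetical)
    afni_proc.py -subj_id s01                                        \
                 -dsets run1+orig run2+orig                          \
                 -copy_anat anat+orig                                \
                 -blocks tshift align volreg blur mask scale regress \
                 -regress_stim_times stim_times.1D                   \
                 -regress_basis 'GAM'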
Labels: AFNI, fMRI, SUMA, uber_subject.py, uber_ttest.py
Friday, April 13, 2012
Python Script: convertDICOM2AFNI.py
I've written my first (useful) Python script, a small program called convertDICOM2AFNI.py, which looks for DICOM files from our scanner and converts them to AFNI format. Converting raw scanner data to AFNI format using the to3d command is light years (i.e., minutes) faster than SPM's DICOM import or MRIcron's dcm2nii program. dcm2nii, in particular, usually gave me a bunch of extra files that I was never interested in, and would convert everything in my raw DICOM folder even though there were only a few runs I wanted to look at. It would also introduce some weird temporal interpolations into the data, modifying the header information to make it appear as though the volumes were slice-time corrected even though they actually were not.
To get around this problem, I may add an option to the script that converts to AFNI format and then to NIfTI format using 3dAFNItoNIFTI (although I have no idea what might get lost in the translation; I will need to do some testing here).
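For context, the conversion that the script wraps is essentially a to3d call, optionally followed by the NIfTI conversion; here is a sketch with made-up acquisition parameters:

    # Convert DICOMs to an AFNI dataset: 33 slices, 205 TRs,
    # TR of 2000 ms, slices alternating in the +z direction
    # (all values and the file glob here are hypothetical)
    to3d -prefix run1 -time:zt 33 205 2000 alt+z MR.*.dcm

    # Optional: convert the AFNI dataset to NIfTI format
    3dAFNItoNIFTI -prefix run1.nii run1+orig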
To use the script, do the following:
1) Create a folder containing all of the raw DICOM files generated by your experiment
2) Write down which session numbers correspond with which functional and anatomical runs (remember: pencil & paper are your friends)
3) Run the script from the raw DICOM directory using the command "python convertDICOM2AFNI.py"
It will ask you a series of questions about which session numbers are your functional runs, and which session is your anatomical run. If you run the command without defaults (i.e., without using the "-d" flag), it will also ask you for the number of slices in the z-direction, the number of TRs, and the length of each TR in milliseconds.
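In other words, the two ways of running it look like this:

    # Prompt for everything, including slices, TRs, and TR length
    python convertDICOM2AFNI.py

    # Use the defaults and skip the acquisition prompts
    python convertDICOM2AFNI.py -d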
In the future, I plan to have the program read in a defaults file that specifies all of these arguments automatically. However, that also assumes that the user is organized (which I am not). Also, as of this writing, the script assumes that the slices alternate positively in the z-direction (to3d's alt+z pattern); if this is not true for your data, you may need to change the script yourself.
The script is available for download over at my webpage (http://mypage.iu.edu/~ajahn) under "Python Scripts". The actual location of the scripts I write will probably change as I become more organized.
I plan to begin writing a suite of Python programs that I find useful for interacting with both SPM and AFNI data, eventually working up to a GUI sometime in the far future. For now, it will assume a degree of command line proficiency and that you can probably figure things out if the script does not perfectly fit your needs.
===============================
Due to high demand from the Scottish population at IU, a screencast will soon be up detailing how to interpolate volumetric data onto a template surface.
Thursday, April 5, 2012
New Webpage
I have a new webpage over on the IU servers, which contains a professional-looking picture of me and links to stuff that may be useful. Some of the resources I have posted are relatively specific to my lab, so take that into consideration before using them. I am not responsible for whatever happens if you do use them.