Monday, September 24, 2012
Respighi: Andante con Variazioni
The following is an excerpt from my journal, written during the glowing aftermath of a chamber music rehearsal. It is brief and incoherent, as though written in a daze, but that is what music does to you; it beclouds the mind, stirs the soul, and radiates joy and longing through every fiber of your being. I felt it worth sharing - one of my more sentimental moments:
Today we played through the Respighi Andante con Variazioni; I was the orchestral reduction, Ryan played the solo cello part. Unfair that I have never heard this piece before now, as I believe it to be one of the most sublime orchestrations, alongside Mendelssohn's violin concerto and Mozart's piano concerto in D minor. Those cascading arpeggios of the harp – simulated crudely by the piano in my score – are as the sounds of angels; those double, triple, quadruple stops make one's pulse quiver with anticipation; those long, broad notes held in delicate balance as the orchestra swells underneath in all its glory and all its grandeur. It is no affectation, no mannerism that my companion began to sing out during one of my interludes; the music is irresistible.
Nothing more to be said of this; I invite you to listen.
Labels:
andante con variazioni,
cello,
music,
orchestra,
respighi
Sunday, September 23, 2012
AFNI Tutorial: 3dTcat
AFNI's 3dTcat is used to concatenate datasets. For example, after performing first- and second-level analyses, you may want to join several datasets together in order to extract beta weights or parameter estimates across a range of subjects. Conversely, you may want to create a dataset that contains only a subset of the sub-briks of another dataset. This function is covered in the following AFNI video tutorial.
(N.B.: In AFNI Land, a sub-brik represents an element of an array. With runs of fMRI data, this usually means that each sub-brik is a timepoint; that is, an individual volume. When 3dTcat is used to concatenate sub-briks from multiple datasets containing beta weights, the resulting dataset is a combination of parameter estimates from different subjects, and it falls to you to keep track of which beta weight belongs to which subject. More on this at a later time.)
3dTcat is straightforward to use: Simply supply a prefix for your output dataset, as well as the range of sub-briks you wish to output. A typical 3dTcat command looks like this:
3dTcat -prefix r01_cat r01+orig'[2..$]'

This command will create a new dataset called "r01_cat", consisting of every sub-brik in r01+orig except for sub-briks 0 and 1 (recall that most AFNI commands associate 0 with the first element in an array). The '..' means "every sub-brik between these two endpoints," and the '$' sign represents the last element in the array (in this example, 205: there are 206 timepoints, and since 0 counts as the first index, the last element is the total number of timepoints minus 1).
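Since a sub-brik is just an element of an array, the effect of the '[2..$]' selector can be sketched with ordinary Python list slicing (a loose analogy for illustration only, not AFNI code):

```python
# A run of 206 timepoints, indexed 0..205, as a plain Python list.
volumes = list(range(206))

# AFNI's r01+orig'[2..$]' keeps sub-briks 2 through the last one.
kept = volumes[2:]

print(len(kept))    # 204 volumes remain after dropping sub-briks 0 and 1
print(kept[0])      # 2   (first kept sub-brik)
print(kept[-1])     # 205 (the '$' sub-brik)
```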
Other patterns can be used as well, such as selecting only certain sub-briks or selecting every other sub-brik. These examples are taken from the help of 3dTcat:
fred+orig[5] ==> use only sub-brick #5
fred+orig[5,9,17] ==> use #5, #9, and #17
fred+orig[5..8] or [5-8] ==> use #5, #6, #7, and #8
fred+orig[5..13(2)] or [5-13(2)] ==> use #5, #7, #9, #11, and #13
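To make the selector grammar concrete, here is a small illustrative Python function (hypothetical, not part of AFNI) that expands patterns like those above into index lists; '$' stands for the last sub-brik:

```python
def expand_selector(sel, last):
    """Illustrative only (not AFNI code): expand a sub-brik selector such as
    '5..13(2)', '5,9,17', or '2..$' into a list of indices.
    `last` is the index of the final sub-brik (e.g. 205 for 206 timepoints)."""
    indices = []
    for part in sel.split(","):
        if ".." in part:
            lo, _, rest = part.partition("..")
            step = 1
            if "(" in rest:                      # e.g. '13(2)': endpoint plus step
                hi, _, s = rest.partition("(")
                step = int(s.rstrip(")"))
            else:
                hi = rest
            hi = last if hi == "$" else int(hi)  # ranges are inclusive of both ends
            indices.extend(range(int(lo), hi + 1, step))
        else:
            indices.append(last if part == "$" else int(part))
    return indices

print(expand_selector("5..13(2)", 205))  # [5, 7, 9, 11, 13]
print(expand_selector("5,9,17", 205))    # [5, 9, 17]
```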
As emphasized in previous posts, you should check your data after running a command. In the video tutorial, we ran 3dTcat on a dataset with 206 volumes; the resulting dataset chopped off the first two volumes, leaving 204 volumes in the output. You can quickly check this using 3dinfo with the -nt option, e.g.:
3dinfo -nt r01_cat+orig

This command will return the number of timepoints (or sub-briks, or elements) in the dataset. This can be a useful tool when you wish to execute conditional statements based on the number of sub-briks in a dataset.
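That conditional use can be sketched as follows (a hypothetical wrapper, not an official AFNI interface; `3dinfo` itself must come from an AFNI installation, and the dataset name is just the one from this example):

```python
import subprocess

def n_timepoints(dataset):
    """Number of sub-briks in a dataset, read from `3dinfo -nt` (requires AFNI)."""
    out = subprocess.check_output(["3dinfo", "-nt", dataset], text=True)
    return int(out.split()[0])

def ready_for_extraction(nvols, expected=204):
    """Conditional gate: only proceed when the dataset has the expected length."""
    return nvols == expected

# With AFNI installed, one would write:
#   if ready_for_extraction(n_timepoints("r01_cat+orig")): ...
print(ready_for_extraction(204))  # True
print(ready_for_extraction(206))  # False
```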
More information on the evils of pre-steady-state volumes can be found here.
Labels:
3dTcat,
AFNI,
discarding,
equilibrium,
steady state
Wednesday, September 19, 2012
AFNI Tutorial: to3d
In the beginning, a young man is placed upon the scanning table as if in sacrifice. He is afraid; there are loud noises; he performs endless repetitions of a task incomprehensible. He thinks only of the coercively high amount of money he is promised in exchange for an hour of meaningless existence.
The scanner sits in silent judgment and marks off the time. The sap of life rushes to the brain, the gradients flip with terrible precision, and all is seen and all is recorded.
Such is the prologue for data collection. Sent straight into the logs of the server: Every slice, every volume, every run. All this should be marked well, as these native elements shall evolve into something far greater.
You will require three ingredients for converting raw scanner data into a basic AFNI dataset. First, the number of slices: Each volume comprises several slices, each of which measures a separate plane. Second, the number of volumes: Each run of data comprises several volumes, each of which measures a separate timepoint. Third, the repetition time: Each volume is acquired after a certain amount of time has elapsed.
Once you have assembled your materials, use to3d to convert the raw data into a BRIK/HEAD dataset. A sample command:
to3d -prefix r01 -time:zt 50 206 3000 alt+z *000006_*.dcm

This command means: "AFNI, I implore you: Label my output dataset r01; there are 50 slices per volume, 206 volumes per run, and each volume is acquired every 3000 milliseconds; slices are acquired interleaved in the z-direction; and harvest all volumes which contain the pattern 000006_ and end in dcm. Alert me when the evolution is complete."
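The three numbers handed to -time:zt are worth sanity-checking before conversion; a bit of plain Python arithmetic (illustration only, not AFNI code) shows what they imply:

```python
# The three ingredients from the to3d command above.
n_slices  = 50     # slices per volume
n_volumes = 206    # volumes per run
tr_ms     = 3000   # repetition time in milliseconds

run_seconds = n_volumes * tr_ms / 1000
print(run_seconds)           # 618.0 seconds, i.e. a little over ten minutes per run
print(n_slices * n_volumes)  # 10300 slices acquired per run
```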
More details and an interactive example can be found in the following video.
Tuesday, September 18, 2012
Disclaimers
Yesterday I was surprised to find AFNI message boards linking to my first blog post about AFNI. I felt as though the klieg lights had suddenly been turned on me, and that hordes of AFNI nerdlings would soon be funneled into this cramped corner of cyberspace. If you count yourself among their number, then welcome; I hope you enjoy this blog and find it useful.
However, there are a few disclaimers I should state up front:
- I do not work for AFNI; I am merely an enthusiastic amateur. If you post any questions either on this blog or on YouTube, I will be more than willing to answer them. However, if it is something over my head that I can't answer, then I will suggest that you try the official AFNI message board - it is policed 24/7 by the AFNI overlords, and they will hunt down and answer your questions with terrifying quickness.
- I am by no means an AFNI or fMRI expert; as far as you're concerned, I could be an SPM saboteur attempting to lead you astray. When I write about something, you should do your own research and come to your own conclusions. That being said, when I do post about certain topics I try to stick to what I know and to come clean about what I don't know. I hope you can appreciate that, being a guy, this is difficult for me.
- This blog is not just about AFNI and fMRI; it is about my brain - it is about life itself. I reserve the right to post about running, music, Nutella, Nutella accessories (including Graham-cracker spoons), books, relationship advice, and other interests. If you have a request about a certain topic, then I will be happy to consider it; however, do not expect this blog to be constrained to any one topic. Like me, it is broad. It sprawls. If you come desiring one thing and one thing only, you will be sorely disappointed; then shall you be cast into outer darkness, and there will be a wailing and gnashing of teeth.
My goal is to identify, target, and remove needless obstacles to understanding. As I have said before, the tutorials are targeted at beginners - though eventually we may work our way up to more sophisticated topics - and I try to present the essential details as clearly as possible. As you may have noticed at some point during your career, there are an elite few who have never had any trouble understanding fMRI analysis; they are disgusting people and should be avoided. For the rest of us, we may require additional tools to help with the basics; and I hope that the tutorials can help with that.
Good luck!
Sunday, September 16, 2012
Slice Timing Correction
fMRI suffers from the disease of temporal uncertainty. The BOLD response is sluggish and unreliable; cognitive processes are variable and are difficult to model; and each slice of a volume is acquired at a different time. This last symptom is addressed by slice-timing correction (STC), which attempts to shift the data acquired at each slice in order to align them at the same time point. Without it, all would be lost.
"Madness!" you cry; "How can we know what happened at a time point that was not directly measured? How can we know anything? Is what I perceive the same as what everybody else perceives?" A valid criticism, but one that has already been hunted down and crushed by temporal interpolation - the estimation of a timepoint by looking at its neighbors. "But how reliable is it? Will the timecourse not be smoothed by simply averaging the neighboring points?" Then use a higher-order interpolation, whelp, and be silent.
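The worry about smoothing is easy to see with a toy example (plain Python, not AFNI's interpolation code): estimating a timepoint of a curved signal by averaging its two neighbors biases the estimate, which is exactly why the higher-order schemes exist.

```python
# Samples of a curved "timecourse" y = t**2 taken at t = 0, 1, 2, 3.
y = [0.0, 1.0, 4.0, 9.0]

# Linear interpolation at t = 1.5 simply averages the two neighbors...
linear_estimate = (y[1] + y[2]) / 2
print(linear_estimate)   # 2.5

# ...but the true value at t = 1.5 is 1.5**2 = 2.25: averaging flattens
# the curvature and biases the estimate upward.
print(1.5 ** 2)          # 2.25
```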
The merits of STC have been debated, as well as when it should be used in the preprocessing stream. However, it is generally agreed that STC should be included in order to reduce estimation bias and increase sensitivity (Sladky et al., 2011; Calhoun et al., 2000; Henson et al., 1999), and that it should occur before volume coregistration or any other spatial interpolations of the data. For example, consider a dataset acquired at an angle from the AC/PC line (cf. Deichmann et al., 2004): If STC is performed after realigning the slices to be parallel to the AC/PC line, then the corresponding slices for each part of the brain are altered and temporal interpolation becomes meaningless; that way lies darkness and suffering.
If unnecessary interpolations offend your sensibilities, other options are available, such as incorporating temporal derivatives into your model or constructing regressors for each slice (Henson et al., 1999). However, standard STC appears to be the most straightforward and lowest-maintenance of these options.
Slice-Timing Correction in AFNI is done through 3dTshift. Supply it with the following:
- The slice you wish to align to (usually either the first, middle, or last slice);
- The sequence in which the slices are acquired (ascending, descending, sequential, interleaved, etc.);
- Preferred interpolation (the higher-order, the better, with Fourier being the Cadillac of interpolation methods); and
- Prefix for your output dataset.
Sample command:
3dTshift -tzero 0 -tpattern altplus -quintic -prefix tshift [[input dataset goes here]]
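To see why the correction matters for an interleaved sequence, here is an illustrative sketch (plain Python; slice count and TR are taken from the earlier to3d example, and the even-then-odd ordering is the usual reading of altplus, stated here as an assumption) of when each slice is acquired within a TR:

```python
n_slices, tr = 50, 3.0

# 'altplus': even-numbered slices first, then the odd ones (assumed ordering).
order = list(range(0, n_slices, 2)) + list(range(1, n_slices, 2))

# Acquisition time of each slice within the 3-second TR.
offsets = {s: i * tr / n_slices for i, s in enumerate(order)}

print(offsets[0])   # 0.0 -> slice 0 is acquired at the start of the TR
print(offsets[1])   # 1.5 -> its spatial neighbor is acquired 1.5 s later
```

Two spatially adjacent slices are thus sampled half a TR apart, which is precisely the misalignment 3dTshift interpolates away.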
More details, along with an interactive example of how STC works, can be found in the following tutorial video.
Labels:
AFNI,
Calhoun,
Deichmann,
fMRI,
FSL,
Henson,
Sladky,
slice timing correction,
temporal interpolation
Thursday, September 13, 2012
Erotic Neuroimaging Journal Titles
Yes...YES
With all this talk about sexy results these days, I think that there should be a special line of erotic neuroimaging journals dedicated to publishing only the sexiest, sultriest results. Neuroscientist pornography, if you will.
Some ideas for titles:
-Huge OFC Activations
-Blobs on Brains
-Exploring Extremely Active Regions of Interest
-Humungo Garbanzo BOLD Responses
-Deep Brain Stimulations (subtitle: Only the Hottest, Deepest Stimulations)
-Journal of where they show you those IAPS snaps, and the first one is like a picture of a couple snuggling, and you're like, oh hells yeah, here we go; and then they show you some messed-up photo of a charred corpse or a severed hand or something. The hell is wrong with these people? That stuff is gross; it's GROSS.
Think of the market for this; think of how much wider an audience we could attract if we played up the sexy side of science more. Imagine the thrill, for example, of walking into your advisor's office as he hastily tries to hide a copy of Humungo Garbanzo inside his desk drawer. Life would be fuller and more interesting; the lab atmosphere would be suffused with sexiness and tinged with erotic anticipation; the research process would be transformed into a non-stop bacchanalia. Someone needs to step up and make this happen.
Tuesday, September 11, 2012
AFNI Part 1: Introduction
As promised, we now begin our series of AFNI tutorials. These walkthroughs will be more in-depth than the FSL series, as I am more familiar with AFNI and use it for a greater number of tasks; accordingly, more advanced tools and concepts will be covered.
Using AFNI requires a solid understanding of Unix; the user should know how to write and read conditional statements and for loops, as well as know how to interpret scripts written by others. Furthermore, when confronted with a new or unfamiliar script or command, the user should be able to make an educated guess about what it does. AFNI also demands a sophisticated knowledge of fMRI preprocessing steps and statistical analysis, as AFNI allows the user more opportunity to customize his script.
A few other points about AFNI:
1) There is no release schedule. This means that there is no fixed date for the release of new versions or patches; rather, AFNI responds to user demands on an ad hoc basis. In a sense, all users are beta testers for life. The advantage is that requests are addressed quickly; I once made a feature request at an AFNI bootcamp, and the developers updated the software before I returned home the following week.
2) AFNI is almost entirely run from the command line. In order to make the process less painful, the developers have created "uber" scripts which allow the user to input experiment information through a graphical user interface and generate a preprocessing script. However, these should be treated as templates subject to further alteration.
3) AFNI has a quirky, strange, and at times shocking sense of humor. By clicking on a random hotspot on the AFNI interface, one can choose a favorite Shakespeare sonnet, read through the Declaration of Independence, generate an inspirational quote, or receive kind and thoughtful parting words. Do not let this deter you. As you become more proficient with AFNI, and as you gain greater life experience and maturity, the style of the software will become more comprehensible, even enjoyable. It is said that one knows he is going insane when what used to be nonsensical gibberish starts to take on profound meaning. So too with AFNI.
The next video will cover the to3d command and the conversion of raw volumetric data into AFNI's BRIK/HEAD format; study this alongside data conversion through mricron, as both produce a similar result and can be used to complement each other. As we progress, we will methodically work through the preprocessing stream and how to visualize the output with AFNI, with an emphasis on detecting artifacts and understanding what is being done at each step. Along the way different AFNI tools and scripts will be broken down and discussed.
At long last my children, we shall take that which is rightfully ours. We shall become as gods among fMRI researchers - wise as serpents, harmless as doves. Seek to understand AFNI with an open heart, and I will gather you unto my terrible and innumerable flesh and hasten your annihilation.
Labels:
AFNI,
fMRI,
mricron,
to3d,
tutorial,
uber_subject.py,
Unix,
walkthrough,
youtube