Model-based FMRI analysis is so hot right now. It's so hot, it could take a crap, wrap it in tin foil, put hooks on it, and sell it as earrings to the Queen of England.* It seems as though every week, I see another model-based study appear in Journal of Neuroscience, Nature Neuroscience, and Humungo Garbanzo BOLD Responses. Obviously, in order to effect our entry into such an elite club, we should understand some of the basics of what it's all about.
When people ask me what I do, I usually reply "Oh, this and that." When pressed for details, I panic and tell them that I do model-based FMRI analysis. In truth, I sit in a room across from the guy who actually does the modeling work, and then simply apply it to my data; very little of what I do requires more than the mental acumen needed to operate a stapler. However, I do have some foggy notions about how it works, so pay heed, lest you stumble and fall when pressed for details about why you do what you do, and are thereupon laughed at for a fool.
Using a model-based analysis is conceptually very similar to what we do in a basic univariate analysis with the canonical Blood Oxygenation Level Dependent (BOLD) response. In the canonical approach, we have a model of what the signal should look like in response to an event, whether instantaneous or extended over a longer period of time, usually built by convolving each event with a mathematically constructed gamma function called the hemodynamic response function (HRF). This gives us an ideal model of what the signal at each voxel should look like, and we then scale the height of each HRF up or down to optimize the fit of that ideal model to the signal actually observed in each voxel.
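To make that concrete, here is a minimal sketch of how an ideal BOLD regressor can be built: place a "stick" at each event onset and convolve it with a double-gamma HRF. The parameter values and function are illustrative (loosely SPM-style), not any particular package's implementation.

```python
# Minimal sketch of an ideal BOLD regressor (illustrative parameters,
# not any toolbox's exact implementation).
import numpy as np
from scipy.stats import gamma

def hrf(t):
    """Double-gamma HRF: early positive peak minus a later undershoot."""
    peak = gamma.pdf(t, 6)         # positive response peaking around 5 s
    undershoot = gamma.pdf(t, 16)  # slower negative undershoot
    return peak - undershoot / 6.0

tr = 2.0                              # repetition time in seconds
frame_times = np.arange(0, 200, tr)   # 100 volumes
onsets = [10, 50, 90, 130]            # event onsets in seconds

# Stick function of events, then convolution with the HRF kernel
stimulus = np.zeros_like(frame_times)
for onset in onsets:
    stimulus[int(onset / tr)] = 1.0
kernel = hrf(np.arange(0, 30, tr))
regressor = np.convolve(stimulus, kernel)[:len(frame_times)]
```

The resulting `regressor` is the "ideal model" whose height the GLM scales at each voxel to fit the observed signal.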
Model-based analyses add another layer to this by providing an estimate of how much the height of this HRF can fluctuate (or "modulate") in response to additional continuous (or "parametric") data for each trial, such as reaction time. The model provides estimates of how much the BOLD signal should vary on a trial-by-trial basis, which are then inserted into the general linear model (GLM) as parametric modulators; the BOLD response can then correlate either positively or negatively with the parametric modulator, indicating whether higher values of the modulator lead to increases or decreases in the height of the BOLD response.
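A toy version of a parametric modulator looks like this: alongside the constant-height event regressor, a second regressor carries each trial's demeaned value (here, made-up reaction times), and the GLM estimates a separate weight for each. The variable names and the noise-free "voxel" signal are my own invention for the sketch; in a real analysis both columns would also be convolved with the HRF.

```python
# Illustrative parametric-modulator sketch (invented data; HRF
# convolution omitted to keep it short).
import numpy as np

n_vols, tr = 100, 2.0
onsets = np.array([10, 50, 90, 130])      # event onsets in seconds
rts = np.array([0.45, 0.62, 0.38, 0.71])  # per-trial reaction times (s)

mod = rts - rts.mean()  # demean so the modulator is orthogonal to the main effect

main = np.zeros(n_vols)   # constant-height event regressor
param = np.zeros(n_vols)  # parametrically modulated regressor
for onset, m in zip(onsets, mod):
    idx = int(onset / tr)
    main[idx] = 1.0
    param[idx] = m        # stick height varies with the demeaned trial value

X = np.column_stack([main, param, np.ones(n_vols)])
y = 2.0 * main + 1.5 * param   # noise-free toy "voxel" signal
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
# betas[0] recovers the main effect, betas[1] the modulation weight
```

A positive estimate on the modulator column means trials with higher values (here, slower RTs) evoke taller BOLD responses; a negative estimate means the reverse.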
To illustrate this, a recent paper by Ide et al. (2013) applied a Bayesian model to a simple stop-go task, in which participants either made a response on Go trials or had to inhibit their response on Stop trials. The stop signal appeared on only a fraction of the trials, and after a variable delay, which made it difficult to predict when it would occur. The researchers used a Bayesian model to update the estimated prior probability of the stop signal occurring, as well as the probability of committing an error. Think of the model as representing what an ideal subject would do, and place yourself in that subject's shoes: after a long string of Go trials, you begin to suspect more and more that the next trial will contain a Stop signal. When you are highly certain that a Stop signal will occur but it doesn't, the model predicts greater activity on that trial, as captured by the parametric modulator it generates. These trial-by-trial modulators are then applied to each subject's data to see where they provide a good fit to the observed timecourse at each voxel.
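The "suspect more and more" intuition can be cartooned with a simple hazard-rate rule: if you believe a stop signal lands at a random position within every block of N trials, then the longer you go without one, the more likely the next trial is to contain it. To be clear, this is only a toy consistent with the intuition above; Ide et al. used a proper Bayesian belief-updating model, not this rule, and the block length here is entirely hypothetical.

```python
# Toy hazard-rate expectancy (a cartoon of rising stop-signal
# expectancy, NOT the actual model used by Ide et al.).
N = 5  # hypothetical block length: one stop signal per N trials

def stop_hazard(k):
    """P(stop on the next trial | k go trials have elapsed this block)."""
    return 1.0 / (N - k)

expectancies = [stop_hazard(k) for k in range(N)]
# expectancy climbs monotonically as go trials accumulate
```

Trial-by-trial values like these, generated by the real model, are exactly what gets entered into the GLM as parametric modulators.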
In addition to neuroimaging data, it is also useful to compare model predictions against behavioral data. Reaction time (RT), to take one example, should increase as the expectancy of a stop signal increases, since a subject with a higher subjective probability of a stop signal will take more time to avoid committing an error. Overlaying the model predictions on the behavioral data collected from subjects provides a useful validity check of the model.
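One simple form of that check is to correlate the model's trial-wise stop expectancy with observed go-trial RTs; a strongly positive correlation supports the prediction that higher expectancy slows responding. All of the numbers below are invented for illustration.

```python
# Hedged sketch of a behavioral validity check (all values invented).
import numpy as np

p_stop = np.array([0.10, 0.25, 0.40, 0.15, 0.55])  # model's trial-wise P(stop)
rts    = np.array([0.42, 0.48, 0.55, 0.44, 0.60])  # observed go-trial RTs (s)

r = np.corrcoef(p_stop, rts)[0, 1]
# a strongly positive r: the higher the stop expectancy, the slower the response
```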
Note, however, that this is a Bayesian model as applied to the mind; it's an estimation of what the experimenters think the subject is thinking during the task, given the trial history and what happens on the current trial. In this study, the significance and size of the parameter estimates are still tested using traditional null hypothesis significance testing.
*cf. Zoolander, 2001