Sunday, July 22, 2012

Bayesian Approaches to fMRI: Thoughts

Pictured: Reverend Thomas Bayes, Creator of Bayes' Theorem and Nerd Baller

This summer I have been diligently writing up my qualification exam questions, which will effect my entry into dissertation writing. As part of the qualification exam process I opted to perform a literature review on Bayesian approaches to fMRI, with a focus on spatial priors and parameter estimation at the voxel level. This necessarily included a thorough review of the background of Bayesian inference, over the course of which I gradually became converted to the view that Bayesian inference was, indeed, more useful and more sophisticated than traditional null hypothesis significance testing (NHST) techniques, and that therefore every serious scientist should adopt it as his statistical standard.

At first, I tended to regard practitioners of Bayesian inference as odd but harmless lunatics, so convinced of the superiority of their technique as to come across as almost condescending. Like all good proselytizers, Bayesian practitioners appear to be appalled by the putrid sea of misguided statistical inference in which their entire field has foundered, regarding their benighted colleagues as doomed unless injected with the appropriate Bayesian vaccine. And nowhere was this zeal more evident than in their continual attempts to sap the foundations of NHST and the assumptions on which it rests. At the time I considered the differences between the two approaches to be trivial, mostly because I had convinced myself that any overwhelmingly large effect size obtained through NHST would be essentially equivalent to a parameter estimate calculated by the Bayesian approach.

Bayesian Superciliousness Expressed through Ironic T-shirt

However, the more I wrote, the more I thought to myself that proponents of Bayesian methods might be on to something. It finally began to dawn on me that rejecting the null hypothesis in favor of an alternative hypothesis, and actually being able to say something substantive about the alternative hypothesis itself and compare it with a range of other models, are two very different things. Consider the researcher attempting to make the case that shining light in someone's eyes produces activation in the visual cortex. (Also consider the fact that doing such a study in the good old days would get you a paper into Science, and despair.) The null hypothesis is that shining light into someone's eyes produces no activation. The experiment is carried out, and a significant 1.0% signal change is observed in the visual cortex, with a confidence interval of [0.95, 1.05]. The null hypothesis is rejected, and you accept the alternative hypothesis that shining light in someone's eyes elicits greater neural activity in this area than do periods of utter and complete darkness. So far, so good.

Then, suddenly, one of these harmless Bayesian lunatics pops out of the bushes and points out that, although a parameter value has been estimated and a confidence interval calculated stating what range of values would not be rejected by a two-tailed significance test, little has been said about the credibility of your parameter estimate. Furthermore, nothing has been said at all about the credibility of the alternative hypothesis, and how much more believable it should be as compared to the null hypothesis. These words shock you so deeply that you accidentally knock over a nearby jar of Nutella, creating a delicious mess all over your desk and reminding you that you really should screw the cap back on when you are done eating.

Bayesian inference allows the researcher to do all of the above, and more. First, it has the advantage of being uninfluenced by the intentions of the experimenter, which are inherently murky and difficult to know, yet on which NHST "critical" values are based. (More on this aspect of Bayesian inference, as compared to NHST, can be found in a much more detailed post here.) Moreover, Bayesian analysis sheds light on concepts common to both approaches while pointing out the disadvantages of NHST and outlining how those deficiencies are addressed and mitigated in the Bayesian framework, whereas the converse is not true; this stems from the fact that Bayesian inference is more mathematically and conceptually coherent, providing a single posterior distribution for each parameter and model estimate without falling back on faulty, overly conservative multiple-comparison corrections which punish scientific curiosity. Lastly, Bayesian inference is more intuitive: we should expect our prior beliefs to influence our interpretation of posterior estimates, as more extraordinary claims require correspondingly extraordinary evidence.
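
To make that last point concrete, here is a minimal sketch of how a prior gets combined with data, using a simple conjugate normal-normal update in Python. The numbers are invented to roughly match the visual cortex example above, and this is not how any fMRI package actually sets up its priors:

```python
# Invented numbers: an observed effect of 1.0% signal change with a standard
# error of about 0.025 (roughly matching the confidence interval above),
# combined with a skeptical prior centered on zero effect.
prior_mean, prior_sd = 0.0, 0.5   # prior belief about percent signal change
data_mean, data_se = 1.0, 0.025   # observed estimate and its standard error

# Conjugate normal-normal update: precisions (inverse variances) add, and the
# posterior mean is a precision-weighted average of the prior and the data.
prior_prec = 1.0 / prior_sd**2
data_prec = 1.0 / data_se**2
post_prec = prior_prec + data_prec
post_mean = (prior_prec * prior_mean + data_prec * data_mean) / post_prec
post_sd = (1.0 / post_prec) ** 0.5

print(f"Posterior: {post_mean:.3f} +/- {post_sd:.3f} percent signal change")
# With data this precise, the posterior sits almost exactly on the observed
# estimate; a noisier, more extraordinary claim would be pulled further back
# toward the skeptical prior.
```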

Having listened to this rhapsody on the virtues and advantages of going Bayesian, the reader may wonder how many Bayesian tests I have ever performed on my own neuroimaging data. The answer is: None.

Why is this? First of all, a typical fMRI dataset comprises hundreds of thousands of voxels, and given current computational capacity, full Bayesian inference for a single neuroimaging session can take a prohibitively long time. Furthermore, the only fMRI analysis package I know of that allows for Markov chain Monte Carlo (MCMC) sampling at each voxel is FSL's FLAME 1+2, although this procedure can take on the order of days for a single subject, and the results usually tend to be more or less equal to what would be produced through traditional methods. Add on top of this models which combine several levels of priors and hyperparameters that mutually constrain each other, and the computational cost grows even further. One neuroimaging technique which does use Bayesian inference, in the form of priors that constrain the strength and direction of connectivity between regions - an approach known as dynamic causal modeling (DCM; Friston et al., 2003) - is relatively little used in the neuroimaging community, given the complexity of the approach (at least, outside of Friston's group). For these reasons, Bayesian inference has not gained much traction in the neuroimaging literature.
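
To give a sense of where those days of computation go, below is a toy Metropolis sampler for a single voxel's regression coefficient, written in Python with a made-up regressor and simulated data. This is purely illustrative and is not FLAME's actual algorithm; the point is only that a loop like this would have to be repeated for every one of those hundreds of thousands of voxels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-voxel example: a 200-volume time series regressed on one
# (made-up) task regressor.
n_vols = 200
x = rng.standard_normal(n_vols)        # stand-in for a convolved task regressor
true_beta, noise_sd = 1.0, 1.0
y = true_beta * x + noise_sd * rng.standard_normal(n_vols)

def log_posterior(beta, prior_sd=10.0):
    """Gaussian likelihood (noise SD assumed known for simplicity) with a
    weak zero-centered normal prior on beta."""
    log_lik = -0.5 * np.sum((y - beta * x) ** 2) / noise_sd**2
    log_prior = -0.5 * beta**2 / prior_sd**2
    return log_lik + log_prior

# Plain Metropolis sampling for this one voxel.
n_samples, step = 5000, 0.1
beta, current_lp = 0.0, log_posterior(0.0)
samples = np.empty(n_samples)
for i in range(n_samples):
    proposal = beta + step * rng.standard_normal()
    proposal_lp = log_posterior(proposal)
    if np.log(rng.uniform()) < proposal_lp - current_lp:
        beta, current_lp = proposal, proposal_lp
    samples[i] = beta

# Discard the first 1000 samples as burn-in before summarizing.
print(f"Posterior mean for this voxel: {samples[1000:].mean():.3f}")
```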

However, some statistical packages do allow for the implementation of Bayesian-esque concepts, such as mutually constraining parameter estimates through a process known as shrinkage. While some Bayesian adherents may balk at such weak-willed, namby-pamby compromises, in my experience these compromises can capture some of the intuitive appeal of Bayesian methods while allowing for far more efficient computation. One example is AFNI's 3dMEMA, which estimates the precision of each subject's parameter estimate (i.e., the inverse of that estimate's variance) and weights each subject in proportion to that precision, as sketched below. A subject with a low-variance estimate is weighted more heavily when taken to a group-level analysis, while a subject with a noisy parameter estimate is weighted less.
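
The weighting idea itself can be sketched in a few lines of Python. The numbers below are made up, and this is only a cartoon of inverse-variance weighting, not a reimplementation of 3dMEMA:

```python
import numpy as np

# Hypothetical subject-level contrast estimates and their variances
# (in practice these come from each subject's first-level output).
betas = np.array([0.8, 1.2, 1.0, 0.6, 1.4])
variances = np.array([0.05, 0.20, 0.10, 0.40, 0.08])

# Each subject is weighted by the precision (inverse variance) of his or her
# estimate, so noisy subjects contribute less to the group estimate.
weights = 1.0 / variances
group_beta = np.sum(weights * betas) / np.sum(weights)
group_se = np.sqrt(1.0 / np.sum(weights))

print(f"Precision-weighted group estimate: {group_beta:.3f} (SE {group_se:.3f})")
```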

Overall, while comprehensive Bayesian inference at the voxel level would be ideal, for right now it appears impractical. Some may take issue with this, but until computers become faster or cleverer methods allow for more efficient Bayesian inference, current approaches will likely continue to dominate.

8 comments:

  1. Who is the imposter in the picture?

    http://www.york.ac.uk/depts/maths/histstat/bayespic.htm

  2. My view is similar: a brilliant statistical method that overcomes many of today's problems with NHST, while not producing results so radically different that we have to strongly doubt its precision or redo all studies that did not apply Bayesian stats ;)

    There is one interim option, which is available in the Lipsia fMRI analysis package (you can find version 1.6 through NeuroDebian, while version 2 is currently under development and available on the MPI site itself: http://www.cbs.mpg.de/institute/software/lipsia/index.html).

    In order to gain some of the advantages of Bayesian inference without the huge increase in computational requirements and a radically different first-level analysis, they apply the Bayesian method only to the second- and third-level analyses, including the possibility of group comparisons. A short explanation is available in the manual, and a paper has been published on this method: J. Neumann and G. Lohmann, "Bayesian second-level analysis of functional magnetic resonance images," NeuroImage, 20(2), 1346-1355, 2003.

    I am looking into using this method until such a time as computers can deal with Bayes... ;)

    Replies
    1. Infinidiv,

      Wow, that is pretty incredible! I would also like to give it a spin and see what happens. Thanks for the heads up!

      Best,

      -Andy

  3. Which other method can we use instead of the Bayesian approach for parameter estimation in DCM?

    Replies
    1. Hi Malik,

      As far as I know, a Bayesian approach is the default for parameter estimation, and I do not know of any alternatives within DCM. However, I also have not played around with DCM, and so I can't give you a more informed answer; I would write to the SPM listserv to get a definitive response.

      -Andy

  4. Hi,

    I know this article was written 2 years ago, so maybe the software has changed since then. I don't know. Anyway, thank you for the article. So, at least now, in FSL, there are FLAME 1 and FLAME 1+2 regularly implemented for the second level, and at least for my data the results are "different" than for OLS (and FLAME 1+2 is different than FLAME 1).
    But anyway, my question is: since FLAME is Bayes, I wonder why I still set up my contrasts as t-tests or an ANOVA? I have no idea what I would do if I wanted not a t-test but the Bayes equivalent, since Bayes is better. Do you know, or have any tips on where I could possibly look that up?

    I also read your article on ROI analysis, and I wonder how that relates to Bayesian analysis, because with Bayes you shouldn't have the multiple comparisons problem. But in the end you are still doing t-tests and ANOVAs, so I guess the problem still exists.

    Thank you very much for your blog and your videos on youtube!

    Pia

    Replies
    1. Hi Pia,

      You're right that the Bayesian application is mainly to the estimates at each voxel for a particular regressor; however, you can somewhat get around the t-test problem by simply taking the contrasts that you want, and then extracting those contrast estimates and using a program like John Kruschke's BEST to analyze the resulting data.
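
      If it helps, here is a very rough sketch of that second step in plain Python rather than BEST itself (BEST uses a t distribution and MCMC under the hood; this toy version just puts a grid over a simple normal model, and the contrast values are made up):

```python
import numpy as np

# Hypothetical per-subject contrast estimates extracted from an ROI
# (e.g., with a tool like 3dmaskave); these numbers are made up.
contrasts = np.array([0.9, 1.3, 0.7, 1.1, 1.5, 0.8, 1.2, 1.0])

# Grid approximation of the posterior over the group mean and SD, with flat
# priors over the grid -- a bare-bones stand-in for what BEST does.
mu_grid = np.linspace(-2, 3, 500)
sigma_grid = np.linspace(0.01, 2, 400)
mu, sigma = np.meshgrid(mu_grid, sigma_grid, indexing="ij")

log_lik = (-contrasts.size * np.log(sigma)
           - 0.5 * np.sum((contrasts[None, None, :] - mu[..., None]) ** 2, axis=-1) / sigma**2)
post = np.exp(log_lik - log_lik.max())
post /= post.sum()

# Marginal posterior over the group mean (sum out sigma).
post_mu = post.sum(axis=1)
print(f"Posterior mean of the group effect: {np.sum(mu_grid * post_mu):.3f}")
print(f"P(group effect > 0) = {post_mu[mu_grid > 0].sum():.3f}")
```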

      There may be a way to extend this to multiple ROI analyses in order to mitigate the multiple comparisons problem, but a solution still eludes me. Possibly something to think about more as I drink Sleepy Time Throat Tamer tea to get through this cold I'm dealing with. Probably more information than you needed, but it's true.

      -Andy

    2. Hi Andy,
      thank you for your answer, I already bought John Kruschke's book. Get well soon!!
      Pia
